The scope of conversations in a primary care setting is massive. There are ~1,500 conditions primary care physicians typically diagnose and manage, but you’ve always got to be aware of and on the lookout for the other 10,000 or so conditions that are diagnosed and managed by specialists. To put it another way, PCPs manage the common, less complex conditions, while specialists manage the more complicated common conditions plus the long tail of diseases that require more specialized experience, testing, and procedures.
But there’s a reason why Google, when demoing their new AI-powered voice assistant, chose to schedule an appointment. An appointment has two components:
- Why (a hair cut, hair coloring, etc.)
- When (Monday at 4pm)
With two variables, it’s a sexy demo that’s hard to screw up.
But what about when there are 10,000 variables with 10,000 outcomes?
This becomes infinitely complex.
Or what about when an answer is a spectrum of possibilities rather than a discrete answer?
Because that’s what happens often in primary care. There are countless times where even an experienced, talented doctor asks all the right questions and there really is no definitive diagnosis. It’s a bet you have with yourself. There’s an 80% chance it’s x, a 19% chance it’s y, and god help us if it’s that 1% chance it’s z. Asking more questions won’t solve this riddle. The only thing you as a doctor can do is treat what your gut says, wait it out, and see how things evolve. And this is when relationships truly matter. You don’t want a new doctor every time you visit an exam room.
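The “bet” described above is essentially a probability distribution over a differential diagnosis. A minimal sketch of that idea in code (condition names, probabilities, and the threshold are invented for illustration, not clinical guidance or Sherpaa’s actual logic):

```python
# The 80/19/1 bet, modeled as a probability distribution over a
# differential diagnosis. All names and numbers are hypothetical.
differential = {
    "viral bronchitis": 0.80,    # x: most likely, treat and wait
    "asthma flare": 0.19,        # y: plausible, watch for it
    "pulmonary embolism": 0.01,  # z: rare but dangerous
}

DANGEROUS = {"pulmonary embolism"}  # can't be ignored even at low odds

def working_diagnosis(diff, danger_threshold=0.005):
    """Return the most likely condition, plus any rare-but-serious
    conditions whose probability is too high to dismiss."""
    best = max(diff, key=diff.get)
    red_flags = [c for c, p in diff.items()
                 if c in DANGEROUS and p >= danger_threshold]
    return best, red_flags

best, flags = working_diagnosis(differential)
print(best)   # "viral bronchitis"
print(flags)  # ["pulmonary embolism"]
```

The point of the sketch: the output isn’t one answer, it’s a best guess plus the tail risk you keep watching for, which is exactly what makes this hard to compress into a single machine-generated verdict.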
Or maybe the correct answer is one answer today, and another answer tomorrow?
There are common situations in primary care where you see a person with a set of complaints and findings who doesn’t meet criteria for a certain diagnosis today but, literally, will meet them tomorrow. This means diagnoses are very much moving targets. And this is something lay people don’t understand. Sometimes they feel like they were misdiagnosed when they saw a doctor on Monday for a cough, but then on Thursday they see another doctor who diagnoses pneumonia.
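The Monday/Thursday example above can be sketched in a few lines. This is a toy illustration with invented criteria, not real diagnostic logic; the point is that the same rule applied to the same patient returns a different answer on different days, because the findings evolve:

```python
# Toy stand-in for diagnostic criteria (invented, not clinical guidance):
# "pneumonia" requires both fever and focal lung findings; a cough alone
# does not meet criteria.
def diagnose(findings):
    if "fever" in findings and "focal crackles" in findings:
        return "pneumonia"
    return "viral cough"

monday = {"cough"}
thursday = {"cough", "fever", "focal crackles"}

print(diagnose(monday))    # "viral cough"  -- Monday's doctor was not wrong
print(diagnose(thursday))  # "pneumonia"    -- Thursday's doctor isn't smarter
```

Neither doctor erred; the disease moved. Any AI answering the question has to answer it *at a point in time*.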
The concept of time is a critical component of a diagnosis because we don’t really have diagnoses, we have stories.
A health condition is a story that plays out over time. Today’s cold is tomorrow’s pneumonia. And that’s why AI in general primary care is such a massively complex thing to tackle. It’s not only the answer to the question, it’s the answer to the question yesterday vs. the answer to the question today. Humans don’t really have a challenge understanding this concept, but machines currently do. And if Google isn’t yet ready to tackle this problem with their massive amounts of cash and almost infinite data, it’s perplexing to think a little startup with $200M of VC money will be able to solve it, without malpractice, in the very short time frame necessary to return 10x on that investment.
So how should AI be used in primary care for the foreseeable future?
The most realistic and responsible thing to do is massively limit a patient’s options when taking their history. For example, Lemonaid limits a patient’s options to 13 conditions (and, thank god, doesn’t claim to be AI), whereas Babylon says “Ask Babylon” about anything and Babylon’s super smart AI will diagnose anything! This sets the $200M Babylon Health up for massive failure: it overpromises and significantly under-delivers, putting every single patient user at risk.
Over the last 7 years, Sherpaa’s doctors have built ~300 standardized question sets, one for each of the top 300 most common primary care symptoms, to ask each patient who uses Sherpaa. There’s no AI involved at all. It’s just a set of ~20-25 questions around, say, “Cough,” optimized for readability and designed to cover all bases, rule out anything rare and serious, and help our real human doctors understand which rabbit hole to go down. With every standardized question set Sherpaa uses, we always ask one final question:
“What do you think you have?”
We do this because we know people have been googling and we want to understand their mindset and how best to talk with them about what we think they have. And I would guess the patient is right more than 90% of the time. So…if you want to try to tackle anything like Babylon, start backwards. Tell me what you think you have…and my AI will disprove you. At least you’re starting with the likelihood that the patient will be 90% right. And, of course, this also proves that patients are the most underutilized resource in healthcare. They’re a lot smarter than most doctors think.
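A hedged sketch of this “start backwards” idea (the question wording and structure are invented; the source only says each set has ~20-25 questions and ends with the same final one): a fixed question set whose last answer, the patient’s own guess, becomes the starting hypothesis a human doctor then works to confirm or disprove.

```python
# Hypothetical question set for "Cough" -- not Sherpaa's real questions.
COUGH_QUESTIONS = [
    "How many days have you had the cough?",
    "Are you coughing anything up? If so, what color?",
    "Any fever, shortness of breath, or chest pain?",
    # ...roughly 20 more questions would go here...
    "What do you think you have?",  # the final question every set shares
]

def intake(question_set, answers):
    """Pair questions with free-text answers for a human doctor to
    review. The patient's own guess is pulled out as the starting
    hypothesis -- no automated diagnosis happens here."""
    record = dict(zip(question_set, answers))
    hypothesis = record.get("What do you think you have?")
    return record, hypothesis

record, hypothesis = intake(
    COUGH_QUESTIONS, ["3 days", "No", "No", "Bronchitis"]
)
print(hypothesis)  # "Bronchitis"
```

Note the design choice: the structure narrows the problem (one fixed checklist per symptom) instead of pretending to answer anything, which is the essay’s whole argument.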