I’ve been a startup guy for 11 years this month. And over the years, I’ve heard cries of “show me the evidence!” and “that’ll never work!” on an almost daily basis. They come from randos in comments on LinkedIn, doctors in comments on WSJ articles, or potential employer customers looking for something to help them mitigate their increasing healthcare costs.
It goes something like this:
“It’s nice that you’re in the WSJ, but show me the evidence that this new model of primary care has increased patient satisfaction and clinical outcomes.”
So I’ve thought a lot about this over the years. Do I partner with academic folks in the space and publish a study in JAMA comparing new models of healthcare delivery to the traditional model? Do I produce an internal case study and seed it out to the tech/health press to get them to regurgitate it? What do I do to prove these ideas are better than the status quo?
First, what are the scientific limitations of studying innovations?
It takes a ton of money and time to produce a gold-standard scientific study. The Framingham Heart Study has run for decades and cost a few billion dollars, yet it has produced almost zero statistically significant findings on the relationship between diet and health. Science always has a control group and an intervention group. The more moving parts an intervention has, the harder it is to isolate what causes the effect. John Ioannidis, in one of my favorite articles about science ever, "Lies, Damn Lies, and Medical Science," sums this up nicely:
But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you. Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.
Scientific studies that isolate a single nutrient face a challenge: external factors interact, network-like, with our own networked bodies.
So, for a new model of primary care, what is the equivalent of that nutrient? If it's as big as "exam room office-based care" vs. "asynchronous, online care," combined with the infinitely large variety of personalities, lifestyles, and health statuses found in the population of patients in the study, would anyone believe the scientific validity of the findings? How would you even go about recruiting people to participate in the study in sufficient numbers? We all know startups have challenges getting off the ground. It's not like startups can say "Just enroll 100,000 people in this here study and let's get crackin' on a well-designed, sufficiently powered five-year study!"
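To make the recruitment problem concrete, here's a back-of-the-envelope sketch of how many patients such a study would need. It uses the standard normal-approximation formula for comparing two proportions; the satisfaction rates below are hypothetical, purely for illustration.

```python
# Rough sample-size estimate per study arm for detecting a difference
# between two proportions (e.g., patient satisfaction rates), using the
# normal approximation. Effect sizes here are made up for illustration.
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate subjects needed per arm to detect p1 vs. p2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1

# Detecting a 2-point bump in satisfaction (70% -> 72%) takes roughly
# 8,000 patients per arm -- before accounting for dropout over five years.
print(n_per_arm(0.70, 0.72))
```

Small, realistic effect sizes demand thousands of participants per arm, which is exactly the recruitment burden a young startup can't carry.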
Second, the data will be invariably biased due to incentives.
Most startups are founded by entrepreneurs with passion and vision who are willing to sacrifice money, time, and relationships to scale their idea. What are the chances they’ll let equivocal or negative findings from a year-long study get in the way of that dream? Again, from my favorite article:
“Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.”
So you’ve got an entrepreneur with dreams funded by investors with demands for speedy scale. There’s a well-established principle at play here:
“You can have good, fast, and cheap. But you can only pick two.”
Startups raise rounds that give them 18-24 months of runway in the bank. This is the standard venture capital model. During this time, startups are expected to grow in users/clients/revenue and, if they are in healthcare, prove that their innovation is either better than the status quo or does what it says it's going to do.
A healthcare startup under investor pressure is forced to choose "fast." Then, they can either choose "good" or "cheap." If they choose "good," it's going to be a hugely expensive build. If they choose "cheap" to build their company, quality will suffer, and poor quality in healthcare cannot be tolerated. Of course, the ideal is having all three, and new tools that help startups standardize their operations are enabling this, which is a wonderful thing.
Can a well-designed, sufficiently powered study be completed in this time frame, in an environment with strong incentives from both founders and investors to scale or die?
Here’s the answer: Every single funded healthcare startup to ever exist publishes data that shows their innovation has positive effects. This is an axiom. Because they can’t not.
So that leaves all entrepreneurs and investors in a pickle. Can any study from any startup be trusted? No.
So, should we still be producing these studies? It’s not gonna stop.
So what can we as entrepreneurs do about it?
First, tell your story to the industry, constantly.
Write every day. Produce videos. Do whatever it takes to give people insight into the details of every single piece of your innovation. Publish data on your startup's blog and be honest. Publish positive data. Publish data that shows you were wrong about some component of your innovation. You're not always going to be right. An assumption you make will be wrong. Highlight those and show how you got to a place that was right. Because that's the point of science: to get to the truth. And it's ok to be wrong as long as those insights lead you to what's right.

Prior to easy self-publishing, all we had were paper-based "peer-reviewed" journals (and with the rise of pay-to-play online "journals" and ad-driven content-producing "journals," even these old-school journals are facing credibility issues). Now we can let readers in on the everyday process of iteration. With the right storytelling and raw honesty, startups can do what they do best (build something better over time) and stop fooling themselves that traditional scientific studies in journals are the gold standard for proof. Nowadays, it's pretty clear they're not.
Second, create a design-driven iterative experience and brand customers love.
Get users who rave about your service/product. Help them tell their story. And tell your story to them about who you are, what you do, and how you're constantly iterating and improving, backed by open, constantly published data that the industry geeks out about.
And finally, turn the question back on the questioner.
If a skeptic says "Show me the evidence that your new model of primary care produces better patient satisfaction and better clinical outcomes," ask them to produce the data showing that today's patients are highly satisfied with today's primary care experience and that today's office-based primary care delivers exceptional clinical outcomes. Because you've got to compare your innovation to something they consider the gold standard. I guarantee you, skeptics will be hard-pressed to find that data. This skepticism actually comes from a dark, fearful place, best described by Dave Eggers:
“Do not be critics, you people, I beg you. I was a critic and I wish I could take it all back because it came from a smelly and ignorant place in me, and spoke with a voice that was all rage and envy. Do not dismiss a book until you have written one, and do not dismiss a movie until you have made one, and do not dismiss a person until you have met them. It is a fuckload of work to be open-minded and generous and understanding and forgiving and accepting, but Christ, that is what matters. What matters is saying yes.”
Because healthcare got to where it is today partly because of reason and science, but more importantly, because of tradition and business models. American healthcare took many wrong turns decades ago. It's deeply flawed. Doctors do things not because there's irrefutable proof it's scientifically the right thing to do, but because what they do is what's always been done and it makes a ton of money:
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
— Upton Sinclair
But now, with today's tools and insights from more innovative industries, we have such a huge opportunity to right the wrongs of American healthcare.
Comparing an innovation born of today's tech-enabled solutions to the status quo is comparing apples and oranges. When online, data-driven primary care competes with waiting rooms, paper, EMRs born in the 1980s, and fax machines, there's just no meaningful comparison. More importantly, why even try with ancient, flawed methods?
What can decision-makers do about this problem?
They can employ the “does this new innovation make rational sense” test. Here’s how this goes:
First, ask yourself: does the innovation you're assessing disrupt something that doesn't make rational sense?
For example, does talking with a doctor online to see if you can be treated online before you go to an urgent care center make rational sense? Yes, it does. Does it make rational sense that doctors can bill for as much as they can do, rather than bill either a flat, transparent rate or the minimal amount of intervention that offers the best evidence-based outcomes at the lowest cost for you? No, that billing practice doesn’t make rational sense. It seems unfair. Ok, this passed the first step of the “rational sense” test.
Second, ask yourself: does the innovation pose significant downside risk?
For example, does talking with a doctor online to see if you can be treated online before you go to an urgent care center pose a significant risk to the patient? No, it doesn’t. Nurse triage lines have existed for decades and they are extremely safe.
And, finally, ask yourself: does the status quo carry risks that we just accept because it's the status quo?
Does the fact that people avoid care because they're afraid it could cost them $5,000 instead of $50 have health risks? Yes, it's well established that avoiding necessary care in legitimate times of need is harmful. Does the fact that someone can't get hold of their doctor for days, or can't get an appointment for 3 weeks, have risk? Yes. Just as with avoiding care due to fear of the cost, inaccessible care causes health issues to worsen and therefore cost more to treat.
Am I suggesting abandoning the scientific method? Hell no. But I am suggesting that we update the scientific method and be far more open with our raw data. For example, if we just discovered the scientific method last year and it too was born with today’s tools, what would it look like?
This, of course, speaks to the question of "how do we prevent another Theranos?" If a solution is a point solution with very high downside risk, like a yes/no blood test that determines life or death, startups should publish the raw data from their results in real time and in an open-source way. They should summarize the results, show what they did to iterate, and then comment on the results, showing that their iteration had a positive effect on the raw data. It's a real-time, open-source, steady stream of scientifically policed data. Will this happen? Probably not. But it's the right thing to do, especially when lives are at stake. This ain't Snapchat. Entrepreneurs and investors shouldn't get to treat healthcare like it is.