Creating an Infrastructure of Health Data to Support Amazon’s Leap into Healthcare

By CLAUDIA WILLIAMS

Amazon has transformed the way we read books, shop online, host websites, do cloud computing, and watch TV. Can the company apply those successes to healthcare?

Just last week, Amazon announced Comprehend Medical, machine learning software that extracts and structures clinical information from unstructured medical records. “The process of developing clinical trials and connecting them with the right patients requires research teams to sift through and label mountains of unstructured clinical record data,” Fred Hutchinson CIO Matthew Trunnell is quoted as saying in a MedCity News article. “Amazon Comprehend Medical will reduce this time burden from hours to seconds. This is a vital step toward getting researchers rapid access to the information they need when they need it so they can find actionable insights to advance life-saving therapies for patients.”

Deriving insights from data and making those insights available in a user-friendly way to patients and clinicians is just what we need from technology innovators. But these tools are useless without data. If an oncology patient is hospitalized, her provider may not be informed of her hospitalization for days or even weeks (or ever). And the situation repeats when that same patient receives care from cardiologists, endocrinologists, and other providers outside of her oncology clinic. When it comes to personalized health and medicine, both the quantity and quality of data matter. Providers need access to comprehensive patient health data so they can accurately and efficiently diagnose and treat patients and make use of technology that helps them identify “actionable insights.”

Kate Sheridan wrote an article for STAT recently, saying, “Machine learning algorithms only work if they have data — lots and lots of data. The conclusions they draw about what might happen to a particular person will generally only work if an algorithm has been trained with a bunch of records of people who have some similar characteristics. And especially in the traditional ‘doctor’s office,’ companies that want to work on AI and machine learning must somehow pull in information from a plethora of EMRs.”

To drive better decisions, data needs to come from everywhere a patient is being seen, not from a single facility or system, and then flow back out to the whole community. Value-based care requires healthcare organizations to share data, not just to get better at analyzing their own. Would Elon Musk’s self-driving cars be valuable if there were no interstate freeways to drive them on? Can a supercomputer do its work without an electrical grid?

How do we create this infrastructure of health data? I believe a nonprofit utility approach that releases data from silos and allows providers, hospitals, health plans, and health systems to access and use this information is the answer. The more organizations that participate in health data exchange, the more effective the network will be. The goal is for regional hospitals and small medical practices to be able to personalize their care, even something as simple as reminding patients to come in for a follow-up visit after being hospitalized.

It’s expensive and time-consuming to pull information “from a plethora of EMRs.” Sharing the cost and burden of that work and ensuring that data is available in a standardized way to everyone who needs it is the right path forward. The future for health data is bright — as long as we can build a reliable resource for health data so this kind of innovation can happen not just at Amazon but everywhere.

Claudia Williams serves as CEO of Manifest MedEx, California’s largest nonprofit health data network delivering real-time information to help healthcare providers care for millions of patients. Previously the senior advisor for health technology and innovation at the White House, Claudia helped lead President Obama’s Precision Medicine Initiative.

from THCB http://bit.ly/2VjcynX

Where is Relationship, Authority and Trust in Healthcare Today?

By HANS DUVEFELT, MD

Healthcare is on a different trajectory from most other businesses today. It’s a little hard to understand why.

In business, mass market products and services have always competed on price or perceived quality. Think Walmart or Mercedes-Benz, even the Model T Ford. But the real money and the real excitement in business is moving away from price and measurable cookie cutter quality to the intangibles of authority, influence and trust. This, in a way, is a move back in time to preindustrial values.

In primary care, unbeknownst to many pundits and administrators and unthinkable for most of the health tech industry, price and quality are not really even realistic considerations. In fact, they are largely unknown and unknowable.

The real price in primary care isn’t just the cost of each doctor visit. It is the cost of the total number of visits needed to solve a problem, and also the cost of the various tests, procedures and treatments each primary care doctor orders when solving that problem or managing a particular condition. This can vary enormously.

In Accountable Care Organizations, actual costs are compared to presumed or projected costs, which are based on Hierarchical Condition Categories, or HCCs (see my post), which aren’t well known or commonly used by primary care doctors. To a degree, you can game this baseline cost calculation by mastering HCCs (Medicare Advantage plans’ financial well-being hinges on making the most of this; this is why they offer doctors $150 to sign off on a list of each patient’s known or suspected expensive diagnoses).

Quality in healthcare is largely in the eye of the beholder. I’ve said it before and I’ll say it again here: A patient population’s immunization rates or aspirin use or non-use (depending on shifts in knowledge) are not comprehensive measures of quality. Accuracy of diagnosis, if anything, is. But who is measuring that? You might say “those who can’t practice medicine measure it”. That’s why most quality measures these days are of things you don’t need a medical degree or license to accomplish.

Primary care, in the eyes of our patients, is instead about relationship, authority, trust and (gasp) convenience. This is what people in most other businesses talk about all the time. It is what even tech and medicine pundits, EMR companies and many other middlemen want for themselves. They don’t want to be evaluated on the basis of price or quality standards set by others. Yet they want mass market medicine for the masses, not relationship based care.

Driving 200 miles between my two clinics, I often listen to audiobooks. Once I finished my Board Review, I turned to business books. “Influence”, “Authority”, “Brand”, “Story” and “Content” have replaced “Quality”, “Six Sigma” and “Excellence”. In business now, it is all about standing out and setting your own standards. It is about building relationships with and listening to consumers.

In healthcare, I see the paradox that insurers are now reaching out to patients to check up on them while at the same time making doctors work so hard and so fast producing “encounters” that there is less and less time for us to talk with our patients when we are with them, never mind on the phone between visits. Do they really think patients wouldn’t rather have their own doctors with enough breathing room to talk to them than get check-in calls from out-of-state strangers they have never met?

We have data that the doctor-patient relationship influences outcomes. From hospitalization rates to prescription adherence to effectiveness of treatments for mental health diagnoses, it is well known that the doctor is a large part of the treatment.

Doctors have increasingly become part of multicenter systems that, in spite of efforts like Patient Centered Medical Home recognition, simply have become too large and impersonal to foster the kind of customer relationships the business world is now realizing are necessary.

Between the bottom-line objectives of such healthcare organizations and the bureaucracies of health insurers, doctors and patients are clearly not in complete charge of their own relationships anymore.

So what happens with those relationship dependent outcomes when so many doctors feel like lineworkers, rather than professionals? What happens to their ability to nurture those relationships, gain that authority and earn that trust?

What happens if they lose it altogether?

There are modern, big companies who listen to their customers, even research and anticipate their customers’ needs. There are companies that empower their employees to solve customer problems, give refunds and do extras. There are companies who treat employees like owners or even offer them actual ownership.

Healthcare could do some more of that.

But there is more, lest we forget: Doctors aren’t just employees.

Who has the license to practice medicine? Who places the needle or scalpel? Who selects the medication? Who says “I’m sorry, we did everything we could” or “Congratulations, it’s a beautiful baby girl”?

Salespeople, YouTube stars and business leaders give a lot of thought to their customer relationships, their personal authority and the essentials of building and maintaining trust.

Are we doctors doing enough of that? Those things are ours to claim, and to strive for. Even if a big corporation issues our paycheck.

Hans Duvefelt is a Swedish-born rural Family Physician in Maine. This post originally appeared on his blog, A Country Doctor Writes, here

from THCB http://bit.ly/2AlJVhg

Is CareMore Health’s Population Health Management Model Disruptive?

By REBECCA FOGG

Fueled by Americans’ urgent need for better chronic disease care and insurers’ march from fee-for-service to value-based payments, innovation in population health management is accelerating across the health care industry. But it’s hardly new, and CareMore Health, a recent acquisition of publicly-traded insurer Anthem, has been on the vanguard of the trend for over twenty years.

CareMore Health provides coordinated, interdisciplinary care to high-need patients referred by primary care physicians in nine states and Washington, D.C. The care encompasses individualized prevention and chronic disease management services and coaching, provided on an outpatient basis at CareMore’s Care Clinics. It also includes oversight of episodic acute care, via CareMore “extensivists” and case managers who ensure effective coordination across providers and care sites before, during and after patient hospitalizations.

The majority of CareMore patients are covered by Medicare Advantage or Medicaid, and company-reported results, as well as a Commonwealth Fund analysis, indicate that the patient-centered, relationship-based model leads to fewer emergency room visits, specialist visits and hospitalizations for segments of the covered population. They also suggest that it leads to cost efficiencies relative to comparable plans in its markets of operation.

Clearly, CareMore is an innovator worth watching. But does its offering have the potential to disrupt America’s traditional, episodic, acute care delivery model? We put it to the test with six questions for identifying a Disruptive Innovation.

  1. Does it target people whose only alternative is to buy nothing at all (nonconsumers) or who are overserved by existing offerings in the market?
    Yes. Disruptive Innovations often initially take root among nonconsumers or those who are overserved—paying for more functionality in a product or service than they want or need. CareMore is targeting nonconsumers—patients with complex, long-term care needs that traditional primary care providers usually lack the time, money and/or expertise to address.
  2. Is the offering not as good as existing offerings as judged by historical measures of performance?
    No; results indicate that it is actually better care for the targeted segment of patients. However, this doesn’t necessarily mean that the care model isn’t disruptive, as not all disruptive strategies improve affordability and convenience at the expense of performance according to traditional standards.
  3. Is the innovation simpler to use, more convenient, or more affordable than existing offerings?
    CareMore’s Care Clinics are conveniently located in neighborhoods where their patients are concentrated, and providers help patients overcome barriers to health ranging from individual behavior to social needs. And as referenced above, the company is apparently able to deliver services in a more cost-effective manner vs. relevant benchmarks.
  4. Does the offering have a technology that enables it to improve and move upmarket?
    For a Disruptive Innovation to transform an industry, it must be able to grow, and profitably. This means it ultimately needs to win over customers who used to be perfectly satisfied with existing solutions. CareMore leverages various technology solutions to improve care, and facilitate care of an expanding patient population, potentially including such customers. These include dashboards synthesizing data from numerous sources to yield unique insights into care improvement opportunities, and applications to facilitate collaboration across providers and care sites.
  5. Is the technology paired with an innovative business model that allows it to be sustainable?
    Yes, with caveats. CareMore’s innovative care model, and its profit formula dependent on Medicare Advantage and Medicaid managed care programs, work together to promote sustainability. That’s because those programs are designed to reward precisely the kind of results that the CareMore model is designed to deliver. However, there’s no guarantee that current incentives will remain in place, and material changes could impact CareMore’s sustainability.
  6. Are existing providers motivated to ignore the new innovation and not feel threatened by it at the outset?
    It depends. The many existing providers still rooted in traditional care models and fee-for-service profit formulas might ignore CareMore’s innovative model, given it targets consumers they have not historically been able to serve effectively and/or profitably. Still, many other existing providers are actively seeking better ways to address chronic disease, and such players are likely not only to feel threatened, but to fight aggressively to protect their own prospects. This presents a challenge for CareMore in fulfilling its disruptive potential, but not a necessarily insurmountable one.

Verdict: Based on strict application of the Theory of Disruptive Innovation and what we know about CareMore from public information, the company has strong potential to become a disruptive force in the markets it serves. But stay tuned: innovation in this population health space is accelerating, and so is competition. While that should be a win for consumers in any case, it means the title of ultimate disruptor is still up for grabs.

Rebecca Fogg is a senior research fellow at the Clayton Christensen Institute, where she studies business model innovation in health care delivery, including new approaches to population health management and person-centered care.

from THCB http://bit.ly/2QPNM0h

Statistical Certainty: Less is More

By ANISH KOKA MD 

The day after NBC releases a story on a ‘ground-breaking’ observational study demonstrating that caramel macchiatos reduce the risk of death, everyone expects physicians to be experts on the subject. The truth is that most of us hope John Mandrola has written a smart blog on the topic so we know intelligent things to tell patients and family members.

A minority of physicians actually read the original study, and of those who do, even fewer have any real idea of the statistical ingredients that went into it. Imagine not knowing whether the sausage you just ate contained rat droppings. At least there is some hope the tongue may provide some objective measure of the horror within.

Data that emerge from statistical black boxes typically have no neutral arbiter of truth. The process is designed to reveal, from complex data sets, that which cannot be readily seen. The crisis this creates is self-evident: with no objective way of recognizing reality, it is not just possible but inevitable that illusions will proliferate.

This tension has always defined scientific progress over the centuries as new theories overturned old ones. The difference more recently is that modern scientific methodology believes it possible to trade in theories for certainty. The path to certainty was paved by the simple p value. No matter the question asked, no matter how complex the data set, observational or randomized, a p value < .05 means truth.

But even a poor student of epistemology recognizes that all may not be well in Denmark with regard to the pursuit of truth in this manner. Is a p value of .06 really something utterly different from a p value of .05? Are researchers bending to the pressures of academic advancement or financial inducements to consciously or unconsciously design trials that give us p values < .05?
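
To make the arbitrariness of that threshold concrete, here is a minimal sketch using invented summary statistics (no real trial is described): two hypothetical trials with nearly identical effect estimates and overlapping confidence intervals land on opposite sides of p = 0.05.

```python
# Illustrative numbers only: two hypothetical two-arm trials, n = 200 per arm, SD = 1.0.
import math
from scipy import stats

n, sd = 200, 1.0
se = sd * math.sqrt(2 / n)                # standard error of the difference in means

for name, diff in [("Trial A", 0.20), ("Trial B", 0.19)]:
    z = diff / se
    p = 2 * stats.norm.sf(z)              # two-sided p value
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    print(f"{name}: difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f}), p = {p:.3f}")

# Trial A is "significant" (p ~ 0.046) and Trial B is not (p ~ 0.057),
# yet the two estimates are, for any practical purpose, identical.
```

Nothing about the underlying biology changes between 0.046 and 0.057; only the label does.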

The slow realization that the system may not be working comes from efforts to replicate studies. Methodologist guru Brian Nosek convinced 270 of his psychology colleagues in 2015 to attempt to replicate 100 previously published studies. Only 36% of the studies gave the same result as the original. Imagine the consternation if an apple detaching from a tree only fell to the ground 36% of the time.

Why this is happening is a fascinating question, and it forms the subject of Nosek’s most recently published paper, which focuses on the statistical black box that data is fed into.

Twenty-nine statistical teams, recruited via Twitter, were given the same complex dataset and tasked with finding out whether football players’ skin tone had anything to do with referees awarding red cards. The goal was to put the statistical methods themselves to the test: if you give the same question and the same data to 29 different teams, does the analysis produce the same answer?

In the forest plot summarizing the findings, the results of the 29 teams do not, at first glance, appear to be remarkably different. The majority of teams get the same qualitative answer by being on the ‘right’ side of the magical p of 0.05 threshold, though I imagine the vast majority of consumers of medical evidence would be surprised to find that, depending on the statistical model employed, the likelihood of the sky being blue is ~70%. More discriminating readers will ignore the artificial cliff dividing blue from not blue to point out the wide overlap in confidence intervals, which suggests the same basic answer was arrived at with minimal beating around the bush.

But a review of the meticulous steps taken by the project managers of the study demonstrates that the convergence of the results was something of an engineered phenomenon. After collection of the data set and dissemination of the data to the statistical teams, the initial approaches the teams took were shared among the group. Each team then received feedback on its statistical approach and had the opportunity to adjust its analytic strategy. Feedback incorporated, the teams ran the data through their selected strategies, and the results produced were again shared among all the teams.

The idea of the various steps taken, of course, was not to purposefully fashion similar outputs for the trial, but to simulate a statistically rigorous peer review that I’m told is rare for most journals. Despite all the feedback, collaboration and discussion, the 29 teams ended up using 21 unique combinations of covariates. Apparently statisticians choosing analytic methods are more Queer Eye for the Straight Guy, less HAL. Sometimes the black pants go with that sequin top, other nights only the feather boa completes the outfit.
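
For readers who want to see what that flexibility looks like, here is a minimal sketch on synthetic data (the variables, effect sizes and model choices are invented, not taken from the red-card study): the same exposure, run through three defensible covariate choices, returns three different odds ratios.

```python
# Synthetic data only: one question ("does x1 predict the outcome?"), three model specifications.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)                      # exposure of interest
x2 = 0.6 * x1 + rng.normal(size=n)           # covariate correlated with the exposure
x3 = rng.normal(size=n)                      # irrelevant covariate
true_logit = -1.0 + 0.15 * x1 + 0.30 * x2    # the outcome truly depends on x1 and x2
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))
df = pd.DataFrame({"y": y, "x1": x1, "x2": x2, "x3": x3})

specs = {"x1 only": ["x1"], "x1 + x2": ["x1", "x2"], "x1 + x2 + x3": ["x1", "x2", "x3"]}
for label, cols in specs.items():
    fit = sm.Logit(df["y"], sm.add_constant(df[cols])).fit(disp=0)
    print(f"{label}: OR for x1 = {np.exp(fit.params['x1']):.2f}, p = {fit.pvalues['x1']:.3f}")

# The unadjusted model absorbs part of x2's effect into x1; the adjusted models do not.
# Each specification is defensible, and each gives a different answer to the same question.
```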

The findings were boring to most statisticians, but titillating to most clinicians. The statistical criticism is a little unfair. It is certainly true that the problem of analysis-contingent results isn’t completely novel. Simonsohn et al. use the phrase p-hacking to describe unethical researchers throwing line after line into a dataset to find statistically significant associations.

Gelman and Loken argue this is a simplistic frame that describes a minority of researchers. What they believe to be far more common and concerning are researchers who embark on projects with strong pre-existing biases and consciously or unconsciously choose analytic paths that end up confirming those biases. This problem has been attractively described as the garden of forking paths.

The current project fits into neither of these buckets. The researchers had no incentive to get a statistically significant result because publishing wasn’t dependent on getting a p < .05. And this particular data set had a limited number of forking paths to traverse because the question asked of the data set was specific – red cards and skin tone. The teams couldn’t choose to look at the interaction of yellow cards and GDP of player home countries, for instance. And perhaps most importantly, the teams were not particularly motivated to arrive at any particular answer, as confirmed by a survey completed at the start of the trial.

Implications of this study loom especially large for healthcare, where policy making has so far been the province of enlightened academics who believe a centrally managed, well-functioning technocracy is the best way to manage the health needs of the nation.

The only problem is that the technocrats have so far excelled mostly at failing spectacularly. Public reporting of cardiovascular outcomes was supposed to penalize poor performers, and reward those that excelled. Instead, it resulted in risk aversion by physicians which meant fewer chances for the sickest patients who most needed help. The Hospital Readmission Reduction Program (HRRP)  was supposed to focus the health system on preventable readmissions. The health system responded by decreasing readmissions at the expense of higher mortality.

One of the problems with most health policy research – highlighted in a recent NEJM perspective – is that it largely rests on analyses of observational data sets of questionable quality.  What isn’t mentioned is that the conclusions made about policy can depend on who you ask.

This won’t surprise Andrew Gelman or Brian Nosek, but the health policy researchers responsible for devising the HRRP publish repeatedly in support of their stance that reduced admissions as a consequence of the program are not correlated with higher heart failure mortality, while cardiologists who take care of heart failure patients produce data tracing heart failure mortality to the initiation of the HRRP. Whom to believe?

In their NEJM perspective, Bhatt and Wadhera don’t mention this divide, but they do call for better research that will move the health care landscape from “idea-based policy” to “evidence-based policy”. The solutions lie in natural randomized trials and, where the data sets won’t comply, in using the $1 billion-a-year budget of the Center for Medicare and Medicaid Innovation (CMMI) to run mandatory policy RCTs in small groups before broad rollout of policy to the public. This perspective is as admirable as it is short-sighted and devoid of context.

Randomized control trials are difficult to do in this space. But even if RCTs could be done, would it end debate? RCTs may account for covariates but, as discussed, this is just one source of variation when analyzing data. Last I checked, cardiologists with the benefit of thousands of patients worth of RCTs continue to argue about statins, fish oil, and coronary stents, and these areas are completely devoid of political considerations.

The Oregon experiment, one of the largest, most rigorous RCTs of Medicaid expansion, hasn’t ended debate between conservatives and liberals on whether the nation should expand health coverage in this fashion. Nor should it. Both sides may want to stop pretending that the evidence will tell us anything definitively. Science can tell us the earth isn’t flat; it won’t tell us whether we should expand Medicaid.

Evidence has its limits. Health care policy research for now remains the playground of motivated researchers who consciously, or unconsciously produce research confirming their biases. Indeed, the mistake that has powered a thousand ProPublica articles on conflict of interest isn’t that financial conflicts aren’t important, it’s that concentrating on only one bias is really dumb.

And Nosek’s team clearly demonstrates that even devoid of bias, a buffet of results is bound to be produced, with something palatable for every ideology. The path forward suggested by some in the methodologist community involves crowd-sourcing all analysis where possible. While palate pleasing, this seems an inefficient, resource-heavy enterprise that still leaves one with an uncertain answer.

I’d settle for less hubris on the part of researchers who seem to think an answer lives in every data set. Of the 2,053 total players in Nosek’s football study, photographs were available for only 1,500. No information was available on referee skin tone – a seemingly relevant piece of data when trying to assess racial bias.

Perhaps the best approach to certain research questions is to not try to answer them. There is no way to parse mortality in US hospitals on the basis of physician gender, but someone will surely try and, remarkably, feel confident enough to attach a number to the thousands of lives saved if there were no male physicians.

If the point of applying empiricism to the social sciences was to defeat ideology with a statistically powered truth machine, empiricism has fallen well short. Paradoxically, salvation of the research enterprise may lie in doing less research and in imbuing much of what’s published with the uncertainty it well deserves.

Anish Koka is a cardiologist in private practice in Philadelphia.  He can be followed on Twitter @anish_koka. This post originally appeared here on The Accad & Koka Report. 

from THCB http://bit.ly/2V7i6lD

Are Bipartisan Agreements on Health Care Possible?

By KEN TERRY

Republicans and Democrats are seen as poles apart on health policy, and the recent election campaign magnified those differences. But in one area—private-sector competition among healthcare providers—there seems to be a fair amount of overlap. This is evident from a close reading of recent remarks by Health and Human Services Secretary Alex Azar and a 2017 paper from the Brookings Institution.

Azar spoke on December 3 at the American Enterprise Institute (AEI), the conservative counterpart to the liberal-leaning Brookings think tank. Referring to a new Trump Administration report on how to reduce healthcare spending through “choice and competition,” Azar said that the government can’t just try to make insurance more affordable while neglecting the underlying costs of care. “Healthcare reform should rely, to the extent possible, on competition within the private sector,” he said.

This is pretty close to the view expressed in the Brookings paper, written by Martin Gaynor, Farzad Mostashari, and Paul B. Ginsburg. “Ensuring that markets function efficiently is central to an effective health system that provides high quality, accessible, and affordable care,” the authors stated. They then proposed a “competition policy” that would require a wide range of actions by the federal and state governments.

The major difference between Azar and the Brookings experts is what they blame most for the relative lack of competition in healthcare markets: Azar blames government regulation, while they blame the consolidation of healthcare systems. The parties agree, however, that healthcare prices are primarily responsible for driving up costs. U.S. healthcare spending rose from 2012 to 2016, Azar said, not because of higher utilization, but because of higher prices.

Azar favors consumer-driven plans coupled with health savings accounts to give patients an incentive to seek value and thereby reduce costs. But while these plans have reduced unnecessary spending for the private-sector employers that have used them, he said, they haven’t been a silver bullet: financial incentives for patients alone won’t build a truly competitive market.

What’s needed, in his view, is less government regulation that drives provider prices up. For example, he said, CMS’s hospital outpatient payment system had impeded competition by paying hospital-owned practices more than private practices for the same services. He praised CMS for switching to site-neutral payments for all services offered by both hospital-employed and private practice physicians.

The Brookings experts supported the same regulatory change before CMS made it. In their report, they also said that states should repeal certificate-of-need laws, which inhibit new market entrants. And they favored reforming the 340b drug discount program for hospitals, which they said encouraged the facilities to employ physicians, especially oncologists who administer very expensive drugs. (CMS recently lowered those discounts by nearly 30%.)

Consolidation most to blame

However, the Brookings experts emphasized the role of industry consolidation in the steep rise of healthcare prices. Noting that these prices vary tremendously across the country, the report authors said that market power drives much of this variation. “Hospitals with little effective competition can extract higher prices in their negotiations with insurers, and they do,” they noted. “Hospitals without local competitors are estimated to have prices nearly 16 percent higher on average than hospitals with four or more competitors, a difference of nearly $2,000 per admission.”

Mergers of insurance companies have also driven up health costs, the paper said. The market share of the top four national insurers grew from 74% in 2006 to 83% in 2014. In the median state, the two largest carriers now control two-thirds of the market.

But that horse is out of the barn—and in any case, insurers don’t have much choice but to accede to the dominant healthcare providers in some markets. So the Brookings competition proposal focuses mainly on what can be done to increase competition among providers.

Among their more significant proposals, the Brookings experts would:

  • Have federal and state agencies increase antitrust scrutiny of horizontal and vertical healthcare mergers
  • Ask Congress to pass legislation allowing the Federal Trade Commission (FTC) to enforce antitrust laws on healthcare providers and insurers
  • Revise the Medicare Shared Savings Program (MSSP) regulations to limit the influence of large hospitals and health systems over accountable care organizations (ACOs)

Neither the Brookings paper nor Azar talked about taking two other steps that could promote healthcare competition: 1) liberate doctors from the grip of hospitals; and 2) replace competition between insurers with competition between physician-led entities, such as ACOs and primary care-oriented medical groups. But those radical changes would be impossible without universal health coverage, and it’s unlikely that Congress will vote for that unless Democrats win back the Senate and the White House in 2020.

In a recent Health Affairs forum on health policy ideas for 2020 Presidential candidates, the liberal and conservative panelists agreed that Congress is not likely to do much on healthcare in the next two years. But in the area of competition, perhaps the left and the right can work together on some reforms if they don’t get into a tussle over Medicare for All. Some kind of legislation to lower drug prices is already in the air, and it looks like the two parties could also have a meeting of the minds on promoting competition in healthcare.

The Democrats aren’t going to suddenly support consumer-driven plans, which they view as discriminating against poor and sick people. But perhaps they might be open to changes in the Stark and anti-kickback regulations to allow market-based innovations, and they might be willing to encourage states to throw out certificate-of-need laws. Similarly, the Republicans won’t be thrilled about the idea of a massive antitrust crackdown on healthcare mergers. But they might let CMS release practice-level cost and quality data that can help ACOs form networks and help consumers choose providers.

At least the two sides seem to agree on one thing: Only real competition among providers can constrain healthcare spending.

Ken Terry is a former senior editor of Medical Economics and is author of Rx For Healthcare Reform (Vanderbilt University Press, 2007).

from THCB http://bit.ly/2SklbNg

Evidence-Based Satire

By SAURABH JHA

Sequels generally disappoint. Jason couldn’t match the fear he generated in the original Friday the 13th. The sequel to the Parachute, a satirical piece canvassing PubMed for randomized controlled trials (RCTs) comparing parachutes to placebo, matched its brilliance, and even exceeded it, though the margin can’t be confirmed with statistical significance. The Parachute, published in BMJ’s Christmas edition, will go down in history with Jonathan Swift’s Modest Proposal and Frederic Bastiat’s Candlemakers’ Petition as timeless satire in which the pedagogy punched above its weight because of, indeed depended on, its absurdity.

In the Parachute, researchers concluded, deadpan, that since no RCT has tested the efficacy of parachutes when jumping off a plane, there is insufficient evidence to recommend them. At first glance, the joke was on RCTs and those who have an unmoored zeal for them. But that’d be a satirical conclusion. Sure, some want RCTs for everything, for whom absence of evidence means no evidence. But that’s because of a bigger problem: we refuse to acknowledge that causality has degrees, shades of gray, and yet can sometimes be black and white. Some things are self-evident.

In medicine, causation, even when it’s not mere correlation, is often probabilistic. Even the dreaded cerebral malaria doesn’t kill everyone. If you jump from a plane at 10,000 feet without a parachute, death isn’t probabilistic, it is certain. And we know this despite the absence of rigorous empiricism. It’s common sense. We need sound science to tease apart probabilities, and the grayer the causality, the sounder the empiricism must be to accord the treatment its correct quantitative benefit, the apotheosis of this sound science being an RCT. When empiricism ventures into certainties, it’s no longer sound science. It is parody.

If the femoral artery is nicked and blood spurts to the ceiling more forcefully than Bellagio’s fountains you don’t need an RCT to make a case for stopping the bleeding, even though all bleeding stops, eventually. But you do need an RCT if you’re testing which of the fine sutures at your disposal is better at sewing the femoral artery. The key point is the treatment effect – the mere act of stopping the bleed is a parachute, a huge treatment effect, which’d be idiotic to test in an RCT. Improving on the high treatment effect, even or particularly modestly, needs an RCT. The history of medicine is the history of parachutes and finer parachutes. RCTs became important when newer parachutes allegedly became better than their predecessors.

The point of the parachute satire is that the obvious doesn’t need empirical evidence.  It is a joke on non-judgmentalism, or egalitarianism of judgment, on the objectively sincere but willfully naïve null hypothesis where all things remain equally possible until we have data.

There has been no RCT showing that cleaning one’s posterior after expulsion of detritus improves outcomes over placebo. This is our daily parachute. Yet some in the east may justifiably protest the superiority of the Occidental method of cleaning over their method of using hand and water without a well-designed RCT. Okay, that’s too much information. Plus, I’m unsure such an RCT would even be feasible, as the crossover rate would be so high that no propensity matching could adjust for the intention to wipe, but you get my drift.

The original Parachute satire is now folklore with an impressive H-index to boot. That it has been cited over a thousand times is also satirical – the joke is on the H-index, a seriously flawed metric which is taken very seriously by serious academics. But it also means that to get a joke into a peer-reviewed publication you need to have a citation for your joke! The joke is also on the criminally unfunny Reviewer 2.

The problem with the parachute metaphor is that many physicians want their pet treatment, believing it to be a parachute, to be exempt from an RCT. This, too, is a consequence of non-judgmentalism, a scientific relativism where every shade of gray thinks it is black and white. One physician’s parachute is another physician’s umbrella. This is partly a result of the problem RCTs are trying to solve – treatment effects are probabilistic and when the added margins are so small, parachutes become difficult to disprove with certainty. You can’t rule out a parachute.

Patient: Was it God who I should thank for saving me from cardiogenic shock?

Cardiologist: In hindsight, I think it was a parachute.

Patient: Does this parachute have a name?

Cardiologist: We call it Impella.

Patient: Praise be to the Impella.

Cardiologist: Wait, it may have been the Swan Ganz catheter. Perhaps two parachutes saved you. Or maybe three, if we include Crestor.

The problem with RCTs is agreeing on equipoise – a state of genuine uncertainty that an intervention has net benefits. Equipoise is a tricky beast which exposes the parachute problem. If two dogmatic cardiac imagers are both certain that cardiac CT and SPECT, respectively, are the best first line test for suspected ischemia, then there’s equipoise. That they’re both certain about their respective modality doesn’t lessen the equipoise. That they disagree so vehemently with each other merely confirms equipoise. The key point is that when one physician thinks an intervention is a parachute and the other believes it’s an umbrella, there’s equipoise.

Equipoise, a zone of maximum uncertainty, is a war zone. We disagree most passionately about smallest effect sizes. No one argues about the efficacy of parachutes. To do an RCT you need consensus that there is equipoise. But the first rule of equipoise is that some believe there’s no equipoise – this is the crux of the tension. You can’t recruit cardiac imagers to a multi-center RCT comparing cardiac CT to SPECT if they believe SPECT is a parachute.

Consensus inevitably drifts to the lowest common denominator. As an example, when my family plans to eat out there’s fierce disagreement between my wife – who likes the finer taste of French cuisine, my kids – whose Americanized palate favors pizza, and me – my Neanderthalic palate craves goat curry. We argue and then we end up eating rice and lentils at home. Consensus is an equal opportunity spoil sport.

Equipoise has become bland and RCTs, instead of being daring, often recruit the lowest-risk patients for an intervention. RCTs have become contrived show rooms with the generalizability of Potemkin villages. Parachute’s sequel was a multi-center RCT in which people jumping from an aircraft were randomized to parachutes or empty backpacks. There was no crossover. Protocol violation was nil but there was a cheeky catch. The aircraft was on the ground. Thus, the first RCT of parachutes, powered to make us laugh, was a null trial.

Point taken. But what was their point? Simply put, parachutes are useless if not needed. The pedagogy delivered was resounding precisely because of the absurdity of the trial. If you want to generalize an RCT you must choose the right patients, sick patients, patients on whom you’d actually use the treatment you’re testing. You must get your equipoise right. That was their point, made brilliantly. The joke wasn’t on RCTs; the joke was on equipoise. Equipoise is now the safest of safe spaces; joke-phobic college millennials would be envious. Equipoise is bollocks.

The “Parachute Returns” satire had a mixed reception with audible consternation in some quarters. Though it may just be me and, admittedly, I find making Germans laugh easier than Americans, I was surprised by the provenance of the researchers, who hailed from Boston, better known for serious quantitative social engineers than stand-up quantitative comedians. Satire is best when it mocks your biases.

The quantitative sciences have become parody even, or particularly, when they don’t intend satire. An endlessly cited study concluded that medical errors are the third leading cause of death. The researchers estimated the national burden of medical errors from a mere thirty-five patients; it was the empirical version of feeding the multitude – the New Testament story of feeding the five thousand with five loaves and two fish. How can one take such researchers seriously? I couldn’t. I had no rebuttal except satire.

In the age of unprecedented data-driven rationalism, satire keeps judgment alive. To be fair, the statisticians, the gatekeepers of the quantitative sciences, have a stronger handle on satire than doctors. The Gaussian distribution has in-built absurdity. For example, because height follows a normal distribution, and the tails of the bell-shaped curve go on and on, a quantitative purist may conclude there’s a non-zero chance that an adult can be taller than a street light; it’s our judgment which says that this isn’t just improbable but impossible. Gauss might have pleaded – don’t take me literally, I mean statistically, I’m only an approximation.
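
Here is a minimal sketch of that absurdity, with illustrative numbers for adult height (mean 170 cm, SD 10 cm) and a six-metre street light; the figures are assumptions chosen only to make the point.

```python
# Ask an exactly Gaussian model of height for the chance of an adult taller than a street light.
import numpy as np
from scipy import stats

mean_cm, sd_cm = 170.0, 10.0          # illustrative adult height distribution
street_light_cm = 600.0               # a six-metre street light

z = (street_light_cm - mean_cm) / sd_cm        # 43 standard deviations above the mean
log10_p = stats.norm.logsf(z) / np.log(10)     # log of the upper-tail probability avoids underflow
print(f"z = {z:.0f}, P(height > 6 m) ~ 10^{log10_p:.0f}")

# The model dutifully returns a probability on the order of 10^-400: absurdly small, yet non-zero.
# It takes judgment, not the distribution, to round that down to impossible.
```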

A statistician once showed that the myth of storks delivering babies can’t empirically be falsified. There is, indeed, a correlation in Europe between live births and storks. The correlation coefficient was 0.62 with a p-value of 0.008. Radiologists would love to have that degree of correlation with each other when reading chest radiographs. The joke wasn’t on storks but on simple linear regression, and for all the “correlation isn’t causation” wisdom, the pedagogic value of “storks deliver babies” is priceless.

If faith begins where our scientific understanding ends, satire marks the boundaries of statistical certainty. Satire marks no-go areas where judgment still reigns supreme; a territory larger than many believe. The irony of uncertainty is that we’re most uncertain of the true nature of treatment differences when the differences are the smallest. It’s easy to see that Everest is taller than the Matterhorn. But it takes more sophisticated measurement to confirm that Lhotse is taller than Makalu. The sophistication required of the quantitative sciences is inversely proportional to the effect size they seek to prove. It’s as if mathematics is asking us to take a chill pill.

The penumbra of uncertainty is an eternal flame. Though the conventional wisdom is that a large enough sample size can douse uncertainty, even large n’s create problems. The renowned psychologist and uber-researcher Paul Meehl conjectured that as the sample size approaches infinity there’s a 50% chance that we’ll reject the null hypothesis when we shouldn’t. With large sample sizes everything becomes statistically significant. Small n increases uncertainty and large n increases irrelevance. What a poetic trade-off! If psychology research has reproducibility problems, epidemiology is one giant shruggie.
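
A minimal sketch with invented numbers shows the trade-off: hold a clinically meaningless difference of one-thousandth of a standard deviation fixed and simply let the sample grow.

```python
# Illustrative numbers only: a trivially small true difference, tested at three sample sizes.
import math
from scipy import stats

sd = 1.0
true_diff = 0.001 * sd                      # one-thousandth of a standard deviation

for n_per_arm in (1_000, 100_000, 10_000_000):
    se = sd * math.sqrt(2 / n_per_arm)      # standard error of the difference in means
    z = true_diff / se
    p = 2 * stats.norm.sf(z)                # two-sided p value
    print(f"n per arm = {n_per_arm:>10,}: p = {p:.3f}")

# At n = 1,000 per arm the difference is invisible; at n = 10,000,000 it is "statistically
# significant" (p ~ 0.025). The effect did not become more real; the n just became bigger.
```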

When our endeavors become too big for their boots, satire rears its absurd head. Satire is our check and balance. We’re trying to get too much out of the quantitative sciences. Satire marks the territory empiricism should stay clear of. If empiricism befriended satire it could be even greater, because satire keeps us humble.

The absurd coexists with the serious and, like the pigs and the farmers who come to resemble each other in the closing scene of Animal Farm, it’s no longer possible to tell apart the deservedly serious from the blithering nonsense. And that’s why we need satire more than ever.

Congratulations to the BMJ for keeping satire alive.

Merry Christmas.

About the Author

Saurabh Jha is a frequent author of satire, and sometimes its subject. He can be reached on Twitter @RogueRad

from THCB http://bit.ly/2AboejM

Health in 2 Point 00, Episode 63

Today on Health in 2 Point 00, Jess and I get festive for the holidays. In this episode, Jess asks me about Walgreens and its new partnership with FedEx for next day prescription delivery and with Verily to help patients with prescription adherence. She also asks me about blockchain startup PokitDok getting its assets acquired by Change Healthcare. Lots of job changes are happening as well. Amy Abernethy, the chief medical officer at Flatiron Health, was named Deputy Commissioner of the FDA. Rasu Shrestha, who was previously at the University of Pittsburgh Medical Center, is the new chief strategy officer of Atrium Health. Finally, Zane Burke, who recently stepped down as president of Cerner, was just hired as Livongo’s new CEO, while Glen Tullman remains executive chairman of the company. Dr. Jennifer Schneider was also promoted from the company’s chief medical officer to president. We have one more episode of Health in 2 Point 00 for 2018, so be on the lookout for our year-end wrap-up. —Matthew Holt

from THCB http://bit.ly/2R8YMFs

CMS Should Boost the Signal on Social Determinants of Health

By HERB KUHN

Historically, the Centers for Medicare & Medicaid Services’ (CMS) stance on the influence that social determinants of health (SDOH) have on health outcomes has been equal parts signal and noise. In April 2016, the agency announced it would begin adjusting the Medicare Advantage star ratings for dual-eligibility and other social factors. This came amid calls from the managed care industry for increased equity in the performance determinations. At the same time, CMS continued to refuse risk-adjustment for SDOH in the Hospital Readmissions Reduction Program (HRRP), despite research supporting the influence of these factors on readmission rates.

It wasn’t until Congress interceded with the 21st Century Cures Act that CMS conceded to adjusting for dual-eligibility under the new stratified approach to determining HRRP penalties beginning in fiscal year 2019. The new methodology compares hospital readmission performance to peers within the same quintile of dual-eligible payer mix. The debate surrounding the adjustment of incentive-based performance metrics for SDOH is likely to continue, as many feel stratification is a step in the right direction, albeit a small one. And importantly, the Cures Act includes the option of direct risk-adjustment for SDOH, as deemed necessary by the Secretary of Health and Human Services.

SDOH are defined as “the conditions in which people are born, grow, live, work and age.” The multidimensional nature of SDOH reaches far beyond poverty, requiring a systemic approach to effectively moderate their effects on health outcomes. The criteria used to identify SDOH include factors that have a defined association with health, exist before the delivery of care, are not determined by the quality of care received and are not readily modifiable by health care providers.

The question of modifiability is central to the debate. In the absence of reimbursement for treating SDOH, providers lack the resources to modify health outcomes attributable to social complexities. Therefore, statistical adjustments are needed to account for differences in these complexities to ensure risk-adjusted performance comparisons of hospitals are accurate.

The hospital community is deeply encouraged by the noise reduction that CMS has recently provided by signaling steps toward direct reimbursement for the treatment of SDOH. In February 2018, the agency announced a major policy shift, enabling added flexibility for MA plans through supplemental benefits that allow reimbursement for nontraditional goods and services, such as transportation, groceries and air conditioning. And, while early evidence shows that the uptake of the supplemental benefits by MA plans has been limited, an expanded “whole person” model is being developed through the Center for Medicare & Medicaid Innovation. The model would allow for housing, utilities and nutrition assistance, among other supports, and eventually could be scaled to cover socially complex fee-for-service beneficiaries. As HHS Secretary Alex Azar stated in a recent speech, “What if we provided solutions for the whole person, including addressing housing, nutrition and other social needs?”

Expanded services for patients with social complexity will require more nuanced data than are currently available. Standardized administrative data sources typically are limited to information on race, ethnicity, disability and dual-eligibility. However, the actual dimensions of social complexity — and their known association with health outcomes — are far more expansive.

There is good news on the data side of the equation. The conversion to the 10th revision of the International Classification of Diseases (ICD-10) in October 2015 created an opportunity for physician and non-physician providers to identify, diagnose and document patients with social complexity in a uniform diagnostic and billing data system.

A recent analysis of Missouri hospitals’ coding data by the Missouri Hospital Association found that the distribution and predictive characteristics of the 87 ICD-10 SDOH codes suggest the potential for a large advancement in the identification and documentation of social complexity for clinical applications. This includes filling informational gaps at the patient level for risk adjustment, clinical support and population health. However, significant work will be required to expand awareness and uniform application of the codes.
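
To illustrate the kind of patient-level flagging this documentation makes possible, here is a minimal sketch built on the ICD-10-CM Z55–Z65 block commonly cited for social determinants; the encounter records and field names are invented for illustration and are not drawn from Missouri data.

```python
# A sketch of flagging socially complex encounters from diagnosis codes (Z55-Z65 block assumed).
from typing import Iterable

def has_sdoh_code(diagnosis_codes: Iterable[str]) -> bool:
    """Return True if any diagnosis code falls in the ICD-10-CM Z55-Z65 SDOH range."""
    for code in diagnosis_codes:
        code = code.strip().upper()
        if code.startswith("Z") and code[1:3].isdigit() and 55 <= int(code[1:3]) <= 65:
            return True
    return False

# Hypothetical encounters: a list of diagnosis codes per visit.
encounters = [
    {"patient_id": "A001", "codes": ["I50.22", "E11.9"]},   # heart failure, diabetes
    {"patient_id": "A002", "codes": ["I50.22", "Z59.0"]},   # same clinical picture plus a housing-related Z code
]

for enc in encounters:
    flag = "socially complex" if has_sdoh_code(enc["codes"]) else "no SDOH code documented"
    print(enc["patient_id"], "->", flag)
```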

The signals are becoming clearer. With the full-throated engagement of the HHS Secretary, work by several HHS agencies and the predictive properties that ICD-10 provides, new opportunities exist for CMS to further refine its signal. This will allow more reflective programs and payment systems in support of vulnerable patients — the patients hospitals see every day.

Herb Kuhn is the president and CEO of the Missouri Hospital Association.

from THCB http://bit.ly/2RbinEO

Increased Payer and Provider Support May Drive Billions of Dollars in Savings from Biosimilars

By SHEILA FRAME

FDA Commissioner Scott Gottlieb has said biosimilars are “key to promoting access and reducing health care costs. And it’s a key to advancing public health.” While the Administration works to reduce barriers to bringing biosimilars to market, payers and providers can help increase adoption of biosimilars in clinical practice and ensure cost savings.

Organizations such as the American College of Rheumatology and the American Society of Clinical Oncology have issued educational documents to help guide providers in incorporating biosimilars into treatment plans, where appropriate. Yet, many doctors remain hesitant to prescribe them due to concerns about safety, efficacy, immunogenicity, effects of switching to a new biosimilar and the economic value to patients.

Biosimilars are developed in a similar way to existing biologics and have the same safety, efficacy, and quality profiles, but are more competitively priced to ensure more patients have access to these important medicines and that the system can afford them. A growing, ten-year body of real-world use in the EU shows biosimilar medicines increase usage of biologic medicines while matching their reference biologics in terms of safety, efficacy and quality.

Switching between a biologic and a biosimilar has also proven to be safe in several studies. New data recently presented at the American College of Rheumatology Annual Meeting further reinforce the safety profile of biosimilars. In addition, a review of 90 studies that enrolled 14,225 unique individuals found that in the great majority of studies there was no increased risk of immunogenicity-related safety concerns and no diminished efficacy after switching from a reference biologic to a biosimilar medicine. No new safety or efficacy concerns have been detected in more than 10 years and 700 million patient-days of experience with biosimilar medicines.

Contracting arrangements with reference-product manufacturers that limit competition, and step-edit policies that make no sense for two products with the same safety, efficacy and quality profiles, are making it difficult for patients to access these important medicines. By adopting utilization management controls and benefit design changes that favor biosimilars, payers can play an important role in increasing adoption of biosimilars, increasing patient access and lowering costs for the overburdened U.S. health system.

A 5-year budget impact model looking at a hypothetical health plan with 1 million members, presented at the Academy of Managed Care Pharmacy Annual Meeting this week, shows that $6 million could be saved by one health plan alone if an etanercept biosimilar were able to enter the marketplace. An early adopter of biosimilars, Yale New Haven Health System reports that its switch to Zarxio® will lead to 20 percent savings, which could equate to $400,000 per year – real-world proof that biosimilars can make a big difference. According to a RAND Corporation report released last year, biosimilars could save the U.S. health system $54 billion over the next decade.

There is clear evidence to support biosimilars and a competitive biologics marketplace. However, biosimilars must be covered, prescribed and used if patients and healthcare systems are to realize their full promise. We believe we have a social responsibility to ensure biosimilar success.

Sheila Frame, Vice President and Head Sandoz Biopharmaceuticals, North America, is responsible for the development and commercialization of the biosimilar medicines portfolio in the US and Canada.

from THCB https://ift.tt/2EDRUtL

The Importance of Patient Engagement in Post-Acute Care

By BRIAN HOLZER, MD, MBA

Leaders in hospitals and health systems as well as post-acute care providers such as skilled nursing facilities (SNFs) and Home Health Care (HHC) agencies operate in a complex environment. Currently, the health care reimbursement environment is largely dominated by fee-for-service models. However, acute and post-acute leaders must increasingly position their organizations to prepare for, and participate in, evolving value-based care programs—without losing sight of the current fee-for-service reimbursement structure.

With that said, the call to action for acute and post-acute providers working at both ends of the reimbursement spectrum is real. The time is now to innovate, test and adopt new post-acute care models to support each patient’s transition from hospital to post-acute settings, and eventually home to enable a better care experience for patients and their care teams.

This is especially relevant for SNFs and SNF chains that meet the current Medicare requirements for Part A coverage. Increasingly, the SNF industry is under pressure from the Medicare program to improve coordination and outcomes. Medicare’s hospital readmission policy and value-based purchasing (VBP) program, bundled payments, and ACOs encourage SNFs, and other post-acute settings, to avoid readmissions. In addition, earlier this year, the Centers for Medicare and Medicaid Services (CMS) finalized a new patient-driven payment model (PDPM) for SNFs, which will go into effect on October 1, 2019. The overhaul of the entire system will require significant staff focus and operational changes.

Care management solutions will be particularly helpful as SNFs operate under the recently imposed SNF VBP. Specifically, models that assist SNFs in reducing their 30-day hospital readmission measure and in better managing performance scores may increase the facility’s Medicare incentive payments. This will undoubtedly require investments in internally developed and/or outsourced solutions that engage patients after discharge from the facility for a period of at least 30 days, whether or not the patient is under the care of a home health company. Telephonic patient engagement with clinically trained resources during this 30-day period, in particular, can serve to efficiently:

  • Screen for fall risk and depression, and identify common gaps including concerns related to medications, home medical equipment, physician appointments, and necessary medical services such as home health
  • Assess for site of care needs with coordination for patients who may be better served back in a SNF for a shorter therapy stay, which does not require another three-day hospital qualifying stay
  • Assess for site of care needs with coordination for patients who may be better served in Home Health Care or assisted living or independent living residences, which can serve as appropriate alternatives to unnecessary emergency room visits or hospital readmissions

There’s an urgent need for additional post-acute care management services supporting holistic patient transitions and operations, and providers need to take actions now or run the risk that the decisions will be made for them. The time is now for acute and post-acute care providers to consider solutions that help deliver effective care management and improve patient engagement across the post-acute care continuum.

Brian Holzer is a senior physician executive with diverse experiences including strategy, operations, marketing and sales in large and small public and private healthcare companies.

from THCB https://ift.tt/2LoWmNr