YTH Live 2020

By ERIN MCKELLE

There are many public health conferences that focus on young people, or that center on youth issues, but very few that actually include the voices of the young people we claim to uplift as public health professionals.

There are also very few conferences that emphasize innovation in healthcare and point toward solutions, rather than discussing problems at length without clear ways of solving them.

These core issues are at the heart of the annual YTH Live conference. Each year (we’re on our twelfth!), we showcase the boldest technologies in health and cutting-edge research in all facets of youth health and wellness. We also have attendees that range from IT professionals to high school students, with over 25% of last year’s attendees and speakers being young people themselves.

YTH’s Communications Coordinator Erin McKelle has first-hand experience of this. “I first attended YTH Live when I was a senior in high school. It was the first conference I ever spoke at, and all of my fears about being the only young person in the room were quickly put to rest once I saw that YTH plans a youth conference that actually centers around youth voices,” she says. “I’m proud to now be working for the organization years later, after serving on the Youth Advisory Board, paying the mission of youth empowerment forward to the next generation of youth leaders.”

YTH Live 2020 will focus on the overall health and wellness of youth in the US and around the world, seeking to understand how innovative technology can be leveraged to improve health outcomes for all and promote health equity. Whether you are an Executive Director, developer, or youth advocate, YTH Live can help you learn about the latest trends in health, innovation, and technology. It also facilitates connecting: we host a networking event each year for our 450+ attendees to find new partnerships, make new contacts, and share their work with like-minded professionals in a more focused setting.

If you have an innovative health technology you’d like to share, interesting research, or a project you’d like to signal-boost, we invite you to submit a proposal in our open call for abstracts for next year’s conference. As for what to submit, McKelle offers some words of wisdom. “We really look for ideas that are centered in innovation, pushing the envelope on what is normally done in health and wellness,” she advises. “Topically, the Program Committee will be evaluating proposals that hit all three areas of youth-centered design or impact, health, and technology,” she explains.

You have until Wednesday, November 6 at 11:59pm PST to submit your proposal, which will then be reviewed by our Program Committee. Learn more about YTH Live and submit your abstract by the 6th! We look forward to seeing your ideas at YTH Live 2020.

Erin McKelle serves as Communications Coordinator at ETR for the YTH initiative.

from The Health Care Blog https://ift.tt/2pq5f2L

Health in 2 Point 00, Episode 99 | (Reverse) Takeover Edition with Bayer G4A

Today on Health in 2 Point 00… hold on, where’s Jess? On Episode 99, I do a reverse takeover with Priyanka Kashyap and Sophie Park at Bayer’s office in Berlin. Priyanka tells us about what Bayer G4A is doing these days with the 5 startups in their Advance Track: Blackford Analysis in radiology; Carepay and RelianceHMO improving affordability and access for patients in Africa; NeuroTracker, which is in the neuro space but is working with the oncology team at Bayer; and Prevencio, a diagnostic solution in the cardiovascular space. Sophie also gives us a rundown of the 6 startups in the Growth Track at G4A: Wellthy, a digital therapeutics company out of India; Litesprite, for mental health; BioLum, a pulmonology startup working on detecting nitric oxide levels in the blood; Upside Health with its chronic pain management software; and finally Visotec and Oxxo Health in ophthalmology. —Matthew Holt

from The Health Care Blog https://ift.tt/36fGy9Q

ACCESS Act Points the Way to a Post-HIPAA World

By ADRIAN GROPPER, MD

The Oct. 22 announcement starts with: “U.S. Sens. Mark R. Warner (D-VA), Josh Hawley (R-MO) and Richard Blumenthal (D-CT) will introduce the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act, bipartisan legislation that will encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose.”

Although the scope of this bill is limited to the largest of the data brokers (messaging, multimedia sharing, and social networking) that currently mediate between us as individuals, it contains groundbreaking provisions for delegation by users that are a road map to privacy regulations in general for the 21st Century.

The bill’s Section 5: Delegation describes a new right for us as data subjects at the mercy of the institutions we are effectively forced to use. This is the right to choose and delegate authority to a third-party agent that can manage interactions with the institutions on our behalf. The third-party agent can be anyone we choose subject to their registration with the Federal Trade Commission. This right to digital representation by an entity of our choice with access to the full range of our direct control capabilities is unprecedented, as far as I know.

The problem with HIPAA, and with Europe’s General Data Protection Regulation (GDPR), is a lack of agency for the individual data subject. These regulatory approaches presume that all of the technology is controlled by our service providers and none of the technology is controlled by us as data subjects. There are major limitations to this approach.

First, it depends on regulation and bureaucracy around data uses (“notice and consent”) which typically lag the torrid pace of tech and business innovation. The alternative of mandating the technical ability to delegate, per this bill, reduces the scope of necessary regulation while still allowing the service providers to innovate.

Second, a right to delegate control gives the data subject a lot more market power in highly concentrated markets like communications or hospital networks where effective and differentiated competition is scarce. A patient, for example, will have a choice among hundreds of digital representatives even when that patient is in a market served by only one or two hospital networks. These digital representatives will compete on a national scale even as our provider choices are limited by geography or employment.

Third, the advent of patient-controlled technology enabled by mandated delegation means that machine learning, artificial intelligence, and expertise in general can now move closer to the patient. For example, patient groups that share a serious disease can organize as a cooperative to make the best use of their health records and hire expert physicians and engineers to design and operate the delegate.

Fourth, the right to specify a delegate means that, for the first time, our service providers will have to come to us. Under the current practice, patients are forced to navigate different user interfaces, portal designs, privacy statements, and associated dark patterns designed to manipulate us in different ways by each of our service providers. We are forced to figure out the idiosyncrasies of every service provider afresh. A right to delegation means that patients will have a consistent user interface and a more consistent user experience across our service providers even if the delegate is relatively dumb in the expert systems sense. 

Anyone who has sought the services of an attorney or a direct primary care physician understands the value of an expert fiduciary that is more-or-less substitutable if they fail to satisfy. These learned intermediaries are understood as essential when we face asymmetries of power relative to a court or hospital. The ACCESS Bill is a breakthrough because it extends our right to choose a delegate to the digital institutions that are now deeply embedded in our lives.

Adrian Gropper, MD, is the CTO of Patient Privacy Rights, a national organization representing 10.3 million patients and among the foremost open data advocates in the country. This post first appeared in Bill of Health here.

from The Health Care Blog https://ift.tt/2Nt617j

Another MCQ Test on the USMLE

By BRYAN CARMODY, MD

One of the most fun things about the USMLE pass/fail debate is that it’s accessible to everyone. Some controversies in medicine are discussed only by the initiated few – but if we’re talking USMLE, everyone can participate.

Simultaneously, one of the most frustrating things about the USMLE pass/fail debate is that everyone’s an expert. See, everyone in medicine has experience with the exam, and on the basis of that, we all think that we know everything there is to know about it.

Unfortunately, there’s a lot of misinformation out there – especially when we’re talking about Step 1 score interpretation. In fact, some of the loudest voices in this debate are the most likely to repeat misconceptions and outright untruths.

Hey, I’m not pointing fingers. Six months ago, I thought I knew all that I needed to know about the USMLE, too – just because I’d taken the exams in the past.

But I’ve learned a lot about the USMLE since then, and in the interest of helping you interpret Step 1 scores in an evidence-based manner, I’d like to share some of that with you here.

However…

If you think I’m just going to freely give up this information, you’re sorely mistaken. Just as I’ve done in the past, I’m going to make you work for it, one USMLE-style multiple choice question at a time.

Question 1

A 25-year-old medical student takes USMLE Step 1. She scores a 240, and fears that this score will be insufficient to match at her preferred residency program. Because examinees who pass the test are not allowed to retake the examination, she constructs a time machine; travels back in time; and retakes Step 1 without any additional study or preparation.

Which of the following represents the 95% confidence interval for the examinee’s repeat score, assuming the repeat test has different questions but covers similar content?

A) 239-241

B) 237-243

C) 234-246

D) 228-252

_

The correct answer is D, 228-252.

No estimate is perfectly precise. And that’s all the USMLE (or any other test) gives us: a point estimate of the test-taker’s true knowledge.

So how precise is that estimate? That is, if we let an examinee take the test over and over, how closely would the scores cluster?

To answer that question, we need to know the standard error of measurement (SEM) for the test.

The SEM is a function of both the standard deviation and reliability of the test, and represents how much an individual examinee’s observed score might vary if he or she took the test repeatedly using different questions covering similar material.

So what’s the SEM for Step 1? According to the USMLE’s Score Interpretation Guidelines, the SEM for the USMLE is 6 points.

Around 68% of scores will fall within +/- 1 SEM, and around 95% of scores fall within +/- 2 SEM. Thus, if we accept the student’s original Step 1 score as our best estimate of her true knowledge, then we’d expect a repeat score to fall between 234 and 246 around two-thirds of the time. And 95% of the time, her score would fall between 228 and 252.
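For readers who want to poke at the numbers themselves, here’s a minimal sketch of that interval arithmetic, using the SEM of 6 points reported in the Score Interpretation Guidelines:

```python
# Band of +/- n_sem standard errors of measurement around an observed score.
def score_interval(observed, sem=6, n_sem=2):
    """Return (low, high) for an interval of +/- n_sem * sem."""
    return (observed - n_sem * sem, observed + n_sem * sem)

print(score_interval(240, n_sem=1))  # (234, 246): the ~68% band
print(score_interval(240, n_sem=2))  # (228, 252): the ~95% band
```

Plug in any score you like; the width of the band (12 or 24 points) stays the same.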

Think about that range for a moment.

The +/- 1 SEM range is 12 points; the +/- 2 SEM range is 24 points. Even if you believe that Step 1 tests meaningful information that is necessary for successful participation in a selective residency program, how many people are getting screened out of those programs by random chance alone?

(To their credit, the NBME began reporting a confidence interval to examinees with the 2019 update to the USMLE score report.)

Learning Objective: Step 1 scores are not perfectly precise measures of knowledge – and that imprecision should be considered when interpreting their values.

__

Question 2

A 46-year-old program director seeks to recruit only residents of the highest caliber for a selective residency training program. To accomplish this, he reviews the USMLE Step 1 scores of three pairs of applicants, shown below.

  1. 230 vs. 235
  2. 232 vs. 242
  3. 234 vs. 249

For how many of these candidate pairs can the program director conclude that there is a statistical difference in knowledge between the applicants?

A) Pairs 1, 2, and 3

B) Pairs 2 and 3

C) Pair 3 only

D) None of the above

The correct answer is D, none of the above.

As we learned in Question 1, Step 1 scores are not perfectly precise. In a mathematical sense, an individual’s Step 1 score on a given day represents just one sampling from the distribution centered around their true mean score (if the test were taken repeatedly).

So how far apart do two individual samples have to be for us to confidently conclude that they came from distributions with different means? In other words, how far apart do two candidates’ Step 1 scores have to be for us to know that there is really a significant difference between the knowledge of each?

We can answer this by using the standard error of difference (SED). When the two samples are >/= 2 SED apart, then we can be confident that there is a statistical difference between those samples.

So what’s the SED for Step 1? Again, according to the USMLE’s statisticians, it’s 8 points.

That means that, for us to have 95% confidence that two candidates really have a difference in knowledge, their Step 1 scores must be 16 or more points apart.
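Applying that threshold mechanically to the three pairs in the question makes the answer obvious. A short sketch, using the published SED of 8 points:

```python
# A gap must reach 2 * SED before we can call two scores statistically
# different with ~95% confidence.
SED = 8

def statistically_different(score_a, score_b, sed=SED):
    """True if the score gap reaches the 2-SED threshold."""
    return abs(score_a - score_b) >= 2 * sed

for a, b in [(230, 235), (232, 242), (234, 249)]:
    print(a, b, statistically_different(a, b))
# Gaps of 5, 10, and 15 points all fall short of 16, so all three print False
```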

Now, is that how you hear people talking about Step 1 scores in real life? I don’t think so. I frequently hear people discussing how a 5-10 point difference in scores is a major difference that totally determines success or failure within a program or specialty.

And you know what? Mathematics aside, they’re not wrong. Because when programs use rigid cutoffs for screening, only the point estimate matters – not the confidence interval. If your dream program has a cutoff score of 235, and you show up with a 220 or a 225, your score might not be statistically different – but your dream is over.

Learning Objective: To confidently conclude that two students’ Step 1 scores really reflect a difference in knowledge, they must be >/= 16 points apart.

__

Question 3

A physician took USMLE Step 1 in 1994, and passed with a score of 225. Now he serves as program director for a selective residency program, where he routinely screens out applicants with scores lower than 230. When asked about his own Step 1 score, he explains that today’s USMLE are “inflated” from those 25 years ago, and if he took the test today, his score would be much higher.

Assuming that neither the test’s content nor the physician’s knowledge had changed since 1994, which of the following is the most likely score the physician would attain if he took Step 1 in 2019?

A) 205

B) 225

C) 245

D) 265

The correct answer is B, 225.

Sigh.

I hear this kind of claim all the time on Twitter. So once and for all, let’s separate fact from fiction.

FACT: Step 1 scores for U.S. medical students are rising.

See the graphic below.

FICTION: The rise in scores reflects a change in the test or the way it’s scored.

See, the USMLE has never undergone a “recentering” like the old SAT did. Students score higher on Step 1 today than they did 25 years ago because students today answer more questions correctly than those 25 years ago.

Why? Because Step 1 scores matter more now than they used to. Accordingly, students spend more time in dedicated test prep (using more efficient study resources) than they did back in the day. The net result? The bell curve of Step 1 scores shifts a little farther to the right each year.

Just how far the distribution has already shifted is impressive.

When the USMLE began in the early 1990s, a score of 200 was a perfectly respectable score. Matter of fact, it put you exactly at the mean for U.S. medical students.

Know what a score of 200 gets you today?

A score in the 9th percentile, and screened out of almost any residency program that uses cut scores. (And nearly two-thirds of all programs do.)

So the program director in the vignette above did pretty well for himself by scoring a 225 twenty-five years ago. A score that high (1.25 standard deviations above the mean) would have placed him around the 90th percentile for U.S. students. To hit the same percentile today, he’d need to drop a 255.
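As a rough sanity check on those percentile claims, we can model the score distribution as normal. The early-1990s mean of 200 comes from the text; the implied SD of about 20 (since a 225 sat 1.25 SD above that mean) and the present-day mean of roughly 230 are back-of-the-envelope assumptions, not published figures:

```python
from statistics import NormalDist

# Early-1990s distribution: mean 200, SD ~20 (implied by 225 = +1.25 SD)
early_90s = NormalDist(mu=200, sigma=20)
print(round(early_90s.cdf(225) * 100))  # prints 89: roughly the 90th percentile

# Assumed present-day distribution: mean drifted to ~230, same SD.
# The score at the same percentile today:
today = NormalDist(mu=230, sigma=20)
print(round(today.inv_cdf(early_90s.cdf(225))))  # prints 255
```

Under these assumptions, a 200 today sits about 1.5 SD below the mean, consistent with the single-digit percentile quoted above.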

Now, can you make the argument that the type of student who scored in the 90th percentile in the past would score in the 90th percentile today? Sure. He might – but not without devoting a lot more time to test prep.

As I’ve discussed in the past, this is one of my biggest concerns with Step 1 Mania. Students are trapped in an arms race with no logical end, competing to distinguish themselves on the metric we’ve told them matters. They spend more and more time learning basic science that’s less and less clinically relevant, all at the expense (if not outright exclusion) of material that might actually benefit them in their future careers.

(If you’re not concerned about the rising temperature in the Step 1 frog pot, just sit tight for a few years. The mean Step 1 score is rising at around 0.9 points per year. Just come on back in a while once things get hot enough for you.)

Learning Objective: Step 1 scores are rising – not because of a change in test scoring, but because of honest-to-God higher performance.

_

Question 4

Two medical students take USMLE Step 1. One scores a 220 and is screened out of his preferred residency program. The other scores a 250 and is invited for an interview.

Which of the following represents the most likely absolute difference in correctly-answered test items for this pair of examinees?

A) 5

B) 30

C) 60

D) 110

_

The correct answer is B, 30.

How many questions do you have to answer correctly to pass USMLE Step 1? What percentage do you have to get right to score a 250, or a 270? We don’t know.

See, the NBME does not disclose how it arrives at a three digit score. And I don’t have any inside information on this subject. But we can use logic and common sense to shed some light on the general processes and data involved and arrive at a pretty good guess.

First, we need to briefly review how the minimum passing score for the USMLE is set, using a modified Angoff procedure.

The Angoff procedure involves presenting items on the test to subject matter experts (SMEs). The SMEs review each question item and predict what percentage of minimally competent examinees would answer the question correctly.

Here’s an example of what Angoff data look like (the slide is from a recent lecture).

As you can see, Judge A suspected that 59% of minimally competent candidates – the bare minimum we could tolerate being gainfully engaged in the practice of medicine – would answer Item 1 correctly. Judge B thought 52% of the same group would get it right, and so on.

Now, here’s the thing about the version of the Angoff procedure used to set the USMLE’s passing standard. Judges don’t just blurt out a guess off the top of their head and call it a day. They get to review data regarding real-life examinee performance, and are permitted to use that to adjust their initial probabilities.

Here’s an example of the performance data that USMLE subject matter experts receive. This graphic shows that test-takers who were in the bottom 10% of overall USMLE scores answered a particular item correctly 63% of the time.

(As a sidenote, when judges are shown data on actual examinee performance, their predictions shift toward the data they’ve been shown. In theory, that’s a good thing. But multiple studies – including one done by the NBME – show that judges change their original probabilities even when they’re given totally fictitious data on examinee performance.)

For the moment, let’s accept the modified Angoff procedure as being valid. Because if we do, it gives us the number we need to set the minimum passing score. All we have to do is calculate the mean of all the probabilities assigned for that group of items by the subject matter experts.

In the slide above, the mean probability that a minimally competent examinee would correctly answer these 10 items was 0.653 (red box). In other words, if you took this 10 question test, you’d need to score better than 65% (i.e., 7 items correct) to pass.

And if we wanted to assign scores to examinees who performed better than the passing standard, we could. But, we’ll only have 3 questions with which to do it, since we used 7 of the 10 questions to define the minimally competent candidate.
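Here’s a toy version of that calculation. The ten per-item probabilities below are invented for illustration (chosen so they average to the 0.653 in the slide); only the averaging-and-scaling mechanics follow the procedure described:

```python
import math

# Hypothetical mean probability, per item, that a minimally competent
# examinee answers it correctly (averaged across judges).
item_probs = [0.59, 0.52, 0.70, 0.65, 0.68, 0.60, 0.72, 0.66, 0.71, 0.70]

cut_fraction = sum(item_probs) / len(item_probs)           # mean judge estimate
items_to_pass = math.ceil(cut_fraction * len(item_probs))  # scale to item count

print(round(cut_fraction, 3), items_to_pass)  # prints 0.653 7
```

On this 10-item mini-test, the passing standard works out to 7 correct, leaving only 3 items to spread scores among everyone who passes.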

So how many questions do we have to assign scores to examinees who pass USMLE Step 1?

Well, Step 1 includes 7 sections with up to 40 questions in each. So there are a maximum of 280 questions on the exam.

However, around 10% of these are “experimental” items. These questions do not count toward the examinee’s score – they’re on the test to generate performance data (like Figure 1 above) to present in the future to subject matter experts. Once these items have been “Angoffed”, they will become scored items on future Step 1 tests, and a new wave of experimental items will be introduced.

If we take away the 10% of items that are experimental, then we have at most 252 questions to score.

How many of these questions must be answered correctly to pass? Here, we have to use common sense to make a ballpark estimate.

After all, a candidate with no medical knowledge who just guessed answers at random might get 25% of the questions correct. Intuitively, it seems like the lower bound of knowledge to be licensed as a physician has to be north of 50% of items, right?

At the same time, we know that the USMLE doesn’t include very many creampuff questions that everyone gets right. Those questions provide no discriminatory value. Actually, I’d wager that most Step 1 questions have performance data that looks very similar to Figure 1 above (which was taken from an NBME paper).

A question like the one shown – which 82% of examinees answered correctly – has a nice spread of performance across the deciles of exam performance, ranging from 63% among low performers to 95% of high performers. That’s a question with useful discrimination for an exam like the USMLE.

Still, anyone who’s taken Step 1 knows that some questions will be much harder, and that fewer than 82% of examinees will answer correctly. If we conservatively assume that there are only a few of these “hard questions” on the exam, then we might estimate that the average Step 1 taker is probably getting around ~75% of questions right. (It’s hard to make a convincing argument that the average examinee could possibly be scoring much higher. And in fact, one of the few studies that mentions this issue actually reports that the mean item difficulty was 76%.)

The minimum passing standard has to be lower than the average performance – so let’s ballpark that to be around 65%. (Bear in mind, this is just an estimate – and I think, a reasonably conservative one. But you can run the calculations with lower or higher percentages if you want. The final numbers I show below won’t be that much different than yours unless you use numbers that are implausible.)

Everyone still with me? Great.

Now, if a minimally competent examinee has to answer 65% of questions right to pass, then we have only 35% of the ~252 scorable questions available to assign scores among all of the examinees with more than minimal competence.

In other words, we’re left with somewhere around 85 questions to help us assign scores in the passing range.

The current minimum passing score for Step 1 is 194. And while the maximum score is 300 in theory, the real world distribution goes up to around 275.

Think about that. We have ~85 questions to determine scores over around an 81 point range. That’s approximately one point per question.
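The whole estimate chains together in a few lines. Bear in mind that the 10% experimental share, the 65% passing fraction, and the ~275 practical ceiling are the text’s ballparks, not published figures; the sketch lands in the same neighborhood as the ~85 questions and roughly one point per question quoted above:

```python
# Back-of-the-envelope version of the scorable-question estimate.
total_items = 7 * 40                     # 7 sections, up to 40 questions each
scorable = round(total_items * 0.90)     # drop ~10% unscored experimental items
needed_to_pass = round(scorable * 0.65)  # ballpark minimum passing performance
spare_items = scorable - needed_to_pass  # items left to spread scores across
score_range = 275 - 194                  # realistic ceiling minus passing score

print(scorable, spare_items, round(score_range / spare_items, 2))
# prints 252 88 0.92 -- i.e., roughly one score point per question
```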

Folks, this is what drives #Step1Mania.

Note, however, that the majority of Step 1 scores for U.S./Canadian students fall across a thirty point range from 220 to 250.

That means that, despite the power we give to USMLE Step 1 in residency selection, the absolute performance for most applicants is similar. In terms of raw number of questions answered, most U.S. medical students differ by fewer than 30 correctly-answered multiple choice questions. That’s around 10% of a seven hour, 280 question test administered on a single day.

And what important topics might those 30 questions test? Well, I’ve discussed that in the past.

Learning Objective: In terms of raw performance, most U.S. medical students likely differ by 30 or fewer correctly-answered questions on USMLE Step 1 (~10% of a 280 question test).

__

Question 5

A U.S. medical student takes USMLE Step 1. Her score is 191. Because the passing score is 194, she cannot seek licensure.

Which of the following reflects the probability that this examinee will pass the test if she takes it again?

A) 0%

B) 32%

C) 64%

D) 96%

The correct answer is C, 64%.

In 2016, 96% of first-time test takers from U.S. allopathic medical schools passed Step 1. For those who repeated the test, the pass rate was 64%. What that means is that >98% of U.S. allopathic medical students ultimately pass the exam.

I bring this up to highlight again how the Step 1 score is an estimate of knowledge at a specific point in time. And yet, we often treat Step 1 scores as if they are an immutable personality characteristic – a medical IQ, stamped on our foreheads for posterity.

But medical knowledge changes over time. I took Step 1 in 2005. If I took the test today, I would absolutely score lower than I did back then. I might even fail the test altogether.

But here’s the thing: which version of me would you want caring for your child? The 2005 version or the 2019 version?

The more I’ve thought about it, the stranger it seems that we even use this test for licensure (let alone residency selection). After all, if our goal is to evaluate competency for medical practice, shouldn’t a doctor in practice be able to pass the exam? I mean, if we gave a test of basketball competency to an NBA veteran, wouldn’t he do better than a player just starting his career? If we gave a test of musical competency to a concert pianist with a decade of professional experience, shouldn’t she score higher than a novice?

If we accept that the facts tested on Step 1 are essential for the safe and effective practice of medicine, is there really a practical difference between an examinee who doesn’t know these facts initially and one who knew them once but forgets them over time? If the exam truly tests competency, aren’t both of these examinees equally incompetent?

We have made the Step 1 score into the biggest false god in medical education.

By itself, Step 1 is neither good nor bad. It’s just a multiple choice test of medically-oriented basic science facts. It measures something – and if we appropriately interpret the measurement in context with the test’s content and limitations, it may provide some useful information, just like any other test might.

It’s our idolatry of the test that is harmful. We pretend that the test measures things that it doesn’t – because it makes life easier to do so. After all, it’s hard to thin a giant pile of residency applications with nuance and confidence intervals. An applicant with a 235 may be no better (or even, no different) than an applicant with a 230 – but by God, a 235 is higher.

It’s well beyond time to critically appraise this kind of idol worship. Whether you support a pass/fail Step 1 or not, let’s at least commit to sensible use of psychometric instruments.

Learning Objective: A Step 1 score is a measurement of knowledge at a specific point in time. But knowledge changes over time.

_

Score Report

So how’d you do?

I realize that some readers may support a pass/fail Step 1, while others may want to maintain a scored test. So to be sure everyone receives results of this test in their preferred format, I made a score report for both groups.

_

NUMERIC SCORE

Just like the real test, each question above is worth 1 point. And while some of you may say it’s non-evidence based, this is my test, and I say that one point differences in performance allow me to make broad and sweeping categorizations about you.

1 POINT – UNMATCHED

But thanks for playing. Good luck in the SOAP!

2 POINTS – ELIGIBLE FOR LICENSURE

Nice job. You’ve got what it takes to be licensed. (Or at least, you did on a particular day.)

3 POINTS – INTERVIEW OFFER!

Sure, the content of these questions may have essentially nothing to do with your chosen discipline, but your solid performance got your foot in the door. Good work.

4 POINTS – HUSAIN SATTAR, M.D.

You’re not just a high scorer – you’re a hero and a legend.

5 POINTS – NBME EXECUTIVE

Wow! You’re a USMLE expert. You should celebrate your outstanding performance with some $45 tequila shots while dancing at eye-level with the city skyline.

_

PASS/FAIL

FAIL

You regard USMLE Step 1 scores with a kind of magical thinking. They are not simply a one-time point estimate of basic science knowledge, or a tool that can somewhat usefully be applied to thin a pile of residency applications. Nay, they are a robust and reproducible glimpse into the very being of a physician, a perfectly predictive vocational aptitude test that is beyond reproach or criticism.

PASS

You realize that, whatever Step 1 measures, it is rather imprecise at measuring that thing. You further appreciate that, when Step 1 scores are used for whatever purpose, there are certain practical and theoretical limitations on their utility. You understand – in real terms – what a Step 1 score really means.

(I only hope that the pass rate for this exam is as high as the real Step 1 pass rate.)

Dr. Carmody is a pediatric nephrologist and medical educator at Eastern Virginia Medical School. This post originally appeared on The Sheriff of Sodium here.

from The Health Care Blog https://ift.tt/2po6UG6

Climate Change is not an ‘Equal Opportunity’ Crisis

Sam Aptekar
Phuoc Le

By PHUOC LE, MD and SAM APTEKAR

In the last fifteen years, we have witnessed dozens of natural disasters affecting our most vulnerable patients, from post-hurricane victims in Haiti to drought and famine refugees in Malawi. The vast majority of these patients suffered from acute on chronic disasters, culminating in life-threatening medical illnesses. Yet, during the course of providing clinical care and comfort, we rarely, if ever, pointed to climate change as the root cause of their conditions. The evidence for climate change is not new, but the movement for climate justice is now emerging on a large scale, and clinicians should play an active role.

Let’s be clear: there is no such thing as an “equal opportunity” disaster. Yes, climate change poses an existential threat to us all, but not on equal terms. When nature strikes, it has always been the poor and historically underserved who are most vulnerable to its wrath. Hurricane Katrina provides an example of how natural disasters target their victims along racial and socioeconomic lines even in the wealthiest nations. Writes TalkPoverty.org, “A black homeowner in New Orleans was more than three times as likely to have been flooded as a white homeowner. That wasn’t due to bad luck; because of racially discriminatory housing practices, the high ground was taken by the time banks started loaning money to African Americans who wanted to buy a home.” Throughout the world, historically marginalized communities have been pushed to overcrowded, poorly built, and unsanitary neighborhoods where natural disasters inflict much greater harm.

Photo from video on Democracy Now! article: “New Orleans After Katrina: Inequality Soars as Poor Continue to Be Left Behind in City’s ‘Recovery’”

The poor also tend to work more physically demanding jobs that become particularly dangerous with rising temperatures. Scientific American reported that more than 20,000 workers in Central America and southern Mexico have died from chronic kidney disease caused by extreme temperatures and unreasonable employment conditions. According to the World Health Organization, climate change is expected to cause 250,000 additional deaths per year from diarrhea, malnutrition, malaria, and heat stress.

Figure from May 2019 Somalia Humanitarian Bulletin (OCHA)

Moreover, resource-denied countries have the greatest economic
reliance on agriculture, which is by far the most vulnerable industry to
anthropogenic weather changes. Throughout the Horn of Africa, droughts have
been recorded at historically intense levels (the 2016-17 rains in Somalia are
the driest on record) and have destroyed the economic sustenance of
millions of farmers. According to Oxfam,
“The region was hit by an 18-month drought caused by El Niño and higher
temperatures linked to climate change.” They estimate that 10.7 million people
currently face severe hunger throughout Ethiopia, Kenya, Somalia, and
Somaliland as their crops and cattle die. With resource-denied countries such
as these relying so strongly on agriculture to keep their economies afloat, the
World Bank reported that climate change has the ability to send more than 100 million people into
poverty by 2030. More than 23.3 million people are already in need of humanitarian aid in the Horn
of Africa.

Volunteers in Freeport, Grand Bahama, Bahamas rescuing families during Hurricane Dorian (AP Photo/Ramon Espinosa)

Climate change has already made certain regions of the world uninhabitable and threatens the sociopolitical stability of numerous others. According to the Internal Displacement Monitoring Center, there were 18.8 million climate-related displacements in 2017 alone. The Syrian Civil War, which left millions in search of a new home and catalyzed political instability throughout the region, counts climate change among its many contributing factors.

Unfortunately, these patterns show no signs of slowing down. Globally, the number of weather-related disasters has tripled since the 1960s. In September, Hurricane Dorian battered the Bahamas and left a humanitarian crisis in its wake; thousands are homeless, without food, water, and electricity as the islands remain flooded. This was just two years after Hurricane Maria destroyed thousands of homes in Puerto Rico and three years after Hurricane Matthew killed 49 people and caused $10.8 billion in damage in North Carolina. The United States Geological Survey reports, “With increasing global surface temperatures the possibility of more droughts and increased intensity of storms will occur.”

The classification of hurricanes, tornadoes, and droughts as “natural” disasters suggests their origins are separate from human behavior, that they exist purely in the realm of nature where man has no influence. But if we look at the destruction they have caused historically, we see that their effects are almost completely determined by human action, specifically our social, economic, and political policies that continue to leave some more vulnerable than others. While Silicon Valley dreams of future technological solutions to climate change, there are social policies that we, as healthcare professionals, can address right now.

Climate change is a public health emergency, and as guardians of
the public’s health, it is our role as healthcare professionals to continuously
stress the magnitude of the situation. We must assert with medical expertise
that as “natural” disasters intensify and transform entire ecosystems, the poor
and historically underserved have been, and will continue to be, the hardest
hit. By providing honest, evidence-driven accounts of climate change and its
health consequences, healthcare professionals can elevate the voices of
millions who are left out of most contemporary climate movements and bring
their stories to the fore as we continue to fight climate change together.

Internist, Pediatrician, and Associate
Professor at UCSF, Dr. Le is also the co-founder of two health equity
organizations, the HEAL Initiative and Arc Health. 

Sam Aptekar is a recent graduate of UC Berkeley and a current
content marketing and blogging affiliate for Arc Health Justice.

This post originally appeared on Arc Health here.

from The Health Care Blog https://ift.tt/34cyZ1Z

Leveraging Time by Doing Less in Each Chronic Care Visit

By HANS DUVEFELT, MD

So many primary care patients have several multifaceted problems these days, and the more or less unspoken expectation is that we must touch on everything in every visit. I often do the opposite.

It’s not that I don’t pack a lot into each visit. I do, but I tend to go deep on one topic, instead of just a few minutes or maybe even moments each on weight, blood sugar, blood pressure, lipids, symptoms and health maintenance.

When patients are doing well, that broad overview is perhaps all that needs to be done, but when the overview reveals several problem areas, I don’t try to cover them all. I “chunk it down”, and I work with my patient to set priorities.

What non-clinicians don’t seem to realize is that primary health care is relationship-based care delivered over a continuum that may span many years or, if we are fortunate enough, decades.

Whether you are treating patients, coaching athletes, raising children or housebreaking puppies, the most effective way to bring about change is just about always incremental. We need to keep that in mind in our daily clinic work. Small steps, small successes create positive feedback loops, cement relationships and pave the way for bigger subsequent accomplishments.

Sometimes I avoid the biggest “problem” and work with patients to identify and improve a smaller, more manageable one just to create some positive momentum. That may seem like an inefficient use of time, but it can be a way of creating leverage for greater change in the next visit.

I actually think the healthcare culture has become counterintuitive and counterproductive in many ways; it helps me when I focus intensely on the patient in front of me, forget my list of “shoulds” (target values, health maintenance reminders and all of that) and first lay the foundation for greater accomplishments with less effort in the long run.

Six months ago I wrote this about how I try to start each patient visit. And in my Christmas reflection seven years ago I wrote about the moment when a physician prepares to enter an exam room:

I have three fellow human beings to interact with and offer some sort of healing to in three very brief visits. Three times I pause at the doorway before entering my exam room, the space temporarily occupied by someone who has come for my assessment or advice. Three times I summarize to myself what I know before clearing my mind and opening myself up to what I may not know or understand with my intellect alone. Three times I quietly invoke the source of my calling.

It’s all about the patient, the flesh and blood one in front of you in that very moment and what he or she needs most from us today. In physics I learned that you get better leverage when your force is applied a greater distance from the fulcrum. In human relationships and in medicine it is the opposite; the closer you are, the greater leverage you achieve.

Hans Duvefelt is a Swedish-born rural Family Physician in Maine. This post originally appeared on his blog, A Country Doctor Writes, here.

from The Health Care Blog https://ift.tt/31Seo1q

Improving the Affordable Care Act Markets (Part 2)

By JONATHAN HALVORSON

In a previous post, I described how some features of the Affordable Care Act, despite the best intentions, have made it harder or even impossible for many plans to compete against dominant players in the individual and small employer markets. This has undermined aspects of the ACA designed to improve competition, like the insurance exchanges, exacerbated a long-term trend toward consolidation and reduced choice, and there is evidence it is resulting in higher costs. I focused on the ACA’s risk adjustment program and its impact on the small group market, where the damage has been greatest.

The goal of risk adjustment is commendable: to create
stability and fairness by removing the ability of plans to profit by “cherry
picking” healthier enrollees, so that plans instead compete on innovative
services, disease management, administrative efficiency, and customer support.
But in the attempt to find stability, the playing field was tilted in favor of
plans with long-tenured enrollment and sophisticated operations to identify all
scorable health risks. The next generation of risk adjustment should truly even
out the playing field by retaining the current program’s elimination of an
incentive to avoid the sick, while also eliminating its bias towards incumbency
and other unintended effects.

One important distinction concerns when to use risk adjustment to balance out differences that arise from consumer preferences. For example, high deductible plans tend to attract healthier enrollees. Without risk adjustment these plans would become even cheaper than they already are, while more comprehensive plans that attract sicker members would get disproportionately more expensive, setting off a race to the bottom: more and more people are pushed into the plans with the least benefits, while the sickest stay behind in more generous plans whose premium cost spirals upward. Using risk adjustment to counteract this effect has been widely beneficial in the individual market, along with other features like community rating and guaranteed issue.

However, in other cases where risk levels between plans differ due to consumer preferences, risk adjustment may not be helpful. For example, it has been documented that older and sicker members have a greater aversion to change (switching to a less familiar plan) and to cost-control constraints such as narrow networks, even when those constraints do not undermine benefit levels or quality of care. These aversions tend to make newer plans and small network plans score as healthier. Risk adjustment then forces those plans to pay a penalty that in turn forces their enrollees to pay for the preferences of others.

If new plans, or innovations to control costs and avoid
over-utilization such as narrow networks and tiered pharmacies were not in the
public interest, then those penalties could be justified. But since the ACA
relies on contracting strategies available to private plans (not rate setting)
to control costs, penalizing such innovations through risk adjustment will
undermine competition and efforts at cost control. These constraints may also
actually improve the quality of care and quality of life by avoiding
unnecessary care and the complications it can create, and better coordinating
care among more integrated providers. There is evidence in Massachusetts and elsewhere that risk adjustment penalties due to these preferences harm the public interest.

A small insurer that contracts with the same providers as a
large one is going to pay more for each service provided, especially at
hospital systems and medical groups that have a strong position when negotiating
rates. A careful analysis
of the New York market found that this was likely the single largest driver of
losses among start-up health plans, but the risk adjustment model makes no
accommodation for it. The only way for a smaller plan to partially counteract this large-plan cost advantage is to have a narrower network and form a close alliance with a limited number of provider partners. In doing so, that insurer will exclude some important institutions in the local area, and the people who are least amenable to this tend to be older and sicker. The risk adjustment model makes no allowance for this selection preference.

The increased willingness of younger and healthier people (and small businesses) to try something new could help offset the advantages of size and incumbency, but the current risk adjustment model throws up roadblocks to this opening for innovation.

The measurement methodology itself also creates issues. As mentioned in my previous post, a dominant insurer inevitably pulls the average statewide risk score closer to its own level than to its competitors’, and what seems like a small payment to it is huge to its competitors as a share of their premium. The largest incumbent plans also have the deep historical data needed to confirm each year what conditions (scorable risks) their members have, and sophisticated operations to find and confirm new diagnoses every year, as the ACA requires. This results in their receiving risk scores that are higher than other plans’ for reasons that do not reflect true underlying differences in risk.

So, what can be done? Below are three suggestions for
further discussion:

  • Add Value Control Factor – A Value
    Control factor could be established for select innovations that address cost
    and quality but may be selected against by enrollees with higher costs. Such
    factors could include small integrated networks, restriction to in-network
    benefits, reliance on a PCP to coordinate care, and tiered formularies. The
    amount of any risk transfer payments from a plan with eligible cost/quality
    controls could be reduced by this factor, in order to reduce penalties for
    innovative solutions that align incentives more closely between payers and
    providers, avoid waste and inflation, and improve collaboration in care
    management. This factor would be empirically established in each state, and if
    there is no measured impact then no factor would be applied. Plan designs that
    simply increase member cost-sharing would not be eligible for a Value
    Control factor.
  • Provide Relief for Small Plans – If a
    plan in a highly
    concentrated
    insurance market has less than a certain market share (e.g., 5%),
    it should be allowed relief from some of the burden of risk adjustment
    transfers. Plans could be required to opt in to this program in advance to avoid
    gaming, and market share should be set at the level of the insurer in a
    particular market (namely the individual or small group market) rather than a
    specific insurance product. This relief is necessary to counter some of the
    advantages larger plans typically have from their long-tenured membership,
    favorable contracted rates with providers, and close relationships with brokers
    who steer coverage decisions.

The ACA has done well given the limitations it has faced. Millions more Americans have insurance than before, and those with existing health conditions are freed from the fear of being rejected for coverage simply because they are sick. Even under an administration hostile to it, enrollment has been stable for those receiving subsidies on the exchanges. However, the unsubsidized individual market and the small group market have been shrinking, while the dominant insurers keep getting more dominant. We face the prospect of stagnant marketplaces with shrinking competition, higher premiums and fewer enrollees. HHS has repeatedly invited states to submit their own proposals for alterations to the federal risk adjustment program to suit the conditions of each state, though so far only Alabama (in 2018) and New York (in 2017, as an emergency measure) have done so. It is better late than never for states to take HHS up on its offer.

Jonathan Halvorson is a Senior Healthcare Consultant at Sachs Policy Group and has a long-term interest in the transformative potential of technology on the health care system.

from The Health Care Blog https://ift.tt/2MLy6aO

Aussie Series: Health Tech Workforce

By JESSICA DAMASSA, WTF HEALTH

A few weeks ago, WTF Health took the show on the road to Australia’s digital health conference, HIC 2019. We captured more than 30 interviews (!) from the conference, which is run by the Health Informatics Society of Australia (hence the HISA Studio branding), and I had the opportunity to chat with most of the Australian Digital Health Agency’s leadership, many administrators from the country’s largest health systems, and a number of health informaticians, clinicians, and patients. I’ll be spotlighting a few of my favorites here in a four-part series to give you a flavor of what’s happening in health innovation ‘Down Under.’ For much more, check out all the videos on the playlist here.

This is the final post in our series, and in it I’m sharing four interviews on the theme of the future of the health tech workforce. This was a huge topic of conversation at HIC19 — dominating the discussion more than at any other conference I’ve been to in the US or Europe — and what struck me was all the different ways Aussies are looking at ‘workforce preparedness.’ 

There’s Kerryn Butler-Henderson, Associate Professor for Digital Health at the University of Tasmania, who is leading a Health Information Workforce Census that will take place in 2020. She’ll be “counting” the health data analysts, healthcare informaticians, health information managers, clinical coders and health librarians (more on what that job does in the interview) in not only Australia and New Zealand, but also the US, UK, Canada, and the Middle East to give us a larger look at the demographics of this part of the industry. A surprising take-away from her previous work in this space? More than 70% of health information workers are over the age of 45, signaling a shortage that could come up pretty quickly if we don’t start doing a better job of recruiting for the field.

Amandeep Hansra, a clinician-turned-entrepreneur, spoke to me about an organization she launched called Creative Careers in Medicine, which gives doctors and other clinicians an “out” from a traditional career in medicine for those who would much rather launch a startup or serve as a CMO in a health tech company. The brand-new org is just 18 months old and has already attracted 5,000 members! What does that say about clinician burnout?! Yikes. 

And, finally, here are interviews from two different professional organizations that are tackling preparedness in two different ways. 

First, there’s Adam Phillips, HISA’s Workforce Director, who was a pharmacist and first-hand witness to the lack of training healthcare organizations have provided to-date to prepare their current employees to adapt to the “digital transformation” of healthcare. Now a pretty outspoken advocate from the ‘front lines’ of healthcare, Adam is leading HISA’s workforce initiatives and has some bold plans.

Meanwhile, Mark Brommeyer of the Certified Health Informatician Australasia (CHIA) board talks about the need for certification programs for health informaticians — the diverse group of clinicians, technicians, policy makers, academics and researchers working on the digital transformation of healthcare — in order to keep today’s workforce up-to-speed. 

from The Health Care Blog https://ift.tt/33PNEzM

$2 Trillion+ in New Taxes for Single Payer, or $50 Billion to Strengthen ObamaCare? Next Question, Please

By BOB HERTZ

It is not wise for Democrats to spend all their energy
debating Single Payer health care solutions.

None of their single payer plans has much chance to pass in 2020, especially under the limited
reconciliation process. In the words of Ezra Klein, “If Democrats don’t have a
plan for the filibuster, they don’t really have a plan for ambitious health
care reform.”

Yet while we debate Single Payer – or, even if it somehow
passed, wait for it to be installed — millions of persons are still hurting
under our current system.

We can help these people now!

Here are six practical programs to create a better ACA.

Taken all together they should not cost more than $50
billion a year. This is a tiny fraction of the new taxes that would be needed
for full single payer. This is at least negotiable, especially if Democrats can
take the White House and the Senate.

Program No. 1 – Kill the Subsidy Cliff

If your annual income as a single taxpayer is under $48,560, you currently get a subsidy when purchasing a qualified plan. At age 60, your subsidy could be as much as $7,000 a year depending on the state. You would be paying $400 a month for a policy whose full price is $1,024 a month.
But if your annual income is $49,000 or more, you get no subsidy whatsoever.

Your premium is the full $1,024 a month – which amounts to about 30% of your after-tax income, and for what? For a policy with deductibles and copays that could easily create a $6,000 debt after hospitalization? You might be better off staying uninsured, saving the $12,000 in premiums, and just begging the hospital for discounts if you become seriously ill.

The solution is to guarantee that no one, of any income, has to pay more than 9.5% of their income for insurance. The person earning $49,000 would get virtually the same subsidy as the person earning $1,000 less.
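The cliff arithmetic above can be sketched in a few lines. This is illustrative only, not official ACA subsidy math: the $48,560 threshold, the $1,024/month premium, and the 9.5% cap are the article’s example figures, and the function names are hypothetical.

```python
CLIFF_INCOME = 48_560        # article's example cliff for a single taxpayer
FULL_PREMIUM = 1_024 * 12    # article's example: $1,024/month at age 60

def subsidy_with_cliff(income):
    """Current rule: one dollar over the cliff and the subsidy vanishes."""
    if income > CLIFF_INCOME:
        return 0.0
    return max(0.0, FULL_PREMIUM - 0.095 * income)

def subsidy_with_cap(income):
    """Proposed rule: no one pays more than 9.5% of income for insurance."""
    return max(0.0, FULL_PREMIUM - 0.095 * income)

# Earning $1,000 more currently costs this person the entire subsidy:
print(subsidy_with_cliff(48_000), subsidy_with_cliff(49_000))  # 7728.0 0.0
print(subsidy_with_cap(48_000), subsidy_with_cap(49_000))      # 7728.0 7633.0
```

Under the capped rule, the two neighbors’ subsidies differ by only $95 (9.5% of the $1,000 income difference) instead of $7,728.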

This could impact 2 to 3 million persons, many of them over age 50. Right now they are either getting crushed by unsubsidized premiums, staying uninsured, or buying risky short term coverage.

The annual cost of these greater subsidies should be in the range of $8-$15 billion a year. This is equivalent to about one week of Medicare spending.  

Worth noting:

These individual market participants are virtually the
only Americans who do NOT get a subsidy for health insurance. The old get
subsidies, the poor get subsidies, veterans get subsidies, children get
subsidies, and most employees get subsidies. 
Many of these subsidies equal or exceed the $7,000 in our example here.

Program No. 2 – Kill the Family Glitch

More and more employers no longer pay for full family
coverage.  They only subsidize the
premium of the employee. They charge extra (and often a lot extra) to add a
spouse and children.

If the employee’s spouse has a good job of their own, this
may be acceptable to all.

However — if the spouse is a homemaker or has only part-time work, this can be a huge problem. It may cost $1,000 a month or more to add a spouse and children to a corporate plan.

Due to the “family glitch” in the ACA, families are not eligible for premium subsidies in the exchange if the employee could get employer-sponsored coverage just for him or herself for less than 9.86 percent of the household’s income.

It doesn’t matter how much the employee would have to pay to purchase family coverage. The family members are not eligible for exchange subsidies.

The dependents can either pay full price in the individual
market, or pay whatever the employer requires to cover the family on the
employer’s plan, despite both options being financially unrealistic.
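The glitch can be sketched as a simple affordability test. This is a simplification for illustration; only the 9.86% threshold comes from the text, while the function name and dollar figures are hypothetical.

```python
AFFORDABILITY_THRESHOLD = 0.0986  # the ACA's self-only affordability test rate

def family_glitch(household_income, self_only_premium, family_premium):
    """True when the family is locked out of exchange subsidies:
    SELF-ONLY employer coverage is 'affordable', so the (much higher)
    family premium never enters the eligibility calculation."""
    limit = AFFORDABILITY_THRESHOLD * household_income
    self_only_affordable = self_only_premium <= limit
    family_unaffordable = family_premium > limit
    return self_only_affordable and family_unaffordable

# $50,000 income: employee-only coverage costs $3,000/yr (deemed affordable),
# family coverage costs $12,000/yr -- yet the family gets no subsidy.
print(family_glitch(50_000, 3_000, 12_000))  # True
```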

A spouse with children and a modest family income should
get that ACA subsidy. 

Somewhere between three and six million people are impacted by the family glitch. They are disproportionately middle income, because higher-income workers are more likely to work for companies that heavily subsidize coverage for dependents.

The cost of new subsidies could be $20 billion a year. But note the following:

As recently as the 1950s, we actually wanted one parent to stay home with young children. If we really believe in family values, we can show it by spending actual public money on families. In Europe, children’s health insurance is basically free; an expanded ACA is the least we can do.

Program No. 3 – Improve the ACA Policies

We must address the high deductibles and out-of-pocket
limits now found in most ACA insurance contracts.

Here are several reforms we can impose immediately:

A. Emergency care must be exempt from the deductible.
(Co-pays up to $250 are acceptable. Co-insurance for emergency care is not
acceptable.)

B. Drugs must have their own deductible, separate from the overall plan deductible. In other words, drug coverage must start after perhaps $250 in drug expenses, and not wait until the full plan deductible is met.

Here too, co-pays are acceptable but co-insurance is not.

If a person’s drugs cost $10,000 a month and their
co-insurance is 20%, that is not acceptable.

(An appalling 40% of current ACA plans do not have
separate drug deductibles.)

C. Out of pocket maximums must be related to family
incomes. A family maximum of $14,300 is much too high for a $50,000 annual
household income. Their maximum should be no more than $5,000.

D. Out of network care must count toward out of pocket maximums. The plan deductible must also count against out of pocket maximums.

All these steps will reduce the chance that a person with
insurance will go deeply into debt.
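The income-scaling idea in item C above can be illustrated as follows. The 10% rate is inferred from the article’s $5,000-on-$50,000 example, and the $14,300 figure is the family maximum cited above; neither is a proposed statute.

```python
STATUTORY_FAMILY_MAX = 14_300  # current family out-of-pocket ceiling cited above

def income_scaled_oop_max(household_income, rate=0.10):
    """Cap a family's out-of-pocket exposure at a share of income,
    never exceeding the existing statutory family maximum."""
    return min(STATUTORY_FAMILY_MAX, rate * household_income)

print(income_scaled_oop_max(50_000))   # 5000.0  (the article's example)
print(income_scaled_oop_max(200_000))  # capped at 14300
```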

Granted, such provisions will raise insurance premiums by about 10%. However, if premiums go up, subsidies go up. If subsidies are universal at all income levels, no one is worse off. The higher subsidies should result in $7 to $10 billion of extra federal spending.

This is the most equitable way to improve the quality of health coverage. We cannot wait for better insurance plans to arise from free enterprise or competition. The only time insurance companies ever created attractive policies was when they could impose strict medical underwriting.

(European nations, by contrast, have had low-deductible, no-exclusions health insurance for decades – primarily by using price controls and mandates.)

Program No. 4 – Extend Medicare’s Consumer Protections to All Americans

We might not be able to give Medicare benefits to all, due
to the taxes required.

However, we can extend Medicare’s protections to
all, including

a. Protection from balance billing

In general, providers cannot charge seniors more than 115% of the approved Medicare amount. Surprise bills and chargemaster bills simply do not exist in Medicare.

b. If a Medicare claim is denied – and actually this
happens a lot – the patient is not automatically liable for the bill.

If the patient could not have been expected to know that the claim might be denied, then they will not owe for the care. The provider takes the loss.

Even veterans can face brutal debts if their claims are
denied. We need to enforce limited liability for everyone, not just seniors.

Program No. 5 – Assistance with Medical Debts

The ACA has shown that we cannot force, or bribe, or incentivize all Americans to get comprehensive coverage.

There is no chance that America will establish a free national
health service or free public hospitals.

There is also zero chance that all Americans will save $10,000 for unforeseen medical costs, or just to cover their deductibles. We are not Singapore.

We can, however, make medical debt less common and more manageable, as follows:

1. Outlaw high deductibles for emergency care and prescription drugs (see Program No. 3, above).

2. Lower the out-of-pocket maximums (see Program No. 3,
above)

3. Wipe out patient liability if a claim is denied (see Program No. 4, above).

4. Let the uninsured pay Medicare rates for hospital care. They might still have medical debt, but much less of it. Chargemaster billing would be outlawed.

5. All debts in excess of 20 percent of household income should be forgiven. The federal government could pay hospitals perhaps twenty cents on the dollar for their largest patient debts. Bills would be recalculated based on the Medicare rate at the time.

6. All debts over seven years old should be forgiven (and never, ever sold to collection agencies).

7. Surprise medical bills must be forgiven.

8. In case of emergencies, no balance bills are allowed. What the insurance company pays is all the hospital and the doctors will get – basically, this is “mandatory assignment”.

9. No lawsuits, liens, or attorney collection fees can be
permitted. The state should not be complicit in immiserating its own citizens.

Program No. 6 – Enforce Consumer Laws That Already Exist

1. Out-of-network medical bills are already illegal.

Insurers sell policies by claiming that certain hospitals are in their network. Hospitals then boost their admissions by convincing patients they are in the insurer’s network. However, when the unwary patient gets surprised at billing time, it turns out these claims weren’t exactly true.

This is fraud; the Federal Trade Commission could punish these offenses right now.

2. Chargemaster bills to emergency patients are already illegal.

When an actual contract cannot be formed – as in medical emergencies – the courts have a long history of constructive intervention. The doctrine of quasi-contract would limit charges to the amounts that are actually and customarily paid to and accepted by hospitals.

Courts can force hospitals to accept an average market price right now, versus the dishonest and opportunistic chargemaster rates.

3. Predatory pricing for drugs could already be subject to antitrust enforcement.

The antitrust laws are directed against harmful conduct that extracts money from consumers and gives it to producers for no other reason than that they are in a position to take it.

In hesitating to use antitrust against excessively priced drugs, the United States is an international outlier. Governments outside the United States are using their antitrust laws to rein in excessive drug pricing as an abuse of dominance. Even a conservative position on antitrust would allow for legal actions against drug companies.


SUMMARY

We will not achieve a more equitable health system
without new laws and federal action.

As noted by Austin Frakt:

“Now we have conservatives advocating for ‘SwissCare’, while ignoring that Switzerland has an individual mandate, more regulations, price fixing, and lower caps on out of pocket spending.

Also you see conservatives advocating for Singapore’s health care system without
any real understanding of it. Singapore’s system has massive subsidies for
nursing homes, rehabilitation care, and home-based care. It requires mandatory
savings – 36% of wages spread over various accounts. The government also
provides a basic level of care that’s heavily, heavily subsidized. And here’s
the kicker – it relies on tons of government intervention in the market to keep
costs down. They use centrally planned and fixed budgets, they control the
acquisition of new technology, they regulate the number of students and
physicians, they use purchasing power to buy drugs more cheaply, and they have
an employer mandate for foreign workers.”

Eventually we may get it into our thick American skulls that ‘forced savings’ and mandatory insurance are good things. Derek Thompson commented on pensions and health care in The Atlantic:

“In a world obsessed with the wizardry of behavioral nudges, perhaps policymakers should consider putting away the magic wand and just do the paternalistic thing: Force people to save more, by expanding Social Security or by creating new forced savings policies. It should be harder for Americans to not have financial security when they retire. Indeed, the countries that finish above the U.S. in retirement security, like Switzerland and Norway, not only have much higher taxes but also benefit from the availability of public-health options and cheaper education in their prime-age years, which means they don’t have to spend as much out of pocket on insurance and college. In Germany, social-insurance programs provide for medical care and an even more substantial level of retirement pension. It sounds counterintuitive and nearly paradoxical, but maybe the only way to make Americans richer in the long run is to take more money away from them.”


Bob Hertz is a retired insurance broker. He learned about health care from Uwe Reinhardt, Joseph White, Dr. Robert Evans, and George Halvorson, a fellow Minnesotan.

from The Health Care Blog https://ift.tt/32Gr6S1

Improving the Affordable Care Act Markets (Part 1)

By JONATHAN HALVORSON, PhD

With each passing year, the Affordable Care Act becomes
further entrenched in the American health care system. There are dreams on both
the far left and far right to repeal and replace it with something they see as
better, but the reality is that the ACA is a remarkable achievement which will
likely outlast the political lifetimes of those opposing it. Future
improvements are more likely to tweak the ACA than to start over from scratch.

A critical part of making the ACA work is for it to support
healthy, competitive and fair health insurance markets, since it relies on them
to provide health care benefits and improve access to care. This is
particularly true for insurance purchased by individuals and small employers,
where the ACA’s mandates on benefits, premiums and market structure have the
most impact. One policy affecting this dynamic that deserves closer attention
is risk adjustment, which made real improvements in the fairness of these
markets, but has come in for accusations that it has undermined competition.

Risk adjustment in the ACA works by compensating plans with
sicker than average members using payments from plans with healthier members.
The goal is to remove an insurer’s ability to gain an unfair advantage by
simply enrolling healthier people (who cost less). Risk adjustment leads insurers
to focus on managing their members’ health and appropriate services, rather
than on avoiding the unhealthy. The program has succeeded enormously in bringing
insurers to embrace enrolling and retaining those with serious health
conditions.

This is something to celebrate, and we should not go back to
the old days in which individuals or small groups would be turned down for
health insurance or charged much higher prices because they had a history of
health issues. However, the program has also had an undesired effect in many states:
it further tilted the playing field in favor of market dominant incumbents.

The national competitive picture has gotten worse since the ACA was
passed. Today the top three insurers enroll at least 80% of the individual
market in 37 states (up from 33 states in 2011) and the top three insurers
enroll 80% of the small group market in 41 states (up from 37 states in 2011).
In the small group market, the number of insurers enrolling over 1,000 lives declined
nationally from 506 in 2012 to 409 in 2016. Even more starkly, 19 of the 23
co-op insurers created with funding from the ACA are now defunct. These co-ops
failed for a number
of reasons, one of which was the annual risk adjustment payments
they had to make to insurers that had enrolled the majority in their local
markets for decades, mostly BlueCross/BlueShield plans. Since there are many
factors at work here, it is natural to ask how risk adjustment could be
implicated.

There are at least three potential problems:

  • Some plans have advantages in maximizing risk
    scores, which may not reflect differences in true underlying member population
    risk or lead to better care;
  • The risk adjustment model makes no allowance for
    plan size when one or two insurers dominate a market; and
  • The risk adjustment model does not account for the
    fact that plan designs with tighter cost management methods are often avoided by
    less healthy people, creating adverse selection.

Regarding the first point, risk scoring has become its own
cottage industry, to which plans devote substantial resources out of necessity
(if you don’t find and record as many risks as your peers, you in effect pay your
competitors for each risk you haven’t found). The effort must be repeated each
year, or else the diagnoses cannot be counted towards the risk score. The risk
coding must document a care plan, but the reward amount is independent of any additional
services delivered. Plans with more sophisticated data mining and outreach
operations to confirm diagnoses will receive higher risk scores than plans with
less sophisticated operations, even if underlying health conditions are the
same. In addition, larger plans with long-tenured membership have an advantage,
since they have multiple years of diagnosis and claims data to analyze for
eligible conditions and identify likely diagnoses to confirm. Also, when large
plans receive new members, the individuals are more likely to already be in
their databases from a previous enrollment than is the case for small plans.

Even if a plan can purchase outside services to mine the
data and find these health conditions, having every insurer do it independently,
over and over each year, does not appear to be the most efficient use of
resources and it creates winners and losers based on access to data.

A very large gap in plan size can create still more issues. In
Alabama, the local BlueCross/BlueShield plan has long been the dominant insurer.
The ACA helped drive
its share of the small group market from 90% up to 97%, while the number of
competitors dropped from six to three. Responding to its crisis, Alabama requested
and received a 50% reduction in the size of risk adjustment transfers.

Part of the problem in situations like this is simple math:
risk adjustment transfers are based on each insurer’s deviation from the
average statewide risk score. A larger insurer will always be more insulated
from random variance in risk, and a very large insurer will necessarily drive
the statewide average to be much closer to its risk level than to its
competitors. Consider an Alabama-like example in which a large insurer has 90% market
share and a 10% higher risk score than the average of its competitors. Because
the number of enrollees in each plan matters when calculating the average, the statewide
average risk score will be only 1% below the large insurer’s risk level—but
9% above competitors’. To oversimplify a bit, the small insurers would be
forced to pay 9% of their premium to the giant.
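The arithmetic above can be sketched in a few lines of Python. The shares and risk scores are the illustrative numbers from the example, not real data, and the transfer rule is a deliberate simplification of the actual ACA formula:

```python
# Illustrative numbers from the 90/10 example above; the real ACA
# transfer formula has more terms, but the sign and rough magnitude
# of the transfers work the same way.
plans = {
    "dominant":    (0.90, 1.10),  # (market share, relative risk score)
    "competitors": (0.10, 1.00),
}

# The statewide average risk score is enrollment-weighted, so the
# giant drags the average toward its own level.
avg = sum(share * score for share, score in plans.values())  # ~1.09

# Simplified transfer: each plan's deviation from the statewide
# average, expressed as a fraction of average premium (positive =
# receives money, negative = pays into the pool).
transfers = {name: score - avg for name, (share, score) in plans.items()}
# dominant ends up ~1% above the average; competitors ~9% below
```

The weighting is the whole story here: the same 10% risk gap produces a roughly 1% receipt for the giant and a roughly 9% payment for everyone else.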

Of course, if the small insurers had members with 10% higher
risk in this example, they would receive 9% in premium while the dominant
insurer would pay 1% of its premium…though this rarely happens. This is for several
reasons, such as the tendency of the dominant plan to have longer tenured
members as the “blue chip” plan in an area, or as mentioned above the advantage
of having a larger data repository of state residents. Both of these make it
easier to capture every diagnosis. In addition, these plans often have older and
sicker members than small plans, which tend to have smaller networks, engage in
more active cost control measures, and be less familiar names, which older and
sicker consumers select less often. There is a selection bias
of higher-risk members towards plan designs which do the least to
control total costs—at least, with respect to measures that matter in the
selection process. These cost-inflating plans include more out-of-network
benefits and more high-cost providers in their networks, and do less
utilization management.

This point applies regardless of the size of the plan, and
occurs even across a single insurer’s products. For example, in Pennsylvania
Independence Blue Cross operates under two ACA plan IDs, one of which is for
its Keystone HMO (think smaller network, no out-of-network benefit, referrals
to see a specialist) and the other for its PPO (think no referrals and extensive
out-of-network benefits). The HMO owed $74M, while the PPO received $82M
in statewide risk adjustment payments in 2018. Essentially, risk adjustment in
its current form undermines important cost reduction strategies. Since rates
are generally required to be actuarially justified after risk adjustment is
taken into account, this forces the HMO to have higher rates than it otherwise
would have, suppresses enrollment in the plan trying to reduce costs, and
subsidizes those who enroll in the more inflationary plan.

There are also difficulties faced by smaller plans when competing
in an oligopolistic market: established giants have market power, deep
relationships with providers, employers and brokers, and high brand recognition
and familiarity. Going back to the case of an insurer with 90% of the market
and 10% higher risk score, a non-dominant insurer could not be expected to
price its products 9% higher to cover the ACA risk transfer cost. These plans
generally have to price low to grow, and forcing the premium higher to reflect
the statewide average health care cost may cause a plan to lose what little
business it has. Despite widespread belief to the contrary, health insurance is
a low margin business, with profits typically in the range of 3-5%. A
consistent transfer amount anywhere near 9% can wreak havoc, and under the ACA
risk adjustment program the transfers are sometimes much higher. Many of these
points apply not only to very small insurers, but to larger insurers that have
traditionally had a small or no presence in a given market (such as an insurer
that has had a presence in Medicare Advantage but seeks to expand in the ACA
individual or small group market).

New York State couldn’t be more different from Alabama in many
ways, but it is undergoing a similar dynamic. In the small group market, one
company, UnitedHealth, has long been by far the largest player, with roughly half
of the statewide small group enrollment and over 70% of the greater NYC market.
Inspired by the ACA, two new insurers (Health Republic and CareConnect) initially
made a splash and were able to grab market share. They enrolled everyone they
could…which ended up being disproportionately younger, healthier people willing
to switch plans for something new and unfamiliar in order to save some money. Other
plans in the market have attempted to grow by curating the network to a select
group of providers to reduce premium, and in the process have also been left
with a healthier population.

To be clear, risk adjustment is critically important to
balance out differences that arise from some consumer preferences. For example,
high deductible plans tend to attract healthier enrollees who don’t expect to
use their insurance. Without risk adjustment, these plans would become even
cheaper than they already are, while more comprehensive plans that attract sicker
members would get disproportionately more expensive, setting off a downward
spiral that pushes more and more people into plans that have the least
benefits. Risk adjustment is a fantastic way to prevent this sort of
self-destruction of insurance markets. However, in other cases where risk
levels differ due to preferences, such as an aversion among older and sicker
people to unfamiliar plans or narrow networks, forcing insurers to pay a
penalty that completely compensates for preferences can be harmful to
innovation and the public interest.
It is still important to allow for some risk adjustment in
these cases, but fully compensating for the selection bias can create a
perpetual penalty for types of plans that are actually helpful to coordinate
care and control costs.
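The “downward spiral” described above can be made concrete with a toy simulation. All numbers here are invented for illustration; the mechanics, not the magnitudes, are the point. Without risk adjustment, a comprehensive plan holding a sicker mix must price at its enrollees’ average cost, healthy members peel off to cheaper plans, and the premium ratchets upward each year:

```python
# Toy adverse-selection spiral with no risk adjustment. A "rich" plan
# starts with 700 healthy members (expected cost 100) and 300 sick
# members (expected cost 500). Each year it prices at its enrollees'
# average cost; whenever the premium exceeds what healthy members are
# willing to pay (150 here), half the remaining healthy members leave.
healthy, sick = 700, 300
COST_HEALTHY, COST_SICK = 100, 500
WALKAWAY_PREMIUM = 150

premiums = []
for year in range(5):
    premium = (healthy * COST_HEALTHY + sick * COST_SICK) / (healthy + sick)
    premiums.append(round(premium, 2))
    if premium > WALKAWAY_PREMIUM:
        healthy //= 2  # half the remaining healthy members switch out

print(premiums)  # the premium climbs every year as the pool gets sicker
```

Risk adjustment breaks this loop by transferring the cost difference back to the plan holding the sicker pool, which is exactly the failure mode the ACA program was designed to prevent.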

Risk adjustment is by no means the only systemic issue that has
caused companies to exit the market in almost every state, but it has compounded
other problems and persists year after year. For the smaller plans still
competing in Alabama and downstate New York, nearly all are facing large annual
risk transfers to the dominant plans. Some insurers are still losing 20% or
more of their total premium in 2019, well beyond what they could make up for
with premium hikes without losing membership. Meanwhile, United alone received
about one billion dollars in risk adjustment transfers in New York from 2014 to
2018.
Less extreme versions of these imbalances occur in state after state, from New
Mexico to Illinois to Vermont.

The bottom line is that risk adjustment is a crucial tool to direct the focus of health plans and improve fairness and stability. But rewarding insurers that are better at identifying health conditions, independent of overall quality and health outcomes, and correcting for all of the differences in risk scores in a state, can misdirect that focus and undermine the ability of innovative insurers and progressive products to compete in the marketplace. More on this, and on what a solution could look like, in a following post.

Jonathan Halvorson is a Senior Healthcare Consultant at Sachs Policy Group and has a long-term interest in the transformative potential of technology on the health care system.

from The Health Care Blog https://ift.tt/2pEmDkf