Health in 2 Point 00, Episode 96 | Amazon Care, Echo, OneDrop & more

Today on Health in 2 Point 00, Jess and I catch up on loads of news in health tech. In this episode, Jess asks me about Amazon Care, doing telehealth, house calls, urgent care, drug delivery for their employees—could it change health delivery? Also, Echo raises $20 million for their smart stethoscope; Bayer leads a $40 million round for OneDrop’s blood glucose meter; GoodRx buys telehealth company HeyDoctor; Rock Health invests $10 million in InsideRx and an undisclosed amount in Arine; and Peloton IPOs today with everyone looking at it as a healthcare company (but no, it’s not). We end on some gossip, so tune in. —Matthew Holt

from The Health Care Blog https://ift.tt/2mmatLv

The Opportunity in Disruption, Part 4: Success Strategies for Provider and Payer CFOs

By JOE FLOWER

The system is unstable. We are already seeing the precursor waves of massive and multiple disturbances to come. Disruption at key leverage points, new entrants, shifting public awareness and serious political competition cast omens and signs of a highly changed future.

So what’s the frequency? What are the smart bets for a strategic chief financial officer at a payer or provider facing such a bumpy ride? They are radically different from today’s dominant consensus strategies. In this five-part series, Joe Flower lays out the argument, the nature of the instability, and the best-bet strategies.

Providers:

  • Cost. Get serious control of your costs. Most providers I talk to are dismissive. “Yeah, yeah, you’ve been saying that for 10 years. We’re on it, believe me.” But I don’t believe them. Observation of the industry makes it clear that most healthcare providers have not gone after costs with nearly the ferocity of other industries. Some providers that I have worked with and interviewed over recent years have gotten their full cost of ownership down to a level where they can survive on Medicaid rates (that’s not a typo, I mean Medicaid) and build bigger, stronger systems at the same time. They have proven it can be done. But this is far from the industry norm.
    In the new competition, getting your costs seriously down is not the way to win. It’s just the price of admission. In the new competition, any entity that can deliver any particular service at a lower price will steal that business from you. You need to be that entity.
  • Overtreatment and waste. Do a deep and honest analysis of how much of your book of business is actually not effective, not necessary, does not deliver value or satisfaction to the customer—because that book of business is likely to wither away under alternative financing arrangements. Keep in mind that among various studies, the low estimate of how much of healthcare delivers no real value to the customer (an estimate by doctors about their own specialties) is 20%. The high end (in a PricewaterhouseCoopers study) is 54.5%. Most studies and estimates cluster around 35%.
  • Abandon monopolistic practices. An “all system” fee-for-service contract hidden from public scrutiny is smart, but not wise. It may help this year, maybe next year, but in the longer run it creates vulnerability to political attack, legislation, and lawsuits, and also to shifts in the market. A semi-monopoly position allows you to charge a premium for your products, but it locks you into that ability to charge a premium. As the market shifts and finds ways around paying high prices, you will find that you will be forced to compete at the lower prices, but you will not have developed the partnerships, the strategies, and the product lines to do that. If there is no competition in your area, then compete with yourself to forestall lower-cost competition developing.
  • Birth your competition. The growth area in the new healthcare competition will be: “How to cut into the hospital’s bottom line by keeping people out of the hospital.” The most competitive business models in healthcare will be in the business of cutting off hospitals’ revenue streams upstream.
    Get into this business model, even if you are a hospital. Especially if you are a hospital. Get into this business and get better at it than any potential competition. Create high-performance bundled programs with deeply managed costs well below the industry median. Get into contracts with large buyers for particular niches in which you give financial and quality performance guarantees. If you can’t guarantee that you can drop the cost and improve the quality, you will lose that business to someone else who can show that track record and give that guarantee.
     
    Put yourself into all the business models that are disrupting you, such as outpatient clinics, community clinics, mobile vans, mini-hospitals, stand-alone emergency departments, onsite clinics, personalized management of complex cases, direct-pay primary care and others. Make these businesses able to compete for market share by unshackling them from hospital pricing and facilities fees.
     
    Risky? Sure, but you do not want to end up being just the super-expensive job shop at the end of the line that every single customer and buyer is making every effort to avoid.

Success Strategies for Payer CFOs

Read the list of strategies that healthcare providers have to go through to survive and serve their market. Think through all the assets, connections, and information that you have, and ask yourself two questions:

  • How can we help providers succeed at those strategies?
  • How can we foster competition against them in all those strategies?

Head toward TPA: Employers are frustrated and headed toward disintermediation. So be in the business of helping them realize the full potential of self-funded healthcare. Third-party administration (TPA) can help employers, unions, pension plans and other buying agents to reduce their costs and then push those savings directly toward increased value for their healthcare consumers.

Aggressively push all solutions that are not fee-for-service. Use your research expertise and bargaining power to bring your customers bundled services, medical tourism, full capitation, mini-capitation, reference pricing, and onsite and near-site direct-pay clinics, as well as the innovations mentioned above.

Ignore borders. In some domains, the best value for your customers may be found in Mexico, Canada, Europe, the Caribbean, or even South Asia. U.S. healthcare providers need to be in competition with these foreign providers, and increasingly are. Help your customers find that value. They are already seeking it on their own.

Guarantee the financial security of your customers. It does not help you if your customers feel like they are in an adversarial relationship with you, that you and the healthcare providers are lying in wait to catch them out and cost them vast sums. Today, most of your customers fear exactly that. If your customers know that you will defend them vigorously against surprise billing, balance billing, and hidden out-of-network costs, you have a superior product. Get aggressive in your negotiations with providers to get contracts that prevent them from using these egregious practices on your customers. Go to bat for your customers. Honor your contracts. Honor the promise of the happy, healthy faces on your billboards, not the “gotcha” traps in the fine print.

Fund population health and healthy communities, in cooperation with existing healthcare providers, but even in competition with them if necessary. If you put yourself in a position to do better financially if your covered population is healthier, then the opioid crisis, rampant diabetes, and other population health problems turn into huge opportunity spaces.
 
We can already see this happening in various emerging Medicaid programs that are becoming more responsive to the social determinants of health. In a number of states, Medicaid Managed Care Organizations are required to build their plans not just on providing medical care but also on offering help, in one way or another, with housing, transportation, nutrition and access to healthy food, social contacts, and a whole range of other things that we know make a huge difference to our health.

Work with the disruptive new entrants. Bring your data analytics to the next level and use those analytics, your customer base, and your financial power to catalyze the disruption that is coming.

Are there ways that payers and providers can work together in coopetition? That’s the subject of Part 5.

from The Health Care Blog https://ift.tt/2nVjYBE

AI competitions don’t produce useful models

By LUKE OAKDEN-RAYNER

A huge new CT brain dataset was released the other day, with the goal of training models to detect intracranial haemorrhage. So far, it looks pretty good, although I haven’t dug into it in detail yet (and the devil is often in the detail).

The dataset has been released for a competition, which obviously led to the usual friendly rivalry on Twitter:

Of course, this led to cynicism from the usual suspects as well.

And the conversation continued from there, with thoughts ranging from “but since there is a hold out test set, how can you overfit?” to “the proposed solutions are never intended to be applied directly” (the latter from a previous competition winner).

As the discussion progressed, I realised that while we “all know” that competition results are more than a bit dubious in a clinical sense, I’ve never really seen a compelling explanation for why this is so.

Hopefully that is what this post is, an explanation for why competitions are not really about building useful AI systems.


DISCLAIMER: I originally wrote this post expecting it to be read by my usual readers, who know my general positions on a range of issues. Instead, it was spread widely on Twitter and HackerNews, and it is pretty clear that I didn’t provide enough context for a number of statements made. I am going to write a follow-up to clarify several things, but as a quick response to several common criticisms:
  • I don’t think AlexNet is a better model than ResNet. That position would be ridiculous, particularly given all of my published work uses resnets and densenets, not AlexNets.
  • I think this miscommunication came from me not defining my terms: a “useful” model would be one that works for the task it was trained on. It isn’t a model architecture. If architectures are developed in the course of competitions that are broadly useful, then that is a good architecture, but the particular implementation submitted to the competition is not necessarily a useful model.
  • The stats in this post are wrong, but they are meant to be wrong in the right direction. They are intended for illustration of the concept of crowd-based overfitting, not accuracy. Better approaches would almost all require information that isn’t available in public leaderboards. I may update the stats at some point to make them more accurate, but they will never be perfect.
  • I was trying something new with this post – it was a response to a Twitter conversation, so I wanted to see if I could write it in one day to keep it contemporaneous. Given my usual process is spending several weeks and many rewrites per post, this was a risk. I think the post still serves its purpose, but I don’t personally think the risk paid off. If I had taken even another day or two, I suspect I would have picked up most of these issues before publication. Mea culpa.

Let’s have a battle

Nothing wrong with a little competition.*

So what is a competition in medical AI? Here are a few options:

  • getting teams to try to solve a clinical problem
  • getting teams to explore how problems might be solved and to try novel solutions
  • getting teams to build a model that performs the best on the competition test set
  • a waste of time

Now, I’m not so jaded that I jump to the last option (what is valuable to spend time on is a matter of opinion, and clinical utility is only one consideration. More on this at the end of the article).

But what about the first three options? Do these models work for the clinical task, and do they lead to broadly applicable solutions and novelty, or are they only good in the competition and not in the real world?

(Spoiler: I’m going to argue the latter).


Good models and bad models

Should we expect this competition to produce good models? Let’s see what one of the organisers says.

[Screenshot: a tweet from one of the competition organisers]

Cool. Totally agree. The lack of large, well-labeled datasets is the biggest barrier to building useful clinical AI, so this dataset should help.

But saying that the dataset can be useful is not the same thing as saying the competition will produce good models.

So to define our terms, let’s say that a good model is a model that can detect brain haemorrhages on unseen data (cases that the model has no knowledge of).

So conversely, a bad model is one that doesn’t detect brain haemorrhages in unseen data.

These definitions will be non-controversial. Machine Learning 101. I’m sure the contest organisers agree with these definitions, and would prefer their participants to be producing good models rather than bad models. In fact, they have clearly set up the competition in a way designed to promote good models.

It just isn’t enough.


Epi vs ML, FIGHT!

If only academic arguments were this cute

ML101 (now personified) tells us that the way to control overfitting is to use a hold-out test set, which is data that has not been seen during model training. This simulates seeing new patients in a clinical setting.

ML101 also says that hold-out data is only good for one test. If you test multiple models, then even if you don’t cheat and leak test information into your development process, your best result is probably an outlier which was only better than your worst result by chance.

So competition organisers these days produce hold-out test sets, and only let each team run their model on the data once. Problem solved, says ML101. The winner only tested once, so there is no reason to think they are an outlier, they just have the best model.

Not so fast, buddy.

Let me introduce you to Epidemiology 101, who claims to have a magic coin. 


Epi101 tells you to flip the coin 10 times. If you get 8 or more heads, that confirms the coin is magic (while the assertion is clearly nonsense, you play along since you know that 8/10 heads equates to a p-value of <0.05 for a fair coin, so it must be legit).


Unbeknownst to you, Epi101 does the same thing with 99 other people, all of whom think they are the only one testing the coin. What do you expect to happen?

If the coin is totally normal and not magic, around 5 people will find that the coin is special. Seems obvious, but think about this in the context of the individuals. Those 5 people all only ran a single test. According to them, they have statistically significant evidence they are holding a “magic” coin.
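To make Epi101’s point concrete, here is a minimal sketch (my own illustration, not code from the post) of the coin demonstration: the chance that a fair coin produces 8 or more heads in 10 flips, and how many “magic coin” findings we should expect among 100 independent testers. It assumes SciPy is available, and it lands close to the roughly 5% and roughly 5 people quoted above.

```python
# A rough re-creation of Epi101's coin demonstration (my own sketch, not
# code from the post): the chance that a *fair* coin gives 8 or more heads
# in 10 flips, and how many "magic coin" findings we should expect among
# 100 independent testers.
from scipy.stats import binom

n_flips, p_fair = 10, 0.5

# P(X >= 8) for a single tester; binom.sf(7, ...) is P(X > 7)
p_single = binom.sf(7, n_flips, p_fair)
print(f"P(8+ heads in 10 fair flips) = {p_single:.4f}")        # about 0.055

n_testers = 100
print(f"Expected lucky 'discoveries' among {n_testers} testers = "
      f"{n_testers * p_single:.1f}")                            # about 5
```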

Now imagine you aren’t flipping coins. Imagine you are all running a model on a competition test set. Instead of wondering if your coin is magic, you instead are hoping that your model is the best one, about to earn you $25,000.

Of course, you can’t submit more than one model. That would be cheating. One of the models could perform well, the equivalent of getting 8 heads with a fair coin, just by chance.

Good thing there is a rule against submitting multiple models, or any one of the other 99 participants and their 99 models could win, just by being lucky…


Multiple hypothesis testing

The effect we saw with Epi101’s coin applies to our competition, of course. Due to random chance, some percentage of models will outperform other ones, even if they are all just as good as each other. Maths doesn’t care if it was one team that tested 100 models, or 100 teams.

Even if certain models are better than others in a meaningful sense^, unless you truly believe that the winner is a uniquely gifted ML wizard, you have to accept that at least some other participants would have achieved similar results, and thus the winner only won because they got lucky. The real “best performance” will be somewhere back in the pack, probably above average but below the winner^^.


Epi101 says this effect is called multiple hypothesis testing. In the case of a competition, you have a ton of hypotheses – that each participant was better than all others. For 100 participants, 100 hypotheses.

One of those hypotheses, taken in isolation, might show us there is a winner with statistical significance (p<0.05). But taken together, even if the winner has a calculated “winning” p-value of less than 0.05, that doesn’t mean we only have a 5% chance of making an unjustified decision. In fact, if this was coin flips (which is easier to calculate but not absurdly different), we would have a greater than 99% chance that one or more people would “win” and come up with 8 heads!
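The same arithmetic, scaled up to the whole field, recovers that greater-than-99% figure. A tiny sketch (again my own, reusing the ~0.055 single-test probability from the snippet above):

```python
# The family-wise view: with 100 independent participants each flipping a
# fair coin once (one submission each), the chance that at least one of
# them clears the "8 heads" bar purely by luck.
p_single = 0.0547          # P(8+ heads in 10 fair flips), from the sketch above
n_participants = 100

p_any_lucky_winner = 1 - (1 - p_single) ** n_participants
print(f"P(at least one lucky 'winner') = {p_any_lucky_winner:.4f}")   # > 0.99
```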

That is what an AI competition winner is: an individual who happens to get 8 heads while flipping fair coins.

Interestingly, while ML101 is very clear that running 100 models yourself and picking the best one will result in overfitting, they rarely discuss this “overfitting of the crowds”. Strange, when you consider that almost all ML research is done on heavily over-tested public datasets…

So how do we deal with multiple hypothesis testing? It all comes down to the cause of the problem, which is the data. Epi101 tells us that any test set is a biased version of the target population. In this case, the target population is “all patients with CT head imaging, with and without intracranial haemorrhage”. Let’s look at how this kind of bias might play out, with a toy example of a small hypothetical population:

[Figure: a toy population of five head CT cases]

In this population, we have a pretty reasonable “clinical” mix of cases. 3 intra-cerebral bleeds (likely related to high blood pressure or stroke), and two traumatic bleeds (a subdural on the right, and an extradural second from the left).

Now let’s sample this population to build our test set:

[Figure: a randomly sampled test set drawn from the toy population]

Randomly, we end up with mostly extra-axial (outside of the brain itself) bleeds. A model that performs well on this test will not necessarily work as well on real patients. In fact, you might expect a model that is really good at extra-axial bleeds at the expense of intra-cerebral bleeds to win.
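A quick simulation makes the sampling problem visible. The proportions below are made up for illustration (they are not from the competition data): draw a few small test sets from a population that is mostly intra-cerebral bleeds and watch the case mix wobble from draw to draw.

```python
# Toy illustration of test-set sampling bias. The proportions are made up:
# assume the "real" population is 60% intra-cerebral and 40% extra-axial
# bleeds, then draw several small 5-case test sets from it.
import random
from collections import Counter

random.seed(0)
population = ["intra-cerebral"] * 60 + ["extra-axial"] * 40

for draw in range(5):
    test_set = random.sample(population, 5)
    print(f"draw {draw + 1}: {dict(Counter(test_set))}")
# Some draws skew heavily toward one bleed type, so a model tuned to that
# type can top the leaderboard without being best on real patients.
```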

But Epi101 doesn’t only point out problems. Epi101 has a solution.


So powerful

There is only one way to have an unbiased test set – if it includes the entire population! Then whatever model does well in the test will also be the best in practice, because you tested it on all possible future patients (which seems difficult).

This leads to a very simple idea – your test results become more reliable as the test set gets larger. We can actually predict how reliable test sets are using power calculations.

[Figure: power curves]

These are power curves. If you have a rough idea of how much better your “winning” model will be than the next best model, you can estimate how many test cases you need to reliably show that it is better.

So to find out if your model is 10% better than a competitor, you would need about 300 test cases. You can also see how rapidly the number of cases needed grows as the difference between models gets narrower.
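For the curious, here is roughly the kind of calculation behind those curves: the textbook two-proportion normal approximation, with baseline accuracies I picked for illustration (0.80 vs 0.90, i.e. one model “10% better”). The post’s exact calculator and inputs aren’t given, so treat the output as an order-of-magnitude check rather than a reproduction of the 300-case figure.

```python
# A sketch of a two-proportion sample size calculation (textbook normal
# approximation). The baseline accuracies (0.80 vs 0.90) are illustrative
# assumptions, not the post's actual calculator inputs.
from math import sqrt, ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Cases per arm needed to detect p1 vs p2 with a two-sided z-test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# One model at 80% accuracy vs another at 90%: a couple of hundred cases
# per arm, the same order of magnitude as the post's ~300-case figure.
print(n_per_group(0.80, 0.90))
```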

Let’s put this into practice. If we look at another medical AI competition, the SIIM-ACR pneumothorax segmentation challenge, we see that the difference in Dice scores (ranging between 0 and 1) is negligible at the top of the leaderboard. Keep in mind that this competition had a dataset of 3200 cases (and that is being generous, they don’t all contribute to the Dice score equally).

[Screenshot: the top of the SIIM-ACR pneumothorax challenge leaderboard]

So the difference between the top two was 0.0014 … let’s chuck that into a sample size calculator.

[Screenshot: sample size calculator result]

Ok, so to show a significant difference between these two results, you would need 920,000 cases.

But why stop there? We haven’t even discussed multiple hypothesis testing yet. This absurd number of cases needed is simply if there was ever only one hypothesis, meaning only two participants.

If we look at the leaderboard, there were 351 teams who made submissions. The rules say they could submit two models, so we might as well assume there were at least 500 tests. This has to produce some outliers, just like 500 people flipping a fair coin.

Epi101 to the rescue. Multiple hypothesis testing is really common in medicine, particularly in “big data” fields like genomics. We have spent the last few decades learning how to deal with this. The simplest reliable way to manage this problem is called the Bonferroni correction^^.

The Bonferroni correction is super simple: you divide the p-value by the number of tests to find a “statistical significance threshold” that has been adjusted for all those extra coin flips. So in this case, we do 0.05/500. Our new p-value target is 0.0001, any result worse than this will be considered to support the null hypothesis (that the competitors performed equally well on the test set). So let’s plug that in our power calculator.

[Screenshot: sample size calculator result with the corrected p-value]

Cool! It only increased a bit… to 2.6 million cases needed for a valid result :p
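For readers who want to poke at numbers like these, here is a hedged sketch using statsmodels’ power tools. The 0.0014 gap comes from the leaderboard above, but treating the Dice score as a proportion at a baseline of about 0.86 is my own simplifying assumption, so expect the same ballpark as the 920,000 and 2.6 million figures rather than an exact match.

```python
# A sketch of the pneumothorax arithmetic with statsmodels' power tools.
# The 0.0014 gap is from the leaderboard; treating Dice as a proportion at
# a baseline of ~0.86 is my own simplifying assumption.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

h = proportion_effectsize(0.8614, 0.8600)   # Cohen's h for a 0.0014 gap
power_calc = NormalIndPower()

naive = power_calc.solve_power(effect_size=h, alpha=0.05, power=0.8,
                               alternative="two-sided")
bonferroni = power_calc.solve_power(effect_size=h, alpha=0.05 / 500, power=0.8,
                                    alternative="two-sided")
print(f"alpha = 0.05      -> ~{naive:,.0f} cases per arm")       # ~1 million
print(f"alpha = 0.05/500  -> ~{bonferroni:,.0f} cases per arm")  # several million
```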

Now, you might say I am being very unfair here, and that there must be some small group of good models at the top of the leaderboard that are not clearly different from each other^^^. Fine, let’s be generous. Surely no-one will complain if I compare the 1st place model to the 150th model?

[Screenshot: sample size calculator result for 1st place vs 150th place]

So that is still more data than we had. In fact, I have to go down to the 192nd-placed entry to find a result where the sample size was enough to produce a “statistically significant” difference.

But maybe this is specific to the pneumothorax challenge? What about other competitions?

In MURA, we have a test set of 207 x-rays, with 70 teams submitting “no more than two models per month”, so let’s be generous and say 100 models were submitted. Running the numbers, the “first place” model is only significant versus the 56th-placed entry and below.

In the RSNA Pneumonia Detection Challenge, there were 3000 test images with 350 teams submitting one model each. The first place was only significant compared to the 30th place and below.

And to really put the cat amongst the pigeons, what about outside of medicine?

[Figure: ImageNet leaderboard results over time]

As we go left to right in ImageNet results, the improvement year on year slows (the effect size decreases) and the number of people who have tested on the dataset increases. I can’t really estimate the numbers, but knowing what we know about multiple testing does anyone really believe the SOTA rush in the mid 2010s was anything but crowdsourced overfitting?


So what are competitions for?


They obviously aren’t to reliably find the best model. They don’t even really reveal useful techniques to build great models, because we don’t know which of the hundred plus models actually used a good, reliable method, and which method just happened to fit the under-powered test set.

You talk to competition organisers … and they mostly say that competitions are for publicity. And that is enough, I guess.

AI competitions are fun, community building, talent scouting, brand promoting, and attention grabbing.

But AI competitions are not to develop useful models.


* I have a young daughter, don’t judge me for my encyclopaedic knowledge of My Little Pony.**

** not that there is anything wrong with My Little Pony***. Friendship is magic. There is just an unsavoury internet element that matches my demographic who is really into the show. I’m no brony.

*** barring the near complete white-washing of a children’s show about multi-coloured horses.

^ we can actually understand model performance with our coin analogy. Improving the model would be equivalent to bending the coin. If you are good at coin bending, doing this will make it more likely to land on heads, but unless it is 100% likely you still have no guarantee to “win”. If you have a 60%-chance-of-heads coin, and everyone else has a 50% coin, you objectively have the best coin, but your chance of getting 8 heads out of 10 flips is still only 17%. Better than the 5% the rest of the field have, but remember that there are 99 of them. They have a cumulative chance of over 99% that one of them will get 8 or more heads.

^^ people often say the Bonferroni correction is a bit conservative, but remember, we are coming in skeptical that these models are actually different from each other. We should be conservative.

^^^ do please note, the top model here got $30,000 and the second model got nothing. The competition organisers felt that the distinction was reasonable.

Luke Oakden-Rayner is a radiologist (medical specialist) in South Australia, undertaking a Ph.D in Medicine with the School of Public Health at the University of Adelaide. This post originally appeared on his blog here.

from The Health Care Blog https://ift.tt/2mbXpbC

The Most Expensive Data in the US & Why we’re NOT Using It | Atul Butte, UC Health

By JESSICA DAMASSA, WTF HEALTH

When you ask the ‘big data guy’ at a massive health system what’s wrong with EMRs, it’s surprising to hear that his problem is NOT with the EMRs themselves but with the fact that health systems are just not using the data they’re collecting in any meaningful way. Atul Butte, Chief Data Scientist for the University of California Health System, says interoperability is not the big issue! Instead, he says it’s the fact that health systems are not using some of the most expensive data in the country (we are using doctors to do the data entry…) to draw big, game-changing conclusions about the way we practice medicine and deliver care. Listen in to find out why Atul thinks the business incentives are misaligned for a data revolution and what we need to do to help.

Filmed at Health Datapalooza in Washington DC, March 2019.

Jessica DaMassa is the host of the WTF Health show & stars in Health in 2 Point 00 with Matthew Holt.

Get a glimpse of the future of healthcare by meeting the people who are going to change it. Find more WTF Health interviews here or check out www.wtf.health

from The Health Care Blog https://ift.tt/2nyenBb

Ordering Tests Without Using Words: Are ICD-10 and CPT Codes Bringing Precision or Dumbing Us Down?

By HANS DUVEFELT, MD

The chest CT report was a bit worrisome. Henry had “pleural based masses” that had grown since his previous scan, which had been ordered by another doctor for unrelated reasons. But as Henry’s PCP, it had become my job to follow up on an emergency room doctor’s incidental finding. The radiologist recommended a PET scan to see if there was increased metabolic activity, which would mean the spots were likely cancerous.

So the head of radiology says this is needed. But I am the treating physician, so I have to put the order in. In my clunky EMR I search for an appropriate diagnostic code in situations like this. This software (Greenway) is not like Google; if you don’t search for exactly what the bureaucratic term is, but use clinical terms instead, it doesn’t suggest alternatives (unrelated everyday example – what a doctor calls a laceration is “open wound” in insurance speak but the computer doesn’t know they’re the same thing).

So here I am, trying to find the appropriate ICD-10 code to buy Henry a PET scan. Why can’t I find the diagnosis code I used to get the recent CT order through when I placed it, months ago? I cruise down the list of diagnoses in his EMR “chart”. There, I find every diagnosis that was ever entered. They are not listed alphabetically or chronologically. The list appears totally random; perhaps it is organized alphanumerically by ICD-10 code, but the codes are not displayed in my search box, and that wouldn’t do me any good anyway since I don’t have more than five ICD-10 codes memorized.

Patients are waiting, I’m behind, the usual time pressure in healthcare.

Can’t find a previously used diagnosis. Search for “nonspecific finding on chest X-ray” and multiple variations thereof.

I see R93.89 – “abnormal finding on diagnostic imaging of other body structures”. Close enough, use it, type in exactly what the chief of radiology had said in his report. Move on. Next patient.

Several days later I get a printout of that order in my inbox with a memo that the diagnosis doesn’t justify payment for a PET scan. Attached to that is a multi-page list of diagnoses that would work.

Frustrated, I go through the list. It’s another day, other patients are waiting. Eventually I come across R91.8 “other nonspecific finding of lung field” – not exactly pleura, but what the heck, close enough, let’s use that one.

Why is this – me hurriedly choosing the next best thing on a multipage printout, while my other patients are waiting – any more practical, accurate or fraud proof than having me describe in appropriate CLINICAL language what the patient needs and letting SOMEONE ELSE look for the darn code?

Here I am, trying to order what a radiologist told me to order, without having the tools to do it.

Next thing you know, Henry’s insurance will probably have some third party radiologist deny coverage because he disagrees with my radiologist, and I’ll be stuck in the middle…

Not quite what I thought I’d be doing. Who works for whom in healthcare?

Hans Duvefelt is a Swedish-born rural Family Physician in Maine. This post originally appeared on his blog, A Country Doctor Writes, here.

from The Health Care Blog https://ift.tt/2ny7qjz

Indigenous Medicine– From Illegal to Integral

Brooke Warren
Phuoc Le

By PHUOC LE, MD and BROOKE WARREN

In the 2020 Summer Olympics, we will undoubtedly see large, red circles down the arms and backs of many Olympians. These spots are a side-effect of cupping, a treatment originating from traditional Chinese medicine (TCM) to reduce pain. TCM is a globally used Complementary and Alternative Medicine (CAM), but it still battles critics who think it is only a belief system rather than a legitimate medical practice. Even so, the usage of TCM continues to grow. This led the National Institutes of Health (NIH) to sponsor a meeting in 1997 to determine the efficacy of acupuncture, paving the way for CAM research. Today, there are over 50 schools dedicated to teaching Chinese acupuncture in the US under the Accreditation Commission for Acupuncture & Oriental Medicine.

Image of Michael Phelps swimming in 2016 Rio Olympics after using TCM cupping. (Al Bello/Getty Images)

While TCM has seen immense growth and integration around the globe throughout the last twenty years, other forms of CAM continue to struggle for acceptance in the U.S. In this article we will focus on Native American/Indigenous traditional medical practices. Indigenous and non-Indigenous patients should not have to choose between traditional and allopathic medicine, but rather have them working harmoniously from prevention to diagnosis to treatment plan.

It was not until August of 1978, when the American Indian Religious Freedom Act (AIRFA) was passed, that federally recognized tribal members were officially able to openly practice their Indigenous traditional medicine (the knowledge and practices of Indigenous people that prevent or eliminate physical, mental and social diseases). Prior to 1978, the federal government’s Department of the Interior could sentence a medicine man to a minimum of 10 days in prison if he encouraged others to follow traditional practices.

It is difficult to comprehend that tribes throughout the U.S. were only given the ability to openly exercise their medicinal practices 41 years ago, when the “healing traditions of indigenous Native Americans have been practiced on this continent for 12,000 years and possibly for more than 40,000 years.”[1]

Since the passage of AIRFA, many tribally run clinics and hospitals are finding ways to incorporate Indigenous traditional healing into their treatment plans, when requested by patients.

Examples include the Puyallup Tribe of Indians’ Salish Cancer Center in Washington and the Southcentral Foundation, a nonprofit Native-owned healthcare organization in Alaska. At the Salish Cancer Center, Native healers are brought in from various tribes around the US twice a month, allowing patients to adhere to their own tribal protocol and healers to have direct communication with oncologists at the center.

In 2011, the Southcentral Foundation Traditional Healing Clinic received the Indian Health Service Director’s Special Recognition Award for “demonstrating how tribal doctors, elders, and traditional healing practices can work side-by-side with Western medicine… for the ultimate benefit of Native patients, families, and communities.”

While we may wonder how effective bringing in Native healers is for patient outcomes, these data are seldom shared. Traditional healing is a very sacred practice for both the patient and the healer, and many tribal members choose not to share information about it publicly.

So rather than attempt to figure out the efficacy of integrating Indigenous traditional healing and allopathic medicine, we share a story about when they worked together synergistically:

On the Navajo Reservation in 1952, there was a tuberculosis (TB) epidemic. The TB rate for the Navajo population was ten times the rate in the Caucasian population of the same geographic region: 302 per 100,000 Navajos had TB compared to 30 per 100,000 Caucasians. Dr. Walsh McDermott, from Cornell Medical College, heard about this emergency of untreated TB and organized a group of doctors and researchers to come to the reservation. This came to be known as the Many Farms Demonstration Project.

An image taken while Dr. McDermott was on the Navajo Nation Reservation during the Many Farms Demonstration Project.

Unlike the federal government, Dr. McDermott welcomed traditional healers and community leaders into his work and acquainted them with the procedures being used.

This relationship proved beneficial to ending the epidemic because the traditional healer was able to solve a problem that the Western-trained doctors could not. When lightning struck near the hospital, Dr. McDermott’s patients began to flee because of their traditional belief that lightning brings misfortune. A Navajo medicine man was brought in to bless both the hospital and patients, and to appease the spirits. The patients, trusting their medicine man’s healing of the land and space, subsequently came back because they were confident that there was no more danger, and their TB treatment could resume.

From a Western science perspective, there may not be many randomized controlled trials to prove the efficacy of Indigenous traditional medicine. The example above, though, clearly shows that the use of Western science in isolation would not have produced the cures it had sought. For allopathic medicine to have its intended effects in improving health outcomes of populations who use CAM, it must work harmoniously with CAM and respect it. As the Indian Health Service lays out in its policy, the “respect and acceptance [of] an individual’s private traditional beliefs become a part of the healing and harmonizing force within his/her life.”

[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2913884/

Internist, Pediatrician, and Associate Professor at UCSF, Dr. Le is also the co-founder of two health equity organizations, the HEAL Initiative and Arc Health. 

Brooke Warren is a Native American Studies major and recent graduate of UC Davis. She is currently an intern at Arc Health.

This post originally appeared on Arc Health here.

from The Health Care Blog https://ift.tt/2mJoRgo

Announcing Winners for the RWJF Innovation Challenges

SPONSORED POST

By CATALYST @ HEALTH 2.0

Three finalists for the Robert Wood Johnson Foundation Home and Community Based Care and Social Determinants of Health Innovation Challenges competed live at the Health 2.0 Conference on Monday, September 16th! They demoed their technology in front of a captivated audience of health care professionals, investors, provider organizations, and members of the media. Catalyst is proud to announce the first, second and third place winners.

Home and Community Based Care Innovation Challenge Winners

First Place: Ooney 

Second Place: Wizeview

Third Place: Heal 

Social Determinants of Health Innovation Challenge Winners

First Place: Social Impact AI Lab New York

Second Place: Community Resource Network 

Third Place: Open City Labs

For the SDoH Challenge, innovators were asked to develop novel digital solutions that can help providers and/or patients connect to health services related to SDoH. Over 110 applications were submitted. For the Home and Community Based Care Challenge, applicants were asked to create technologies that support the advancement of at-home or community-based health care. Nearly 100 applications for the Home and Community Based Care Challenge were received. After the submission period ended, an expert panel of over 50 judges evaluated the entries. Five semi-finalists from each challenge were selected to advance to the next round and further develop their solutions. The semi-finalists were evaluated again and the three finalists were chosen.

To learn more about the Home and Community Based Care Innovation Challenge, click here.

To learn more about the Social Determinants of Health Innovation Challenge, click here.

For further updates on the finalists of the RWJF SDoH and Home and Community Based Care Innovation Challenge and other programs, please subscribe to the Catalyst @ Health 2.0 Newsletter, and follow us on Twitter @catalyst_h20.

Catalyst @ Health 2.0 (“Catalyst”) is the industry leader in digital health strategic partnering, hosting competitive innovation “challenge” events and developing programs for piloting and commercializing novel healthcare technologies.

from The Health Care Blog https://ift.tt/2lFMbM3