End-of-life decisions are difficult and distressing. Could AI help?

A few months ago, a woman in her mid-50s—let’s call her Sophie—experienced a hemorrhagic stroke. Her brain started to bleed. She underwent brain surgery, but her heart stopped beating. Sophie’s ordeal left her with significant brain damage. She was unresponsive; she couldn’t squeeze her fingers or open her eyes when asked, and she didn’t flinch…

Wendler has been working on ways to help surrogates make these kinds of decisions. Over 10 years ago, he developed the idea for a tool that would predict a patient’s preferences on the basis of characteristics such as age, gender, and insurance status. That tool would have been based on a computer algorithm trained on survey results from the general population. It may seem crude, but these characteristics do seem to influence how people feel about medical care. A teenager is more likely to opt for aggressive treatment than a 90-year-old, for example. And research suggests that predictions based on averages can be more accurate than the guesses made by family members.

In 2007, Wendler and his colleagues built a “very basic” preliminary version of this tool based on a small amount of data. That simplistic tool did “at least as well as next-of-kin surrogates” in predicting what kind of care people would want, says Wendler.

Now Wendler, Earp, and their colleagues are working on a new idea. Instead of relying on crude demographic characteristics, the tool the researchers plan to build will be personalized. The team proposes using AI and machine learning to predict a patient’s treatment preferences on the basis of personal data such as medical history, along with emails, personal messages, web browsing history, social media posts, or even Facebook likes. The result would be a “digital psychological twin” of a person: a tool that doctors and family members could consult to guide a person’s medical care. It’s not yet clear what this would look like in practice, but the team hopes to build and test the tool before refining it.
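The researchers haven’t described the model itself, but the core idea, learning a mapping from a person’s own writing to a probable treatment preference, can be sketched in a few lines. Everything below (the toy data, the labels, the choice of a simple text classifier) is a hypothetical illustration, not the team’s actual method:

```python
# Hypothetical sketch: estimating a yes/no treatment preference from a
# person's own writing. The data, labels, and model choice here are
# illustrative only, not the researchers' actual approach.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: snippets of personal text paired with elicited
# preferences (1 = would want aggressive treatment, 0 = would not).
texts = [
    "I want every chance to fight this, whatever it takes",
    "Quality of life matters more to me than living longer",
    "Please try everything the doctors can offer",
    "I would hate to be kept alive on machines",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# For a new patient document, the output is a probability rather than
# a black-and-white answer.
new_text = ["I've always said I never want to be a burden"]
prob = model.predict_proba(new_text)[0][1]
print(f"Estimated chance of preferring aggressive treatment: {prob:.0%}")
```

A real system would need far more data and careful validation; the point of the sketch is only that the output is a probability, not a verdict.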

The researchers call their tool a personalized patient preference predictor, or P4 for short. In theory, if it works as they hope, it could be more accurate than the previous version of the tool—and more accurate than human surrogates, says Wendler. It could be more reflective of a patient’s current thinking than an advance directive, which might have been signed a decade beforehand, says Earp.

A better bet?

A tool like the P4 could also help relieve the emotional burden surrogates feel in making such significant life-or-death decisions about their family members, which can sometimes leave people with symptoms of post-traumatic stress disorder, says Jennifer Blumenthal-Barby, a medical ethicist at Baylor College of Medicine in Texas.

Some surrogates experience “decisional paralysis” and might opt to use the tool to help steer them through a decision-making process, says Kaplan. In cases like these, the P4 could ease some of the burden on surrogates without necessarily giving them a black-and-white answer. It might, for example, suggest that a person was “likely” or “unlikely” to feel a certain way about a treatment, or attach a percentage score indicating how confident the prediction is.
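As a purely hypothetical illustration of that kind of hedged output (the article doesn’t describe the actual interface), a model’s raw probability could be translated into the verbal labels and percentage scores mentioned above:

```python
# Hypothetical sketch of hedged output: translate a model probability
# into the kind of "likely"/"unlikely" phrasing described above.
# The thresholds are arbitrary, chosen only for illustration.
def hedged_label(prob: float) -> str:
    if prob >= 0.75:
        return f"likely to want this treatment ({prob:.0%})"
    if prob <= 0.25:
        return f"unlikely to want this treatment ({1 - prob:.0%} against)"
    return f"uncertain ({prob:.0%} for, {1 - prob:.0%} against)"

print(hedged_label(0.82))  # likely to want this treatment (82%)
print(hedged_label(0.40))  # uncertain (40% for, 60% against)
```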