Feb. 27, 2024 – When you message your health care provider about an appointment, a prescription refill, or a medical question, is artificial intelligence or a person actually answering? In some cases, it’s hard to tell.
AI may be involved in your health care now without you realizing it. For example, many patients message their doctors about their medical chart through an online portal.
“And there are some hospital systems that are experimenting with having AI do the first draft of the response,” I. Glenn Cohen said during a webinar hosted by the National Institute for Health Care Management Foundation.
Assigning administrative tasks is a relatively low-risk way to introduce artificial intelligence into health care, said Cohen, an attorney and director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School in Boston. The technology can free up staff time now devoted to answering calls or messages about routine tasks.
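To make that workflow concrete, here is a minimal sketch of the draft-then-review pattern such systems could follow, written against the OpenAI Python client. The model name, the system prompt, and the review-queue function are illustrative assumptions, not any hospital’s actual implementation.

```python
# Minimal sketch of the "AI drafts, clinician approves" portal workflow.
# Assumptions: the OpenAI Python client, an assumed model name, and a
# hypothetical review queue standing in for a real portal integration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_portal_reply(patient_message: str) -> str:
    """Generate a first-draft reply for a clinician to review, edit, or discard."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; a hospital would use a vetted deployment
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a brief, courteous reply to a patient portal message. "
                    "Do not give medical advice; flag clinical questions for the care team."
                ),
            },
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content


def send_to_review_queue(draft: str) -> None:
    """Hypothetical hand-off: a human approves or edits the draft before the patient sees it."""
    print("DRAFT FOR CLINICIAN REVIEW:\n", draft)


send_to_review_queue(draft_portal_reply("Can I refill my lisinopril prescription early?"))
```

The key design choice is that the model’s output never reaches the patient directly; it lands in a queue where a human remains the sender of record.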
But when the technology handles clinical questions, should patients be aware AI is generating the initial answer? Do patients need to fill out a separate consent form, or is that going too far?
What about when a doctor makes a recommendation based in part on AI?
Cohen shared an example. A patient and doctor are deciding which embryos from in vitro fertilization (IVF) to implant. The doctor makes recommendations based in part on molecular imagery and other factors revealed through AI or a machine learning system but doesn’t disclose the AI’s role. “Is it a problem that your physician hasn’t told you?”
Where Are We on Liability?
Lawsuits can be a good way to measure how acceptable new technology is. “There have been shockingly few cases about liability for medical AI,” Cohen said. “Most of the ones we’ve actually seen have been about surgical robots where, arguably, it’s not really the AI that’s causing the issues.”
It is possible that cases are settled out of court, Cohen said. “But in general, my own takeaway is that people probably overestimate the importance of liability issues in this space, given the data. But we should still try to understand it.”
Cohen and colleagues analyzed the legal issues around medical AI in a 2019 viewpoint in the Journal of the American Medical Association. The bottom line for doctors: As long as they follow the standard of care, they’re probably safe, Cohen said. When it comes to liability, the safest way to use medical AI is to confirm decisions, rather than to try to use it to improve care.
Cohen cautioned that at some point, using AI may itself become the standard of care. If and when that happens, the liability risk could come from not using AI.
Insurers Adopting AI
Insurance company GuideWell/Florida Blue is already introducing AI and machine learning models into its interactions with members, said Svetlana Bender, PhD, the company’s vice president of AI and behavioral science. Models already identify plan members who could benefit from more tailored education and steer patients to health care settings other than emergency rooms when appropriate. AI can also speed up prior authorization.
“We’ve been able to streamline the reviews of 75% of prior authorization requests with AI,” Bender said.
The greater efficiency from AI could also translate to cost savings for the health care system overall, she said. “It’s estimated that we could see anywhere between $200 [billion] to $360 billion in savings annually.”
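For illustration, the pattern behind that kind of streamlining might look like the sketch below: a model scores each request, high-confidence requests are cleared automatically, and everything else goes to a human reviewer. The request fields, the scoring heuristic, and the threshold are all hypothetical stand-ins, not Florida Blue’s system.

```python
# Illustrative only: triage prior authorization requests so routine ones are
# streamlined and the rest get human review. The score would come from a
# trained model in practice; here a simple heuristic stands in for it.
from dataclasses import dataclass


@dataclass
class PriorAuthRequest:
    procedure_code: str
    matches_coverage_policy: bool
    documentation_complete: bool


def model_score(req: PriorAuthRequest) -> float:
    """Stand-in for an ML model's approval-confidence score (0 to 1)."""
    score = 0.5
    if req.matches_coverage_policy:
        score += 0.3
    if req.documentation_complete:
        score += 0.2
    return score


def triage(req: PriorAuthRequest, threshold: float = 0.9) -> str:
    # Only high-confidence requests are cleared automatically; everything
    # else, including any candidate denial, is routed to a person.
    return "auto-approve" if model_score(req) >= threshold else "human review"


print(triage(PriorAuthRequest("97110", True, True)))   # auto-approve
print(triage(PriorAuthRequest("97110", True, False)))  # human review
```

Note that the sketch never auto-denies: requests the model cannot confidently clear still get a human look.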
Handling the Complexity
Beyond managing administrative tasks and recommending more personalized interventions, AI could help providers, patients, and payers cope with a fire hose of health care data.
“There’s been just an unprecedented and tremendous growth in the volume and complexity of medical and scientific data, and in the volume and complexity of patient data itself,” said Michael E. Matheny, MD, director of the Center for Improving the Public’s Health through Informatics at Vanderbilt University Medical Center in Nashville.
“Really, we need help in managing all of this information,” said Matheny, who is also a professor of biomedical informatics, medicine, and biostatistics at Vanderbilt.
In most current applications, humans check AI output, whether it’s help with drug discovery, image processing, or clinical decision support. But in some cases, the FDA has approved AI applications that operate without a doctor’s interpretation, Matheny said, such as software that autonomously screens eye images for diabetic retinopathy.
Integrating Health Equity
Some experts are pinning their hopes on AI to speed up efforts to build a more equitable health care system. For example, as algorithms are developed, the training data fed into AI and machine learning systems needs to better represent the U.S. population.
And then there is the drive toward more equitable access, too. “Do all patients who contribute data to building the model get its benefits?” Cohen asked.
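On the training-data point, one way a developer might audit representativeness is to compare a dataset’s demographic mix against population benchmarks. The sketch below uses pandas; the group labels, benchmark shares, and column name are made-up placeholders, and a real audit would use published census figures.

```python
import pandas as pd

# Hypothetical population benchmarks (shares summing to 1.0); a real audit
# would pull published census figures rather than hard-coding values.
BENCHMARK = pd.Series({"Group A": 0.58, "Group B": 0.19, "Group C": 0.12, "Group D": 0.11})


def representation_gaps(training_data: pd.DataFrame, column: str) -> pd.Series:
    """Return observed-minus-expected share for each demographic group.

    Negative values flag groups underrepresented in the training set.
    """
    observed = training_data[column].value_counts(normalize=True)
    return observed.reindex(BENCHMARK.index, fill_value=0.0) - BENCHMARK


# Toy example: a tiny "training set" heavily skewed toward one group.
df = pd.DataFrame({"group": ["Group A", "Group A", "Group A", "Group B"]})
print(representation_gaps(df, "group"))
```

A check like this could gate model training or trigger extra data collection for underrepresented groups before an algorithm reaches patients.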