How AI Could Revolutionize Healthcare – and Risks to Consider

May 8, 2023

Not a day goes by without a new way to use artificial intelligence (AI) making the headlines. AI-powered tools seem to be making their mark on every business sector, including the healthcare industry.

AI opportunities in healthcare — such as helping to identify patient needs, documenting patient notes, and even diagnosing certain diseases — seem endless. While AI brings exciting opportunities, it also has risks.

The use of digital tools in healthcare has skyrocketed since the start of the pandemic.

The number of telehealth services delivered increased dramatically, rising to roughly 15 times the pre-pandemic level, from 2.1 million in the year prior to 32.5 million in the 12 months from March 2020 to February 2021, according to data from the Government Accountability Office (GAO).

Healthcare executives are increasingly prioritizing digital automation technologies, according to a recent report from Sage Growth Partners, which found that 90% of healthcare executives have an AI or automation strategy in place today, up from 53% in 2019. The same study showed that 76% of respondents said automation has become more important because it can help patients recover faster by cutting wasteful spending and improving efficiency.

“Everyone is talking about AI, but in healthcare specifically we’re seeing these tools being integrated in a variety of different sectors,” Ellie Saunders, healthcare team leader, U.S. and Canada at CFC, told Insurance Journal.

The biggest sector of CFC’s digital healthcare portfolio is telemedicine, largely due to its reliance on well-established and widely accessible technology. The AI segment, however, grew 32% in 2021, Saunders said.

Artificial intelligence is now being used to help triage some patient conditions, most commonly by diagnosing basic illnesses via a chatbot function. AI is also helping doctors with medical scribing, in which an AI-powered tool automatically transcribes a patient’s comments while the doctor is consulting with that patient. This digital tool “essentially helps to streamline their workflows and efficiency, enabling them to focus on the actual practice of medicine rather than spending most of the time documenting and writing up notes,” she said.

One concern for underwriters is that the medical scribe gets things wrong, Saunders said. Some research shows that AI scribing doesn’t always correctly distinguish noncommittal “uh-mmm” sounds from clear “yes” responses.

“Obviously that can mean a lot in a conversation, but if the scribe doesn’t pick that up, it can completely alter their medical record,” she added. “So, doctors are still having to read through the scribe notes just to ensure it is in line with what they’ve actually encountered.”

But medical records are a logical place to begin because clinicians can quickly identify where AI-produced results came from, according to Dr. Greg Ator, chief medical informatics officer at the University of Kansas Health System, who is part of the team implementing generative AI technology at the academic health system to aid clinician note-taking. Doctors can listen to a recording of the visit again if the AI misses valuable information.

Right now, it’s not the most efficient way to take medical notes, but Saunders says the hope is to make these AI tools better in the future.

But there are barriers to improving some AI-powered technologies going forward.

Pete Reilly, practice leader and chief sales officer of global insurance brokerage Hub International’s North American healthcare practice, says the healthcare industry will become more dependent on AI in the future simply because the technology can “handle all those data points so much more quickly and efficiently than the human brain.” The challenge, he says, will be with the data itself. It’s the old analogy of “garbage in, garbage out,” he said. If there’s any malfunction along the line, such as faulty code or inaccurate data, then everything downstream goes wrong with it, he said.

Even so, Reilly sees great opportunity for the sector’s use of AI-powered tools in the future. “Healthcare is an inefficient system by design. The human body is inefficient, and in many respects, they’re not the same,” he said. “But having said all that, I do see a continued use and need of use in healthcare,” he added. “AI will let us capture and begin to understand massive quantities of data more quickly, and that hopefully leads to long term better medicine.”

One hurdle to expanding AI use is patient acceptance. A Pew Research Center survey conducted in December 2022 found that 60% of U.S. adults would feel uncomfortable if their healthcare provider relied on AI for their medical care. Fewer than a third felt the quality of their care would improve as AI was implemented.

Another hurdle lies in capturing accurate data from all patients. Some research shows that current medical data may lack diversity across ethnic groups.

Doctors have used several different algorithms for years to try to capture the true risk of stroke, including newer models that use machine learning. But a new analysis led by researchers at Duke University School of Medicine found that for all models studied, from simpler algorithms based on self-reported risk factors to novel machine learning models, the accuracy of stroke prediction was worse for Black men and women than for their white peers. The Duke researchers concluded that Black Americans, who have a much higher probability of suffering a stroke, are also less likely to get an accurate prediction of their stroke risk.

“The data that you get out of something is really only as good as the data you put in and that’s probably one of our biggest concerns as insurers in the space,” Saunders said. “We as underwriters want to make sure that this data set that is providing diagnostics is based off real data and that they’ve got enough of good data, for it to not be biased data,” she said.

“Certainly, hospitals think that AI can be helpful in delivery of care,” said J. Kevin Carnell, executive chairman of CAC Specialty’s newly launched healthcare division. “It’s obviously a very powerful tool that can run all sorts of scenario projections to help you diagnose a patient or uncover a potential issue on a patient, so that’s great. But as we’ve seen, AI can be wrong,” he said. “It’s ultimately the care provider who has the liability risk, so I don’t think fundamentally that changes. I really don’t know how it’s going to be used or how risky it’s going to be, but people are starting to talk about it.”

Overall, CFC’s Saunders believes the healthcare industry is moving in the right direction to address concerns over AI data and its use. “There are controls that are being implemented and it’s definitely an area which is going to revolutionize the healthcare industry,” she said.