Many industry observers say artificial intelligence has the potential to dramatically change healthcare, but some analysts and leaders have expressed the need for stronger guardrails around AI.
ECRI, an organization focused on patient safety, has placed AI among the top 10 medical technology hazards to watch in 2024, ranking it fifth on the list of problem areas.
When asked by Chief Healthcare Executive® about his concerns regarding AI in the medical field, Marcus Schabacker, MD, president and CEO of ECRI, said he could go on for days.
“We believe there is great potential with AI to benefit healthcare, to make it more reliable and effective, but there are currently insufficient mechanisms in place to ensure safety. We haven’t done that,” Schabacker said.
AI remains a hot topic at medical conventions, and medical leaders see great potential to improve patient diagnosis. But critics point out that AI is not infallible and that AI-powered solutions can reflect racial bias.
Schabacker outlines a number of concerns about AI and its use, including whether algorithms were tested on diverse populations or focused primarily on white men. Because AI models reflect the quality of the data they are trained on, they can be biased toward certain population groups, he says.
“If you have people who don’t fit into that subset, you’re going to get very wrong results,” he says.
Schabacker expressed concern about the lack of Food and Drug Administration regulation of AI tools. He said developers typically describe AI-powered solutions as “decision support” tools, which reduces FDA scrutiny.
Schabacker says that’s concerning because more doctors, especially overworked ones, will be using AI tools to aid in diagnosis.
“Is it really just a decision-making aid?” he asks. “Will the final decision be made by the doctor?”
“We’re very concerned about these decision-support tools becoming actual decision-making tools,” Schabacker said. “And they certainly aren’t designed or regulated for that.”
Schabacker points out that another major healthcare innovation from 15 years ago, electronic medical records, “didn’t really work” as intended. Originally designed as a billing solution, electronic health records have since become a ubiquitous workflow tool in healthcare settings.
“Let’s not make the same mistake we made with EMRs and apply it across the board,” Schabacker says.
His message to policymakers: “Don’t fall any further behind.”
“Get the right people together and think about what we need to do to regulate this,” he says. “I’m not saying AI is bad. I think it can help a lot. But it has to be done correctly. Understand what’s going into specific guidelines, design principles, and algorithms. How do we test it? What populations and biases might it involve? And how do we address that? How do we continually test it? Do we need some kind of assurance, a quality assurance?”
“The more safety features and principles you can design, the less you have to modify and test later,” he says. “So we’re calling on regulators to get a lot more involved here.”
Schabacker also has some words of warning for the medical industry.
“Don’t let guys in garages develop things like that,” he says. “Make sure you have the right processes in place, ensure you have the relevant medical expertise as an input, and make sure it’s not just one or two medical advisors. So there’s a lot of work to be done here. But, unfortunately… we’re already behind the eight ball.”