Despite numerous studies demonstrating the impressive capabilities of generative AI, its adoption in healthcare faces skepticism and regulatory hurdles. During DayOne’s OPEN MIC Next in Health Series, three experts explored the trade-offs between the technology’s ability to assist physicians and the risks associated with hallucinations and bias.

In healthcare, generative AI (gen AI) promises to democratize access to medical services and information, make data from clinical studies and research papers easier to use, take over mundane administrative tasks, and help patients navigate complex health- and prevention-related knowledge.

Most humans in the world lack access to high-quality healthcare, often struggling to find healthcare facilities within a reasonable distance. One of the most profound AI-driven transformations will be the democratization of medicine-specific knowledge to billions of people globally.

Jaspal Dhalliwal, Global Principal Cloud Architect, Google

With AI tools available straight from smartphones, individuals will reach a high level of healthcare proficiency, fundamentally changing our approach to healthcare. However, it is physicians who stand to gain the most from AI advancements.

Will AI save the reputation of EHRs?

One of the most anticipated applications of AI involves electronic health records. “The current EHR systems require a lot of typing, clicking, and scrolling, which consumes almost half of physicians’ time. Imagine this manual process disappears – this could be a game-changer in the history of healthcare and digitalization,” says Blaise Jacholkowski, Senior Business Solution Manager of Digital Health at Zühlke, who moderated the discussion.

“There’s a wealth of knowledge in there that could be extracted for the benefit of patients, allowing predictions about their long-term health,” highlights Lisa Falco, Lead Data & AI Consultant and Femtech Expert at Zühlke. Because AI can work with unstructured data, it opens up new possibilities for generating even richer health insights.

Gen AI has already made its way into healthcare. In radiology, AI helps radiologists analyze thousands of images, while generative AI surfaces assistive information that reduces fatigue-related human error. Companies like Bayer, Siemens, and GE are actively incorporating these technologies.

“The assistive AI doesn’t deliver love poems but offers relevant insights, creating a more collaborative environment between AI and healthcare professionals,” says Dhalliwal, while reminding us that generative AI still comes with many unsolved challenges. New AI models must be validated to ensure their responses are not only reliable but also ethical, and the logic of decision-making needs to be mimicked within the generative AI ecosystem in a trusted way. “It only takes one false answer or hallucination, and the doctor may lose trust in the system.”

A solution could be task-specific large language models. According to Dhalliwal, within six months a new large language model or generative AI “will be released literally every day.” In addition, gen AI can understand and clean up the data stored in EHRs and even transform unstructured data into common standards like FHIR. This is another practical application of AI: making messy data, collected for decades in different formats, ‘understandable’ for Big Data analysis.
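To make the FHIR point more concrete, here is a minimal, hypothetical Python sketch. It assumes an upstream gen AI step has already extracted a heart-rate reading from a free-text clinical note; the patient ID and values are illustrative only, and a real pipeline would add validation, terminology checks, and provenance.

```python
import json

# Hypothetical output of an upstream gen AI extraction step: a vital sign
# pulled from a free-text clinical note (all values are illustrative).
extracted = {
    "patient_id": "example-123",
    "heart_rate_bpm": 72,
    "measured_at": "2024-03-01T09:30:00Z",
}

# Map the extracted value onto a FHIR R4 Observation resource so that
# downstream analytics can consume it in a standard format.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [
            {"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"}
        ]
    },
    "subject": {"reference": f"Patient/{extracted['patient_id']}"},
    "effectiveDateTime": extracted["measured_at"],
    "valueQuantity": {
        "value": extracted["heart_rate_bpm"],
        "unit": "beats/minute",
        "system": "http://unitsofmeasure.org",
        "code": "/min",
    },
}

print(json.dumps(observation, indent=2))
```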

Does AI present the world in a distorting mirror?

As a Femtech Expert, Lisa Falco knows the danger in gen AI: bias in, bias out. AI is known to hallucinate and sometimes provide incorrect answers. The issue is compounded by the data gap: clinical studies have primarily focused on men, and this bias is inherent, reflecting societal biases in data since the 1950s. “Thus, AI should come with a warning about potential limitations, educating users on how to approach output information critically,” suggests Falco. Decision-making in healthcare must remain uniquely human, requiring human judgment and experience.

“The difference between a perception-based model and one anchored in medical facts is crucial, but human interpretation remains essential. Despite advancements in AI, human reasoning and the ability to analyze complex medical scenarios are areas that still require refinement in future models,” according to Dhalliwal. We should perceive AI as a tool, not as a substitute for physicians. Skepticism is understandable: this is a very new technology that needs time to build trust among users. “Even pilots panic using autopilot, but they’ve learned to adapt.”

We won’t see medical devices with “gen AI inside”

Will we ever build medical-grade gen AI systems adhering to rigorous criteria for medical devices?

“It’s possible if we either make AI models explainable and transparent or train them on good-quality data to perform a specific task, keeping humans in the loop,” says Falco. Dhalliwal sees generative AI not necessarily in the device itself but instead in the outputs of the device. “If I’m getting a data stream from a medical device that is strictly regulated, I can gain insights and drive analytics. That would probably be one of the first areas where we start to see regulators and the industry coming together around that telemetry.”

One example is a tool for diagnosing diabetic retinopathy with a smartphone, developed by Google. The smartphone is not a medical-grade device, yet it is being used to make a first-pass diagnosis.

How to balance a liberal and a risk-averse approach in the medical industry

The accuracy of any tools used in healthcare, including AI, is essential because they impact patients’ health. We must have guardrails and regulations to ensure transparency and the high quality of AI-based products.

Open source models allow anyone to train anything, making regulations crucial for safe application. While regulations may seem to hinder AI development, they are necessary to validate and ensure the accuracy of AI solutions, especially in medicine.

Lisa Falco, Lead Data & AI Consultant and Femtech Expert, Zühlke

“There is a need for anchoring AI with rules and regulations that align with foundational medical knowledge. This approach ensures that AI adheres to specific compliance regulations and respects the sovereignty of each market,” concludes Dhalliwal. 

The discussion within the OPEN MIC Next in Health Series shows that every answer about gen AI in healthcare generates further questions, some of which will be challenging to address given the rapidly evolving nature of AI and the breakthroughs expected soon. We must focus on how AI can help solve healthcare’s most significant problems. The benefits of using AI in medicine outweigh the threats, and the result of this equation should guide us like a beacon.

 

Did you miss “OPEN MIC Next in Health – Generative AI in Healthcare: Hype or Hope?” Click below to watch the video.