Training AI for Healthcare With a Focus on “First, Do No Harm,” Led by Munjal Shah

Munjal Shah, founder of medical AI startup Hippocratic AI, aims to harness the power of large language models (LLMs) to provide crucial but non-diagnostic healthcare services without putting patients at risk. LLMs like ChatGPT have demonstrated an impressive ability to understand questions, generate original responses, and communicate information conversationally. Shah sees an opportunity to apply these skills to alleviate healthcare staffing shortages and improve patient outcomes through better communication and guidance.

However, he stresses the need to carefully define the parameters of how AI is applied in healthcare based on the physician’s oath to “first, do no harm.” Hippocratic AI focuses specifically on leveraging AI for ancillary care tasks like chronic disease management, care navigation, diet/lifestyle recommendations, and patient communication. The AI will not make diagnoses, determine treatment plans, or provide information that could lead to patient harm if inaccurate.

Shah points out that a significant barrier to effective patient care is the limited time clinicians can dedicate to understanding patient needs and concerns. AI systems can help bridge communication gaps by providing personalized explanations, reminders, and follow-ups that human healthcare workers often lack the bandwidth for. Surprisingly, early research even suggests AI-generated responses can come across as more empathetic than those of overburdened physicians in brief patient interactions.

However, Shah emphasizes that achieving safe, high-quality AI guidance requires extensive specialized training on validated medical information rather than general internet content. Hippocratic AI prioritizes evidence-based sources like peer-reviewed research and standards-of-care documents over the broad web content on which general-purpose LLMs are pre-trained. The company also refines the model’s behavior through reinforcement learning from healthcare professional feedback.
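To make that two-stage idea concrete, here is a minimal, purely illustrative Python sketch of curating training data from vetted sources and then nudging a system with clinician feedback. The source whitelist, the FeedbackTuner class, and the simulated ratings are all hypothetical stand-ins, and a real pipeline would fine-tune model weights rather than reweight response templates; this is not Hippocratic AI’s actual implementation.

```python
import random
from collections import defaultdict

# Hypothetical source whitelist standing in for "evidence-based sources."
VETTED_SOURCES = {"peer_reviewed_journal", "standards_of_care"}

def curate(corpus):
    """Keep only documents from vetted, evidence-based sources."""
    return [doc for doc in corpus if doc["source"] in VETTED_SOURCES]

class FeedbackTuner:
    """Toy stand-in for reinforcement learning from human feedback:
    clinician ratings shift preference weights toward response styles
    rated safe and clear."""

    def __init__(self, styles):
        self.weights = defaultdict(lambda: 1.0, {s: 1.0 for s in styles})

    def choose(self):
        styles, w = zip(*self.weights.items())
        return random.choices(styles, weights=w, k=1)[0]

    def update(self, style, clinician_rating, lr=0.5):
        # Ratings in [0, 1]; above 0.5 reinforces the style, below penalizes it.
        delta = lr * (clinician_rating - 0.5)
        self.weights[style] = max(0.1, self.weights[style] + delta)

corpus = [
    {"source": "peer_reviewed_journal", "text": "Metformin dosing guidance..."},
    {"source": "web_forum", "text": "Unverified home remedy..."},
]
print(f"{len(curate(corpus))} of {len(corpus)} documents kept after curation")

tuner = FeedbackTuner(["plain_language", "clinical_jargon"])
for _ in range(20):
    style = tuner.choose()
    rating = 0.9 if style == "plain_language" else 0.2  # simulated clinician scores
    tuner.update(style, rating)
print("learned preference:", dict(tuner.weights))
```

Even in this toy form, the design choice is visible: the model’s knowledge base is constrained up front by curation, while its behavior is shaped afterward by expert feedback rather than by raw internet data.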

So far, testing shows Hippocratic AI has outperformed other leading LLMs, including GPT-4, on relevant medical exams and bedside-manner assessments. While cautious about extrapolating these results too far, Shah sees solid preliminary evidence that specialized healthcare LLMs could effectively provide crucial ancillary services at scale.

In the long term, successfully demonstrating AI’s ability to deliver specific supportive care could open the door to alleviating clinician burnout and capacity limitations across global healthcare systems. But Shah maintains that AI must adhere strictly to the maxim of “first, do no harm” by avoiding high-risk recommendations or responsibilities requiring complex medical judgment. With the right human-centered training approach focused on building healthcare LLMs’ communication skills over diagnostic abilities, companies like Hippocratic AI hope to expand access to empathetic guidance without compromising patient safety.
