Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Towards Expert-Level Medical Question Answering with Large Language Models

Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl, Heather Cole-Lewis, Darlene Neal, Mike Schaekermann, Amy Wang, Mohamed Amin, Sami Lachgar, Philip Mansfield, Sushant Prakash, Bradley Green, Ewa Dominowska, Blaise Aguera y Arcas, Nenad Tomasev, Yun Liu, Renee Wong, Christopher Semturs, S. Sara Mahdavi, Joelle Barral, Dale Webster, Greg S. Corrado, Yossi Matias, Shekoofeh Azizi, Alan Karthikesalingam, Vivek Natarajan

Published 2023-05-16 · Question Answering · Protein Folding · MMLU · Multiple Choice Question Answering (MCQA)

Abstract

Recent artificial intelligence (AI) systems have reached milestones in "grand challenges" ranging from Go to protein-folding. The capability to retrieve medical knowledge, reason over it, and answer medical questions comparably to physicians has long been viewed as one such grand challenge. Large language models (LLMs) have catalyzed significant progress in medical question answering; Med-PaLM was the first model to exceed a "passing" score in US Medical Licensing Examination (USMLE) style questions with a score of 67.2% on the MedQA dataset. However, this and other prior work suggested significant room for improvement, especially when models' answers were compared to clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by leveraging a combination of base LLM improvements (PaLM 2), medical domain finetuning, and prompting strategies including a novel ensemble refinement approach. Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM by over 19% and setting a new state-of-the-art. We also observed performance approaching or exceeding state-of-the-art across MedMCQA, PubMedQA, and MMLU clinical topics datasets. We performed detailed human evaluations on long-form questions along multiple axes relevant to clinical applications. In pairwise comparative ranking of 1066 consumer medical questions, physicians preferred Med-PaLM 2 answers to those produced by physicians on eight of nine axes pertaining to clinical utility (p < 0.001). We also observed significant improvements compared to Med-PaLM on every evaluation axis (p < 0.001) on newly introduced datasets of 240 long-form "adversarial" questions to probe LLM limitations. While further studies are necessary to validate the efficacy of these models in real-world settings, these results highlight rapid progress towards physician-level performance in medical question answering.
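One of the prompting strategies referenced above, chain-of-thought with self-consistency (the "CoT + SC" variant in the results below), samples multiple reasoning chains and majority-votes their final answers. A minimal sketch of that voting step, assuming a hypothetical `sample_fn` that draws one (reasoning, answer) completion per call (this is an illustration, not the paper's implementation):

```python
from collections import Counter

def self_consistency(question, sample_fn, n_samples=11):
    """Chain-of-thought with self-consistency: sample several reasoning
    chains and majority-vote the final answers. `sample_fn` is a
    hypothetical stand-in for an LLM sampling call that returns a
    (reasoning, answer) tuple; it is not part of any real API."""
    answers = [sample_fn(question)[1] for _ in range(n_samples)]
    # Majority vote over the sampled final answers.
    return Counter(answers).most_common(1)[0][0]
```

The reasoning text is discarded here; only the extracted final answers are voted on, which is what makes self-consistency robust to individual faulty chains.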

Results

Task | Dataset | Metric | Value | Model
Question Answering | PubMedQA | Accuracy | 79.2 | Med-PaLM 2 (5-shot)
Question Answering | PubMedQA | Accuracy | 75.0 | Med-PaLM 2 (ER)
Question Answering | PubMedQA | Accuracy | 74.0 | Med-PaLM 2 (CoT + SC)
Question Answering | MedQA | Accuracy | 85.4 | Med-PaLM 2
Question Answering | MedQA | Accuracy | 83.7 | Med-PaLM 2 (CoT + SC)
Question Answering | MedQA | Accuracy | 79.7 | Med-PaLM 2 (5-shot)
Question Answering | MMLU (Clinical Knowledge) | Accuracy | 88.7 | Med-PaLM 2 (ER)
Question Answering | MMLU (Clinical Knowledge) | Accuracy | 88.3 | Med-PaLM 2 (5-shot)
Question Answering | MMLU (Clinical Knowledge) | Accuracy | 88.3 | Med-PaLM 2 (CoT + SC)
Question Answering | MMLU (College Biology) | Accuracy | 95.8 | Med-PaLM 2 (ER)
Question Answering | MMLU (College Biology) | Accuracy | 95.1 | Med-PaLM 2 (CoT + SC)
Question Answering | MMLU (College Biology) | Accuracy | 94.4 | Med-PaLM 2 (5-shot)
Question Answering | MMLU (Professional Medicine) | Accuracy | 95.2 | Med-PaLM 2 (5-shot)
Question Answering | MMLU (Professional Medicine) | Accuracy | 93.4 | Med-PaLM 2 (CoT + SC)
Question Answering | MMLU (Professional Medicine) | Accuracy | 92.3 | Med-PaLM 2 (ER)
Question Answering | MMLU (Medical Genetics) | Accuracy | 92.0 | Med-PaLM 2 (ER)
Question Answering | MMLU (Medical Genetics) | Accuracy | 90.0 | Med-PaLM 2 (5-shot)
Question Answering | MMLU (Medical Genetics) | Accuracy | 89.0 | Med-PaLM 2 (CoT + SC)
Question Answering | MedMCQA | Test accuracy (%) | 72.3 | Med-PaLM 2 (ER)
Question Answering | MedMCQA | Test accuracy (%) | 71.5 | Med-PaLM 2 (CoT + SC)
Question Answering | MedMCQA | Test accuracy (%) | 71.3 | Med-PaLM 2 (5-shot)
Question Answering | MMLU (Anatomy) | Accuracy | 84.4 | Med-PaLM 2 (ER)
Question Answering | MMLU (Anatomy) | Accuracy | 80.0 | Med-PaLM 2 (CoT + SC)
Question Answering | MMLU (Anatomy) | Accuracy | 77.8 | Med-PaLM 2 (5-shot)
Question Answering | MMLU (College Medicine) | Accuracy | 83.2 | Med-PaLM 2 (ER)
Question Answering | MMLU (College Medicine) | Accuracy | 81.5 | Med-PaLM 2 (CoT + SC)
Question Answering | MMLU (College Medicine) | Accuracy | 80.9 | Med-PaLM 2 (5-shot)
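The "ER" rows above use the ensemble refinement strategy described in the abstract: the model first samples several reasoning drafts, then generates a refined answer conditioned on all of them. A rough two-pass sketch, assuming a generic `generate` callable (prompt in, text out) that stands in for a model call and is not the paper's actual pipeline:

```python
def ensemble_refinement(question, generate, n_drafts=4):
    """Two-stage 'ensemble refinement' sketch: sample several reasoning
    drafts, then condition a second generation on all of them to produce
    the final answer. `generate` is a hypothetical text-completion
    callable (prompt -> str), not a real model API."""
    # Stage 1: sample multiple independent chain-of-thought drafts.
    draft_prompt = f"Q: {question}\nReason step by step, then answer."
    drafts = [generate(draft_prompt) for _ in range(n_drafts)]
    # Stage 2: refine, conditioning on every draft at once.
    refine_prompt = (
        f"Q: {question}\n"
        "Candidate reasonings:\n" + "\n---\n".join(drafts) + "\n"
        "Considering the drafts above, give a single refined final answer."
    )
    return generate(refine_prompt)
```

Unlike self-consistency's simple vote, the second stage lets the model weigh and reconcile the drafts rather than merely count their answers.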

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Learning What Matters: Probabilistic Task Selection via Mutual Information for Model Finetuning (2025-07-16)
Step-wise Policy for Rare-tool Knowledge (SPaRK): Offline RL that Drives Diverse Tool Use in LLMs (2025-07-15)