Large Language Models Encode Clinical Knowledge

Karan Singhal, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, Perry Payne, Martin Seneviratne, Paul Gamble, Chris Kelly, Nathanael Schärli, Aakanksha Chowdhery, Philip Mansfield, Blaise Aguera y Arcas, Dale Webster, Greg S. Corrado, Yossi Matias, Katherine Chou, Juraj Gottweis, Nenad Tomasev, Yun Liu, Alvin Rajkomar, Joelle Barral, Christopher Semturs, Alan Karthikesalingam, Vivek Natarajan

2022-12-26 · Clinical Knowledge · Question Answering · Natural Language Understanding · Open-Ended Question Answering · MMLU · Multiple-choice · Multiple Choice Question Answering (MCQA)

Abstract

Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing the prior state of the art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
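
The instruction prompt tuning mentioned in the abstract trains only a small set of continuous prompt embeddings while the base model stays frozen. Below is a minimal sketch of the idea, assuming a PyTorch setup in which the frozen LM consumes precomputed input embeddings; the embedding table, dimensions, and training hookup here are illustrative stand-ins, not the paper's actual Flan-PaLM 540B implementation.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable prompt embeddings prepended to every input sequence."""

    def __init__(self, n_prompt_tokens: int, d_model: int):
        super().__init__()
        # The only trainable parameters in instruction prompt tuning.
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, d_model)
        batch_size = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

# Illustrative stand-ins for the frozen model's embedding table and sizes.
d_model, vocab_size = 512, 32000
embed = nn.Embedding(vocab_size, d_model)
for p in embed.parameters():          # freeze the base model...
    p.requires_grad_(False)

soft_prompt = SoftPrompt(n_prompt_tokens=40, d_model=d_model)
optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-3)  # ...train only the prompt

tokens = torch.randint(0, vocab_size, (2, 16))   # toy batch of token ids
inputs = soft_prompt(embed(tokens))              # shape: (2, 40 + 16, d_model)
# `inputs` would be fed to the frozen transformer stack; the loss on the
# clinician-written exemplars backpropagates only into `soft_prompt.prompt`.
```

Because only the prompt embeddings are updated, the approach is parameter-efficient: a handful of exemplars suffices to adapt the model to the medical domain without touching the 540B base weights.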

Results

Task | Dataset | Metric | Value | Model
Question Answering | PubMedQA | Accuracy (%) | 79.0 | Flan-PaLM (540B, Few-shot)
Question Answering | PubMedQA | Accuracy (%) | 77.2 | Flan-PaLM (62B, Few-shot)
Question Answering | PubMedQA | Accuracy (%) | 75.2 | Flan-PaLM (540B, SC)
Question Answering | PubMedQA | Accuracy (%) | 67.6 | Flan-PaLM (8B, Few-shot)
Question Answering | PubMedQA | Accuracy (%) | 57.8 | PaLM (62B, Few-shot)
Question Answering | PubMedQA | Accuracy (%) | 55.0 | PaLM (540B, Few-shot)
Question Answering | PubMedQA | Accuracy (%) | 34.0 | PaLM (8B, Few-shot)
Question Answering | MedQA | Accuracy (%) | 67.6 | Flan-PaLM (540B)
Question Answering | MedQA | Accuracy (%) | 50.3 | PubMedGPT (2.7B)
Question Answering | MedQA | Accuracy (%) | 45.1 | BioLinkBERT (340M)
Question Answering | MedQA | Accuracy (%) | 33.3 | GPT-Neo (2.7B)
Question Answering | MedMCQA | Dev Set Accuracy (%) | 57.6 | Flan-PaLM (540B, SC)
Question Answering | MedMCQA | Dev Set Accuracy (%) | 56.5 | Flan-PaLM (540B, Few-shot)
Question Answering | MedMCQA | Dev Set Accuracy (%) | 54.5 | PaLM (540B, Few-shot)
Question Answering | MedMCQA | Dev Set Accuracy (%) | 53.6 | Flan-PaLM (540B, CoT)
Question Answering | MedMCQA | Dev Set Accuracy (%) | 46.2 | Flan-PaLM (62B, Few-shot)
Question Answering | MedMCQA | Dev Set Accuracy (%) | 43.4 | PaLM (62B, Few-shot)
Question Answering | MedMCQA | Dev Set Accuracy (%) | 34.5 | Flan-PaLM (8B, Few-shot)
Question Answering | MedMCQA | Dev Set Accuracy (%) | 26.7 | PaLM (8B, Few-shot)
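
In the model column, "Few-shot" denotes few-shot prompting, "CoT" chain-of-thought prompting, and "SC" self-consistency, which samples several chain-of-thought rationales and majority-votes over the final answers. A minimal sketch of the voting step for multiple-choice QA, assuming a hypothetical `sample_completion(prompt, temperature)` model call and simplified answer extraction:

```python
import re
from collections import Counter

def self_consistency_answer(prompt: str, sample_completion, k: int = 11) -> str:
    """Majority vote over k sampled chain-of-thought completions."""
    votes = []
    for _ in range(k):
        completion = sample_completion(prompt, temperature=0.7)  # hypothetical model call
        letters = re.findall(r"\b([A-E])\b", completion)         # option letters in the rationale
        if letters:
            votes.append(letters[-1])  # treat the last mentioned option as the final answer
    return Counter(votes).most_common(1)[0][0] if votes else ""
```

The regex-based extraction here is a simplification for illustration; the point is that sampling diverse rationales and voting tends to outperform a single greedy decode, as the SC rows above suggest for PubMedQA and MedMCQA.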

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
HATS: Hindi Analogy Test Set for Evaluating Reasoning in Large Language Models (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)