Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


TruthfulQA: Measuring How Models Mimic Human Falsehoods

Stephanie Lin, Jacob Hilton, Owain Evans

2021-09-08 · ACL 2022
Tasks: Question Answering · Question Generation · Language Modelling
Dataset: TruthfulQA · Topic: Misconceptions

Abstract

We propose a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. We crafted questions that some humans would answer falsely due to a false belief or misconception. To perform well, models must avoid generating false answers learned from imitating human texts. We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.
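The results table below reports the paper's multiple-choice metrics, MC1 and MC2: MC1 asks whether the model assigns its highest likelihood to a true reference answer among the candidates, and MC2 is the normalized probability mass the model places on the set of true answers. A minimal sketch of both scorers, assuming per-option log-probabilities have already been computed by the model (the function names `mc1` and `mc2` and the input format are illustrative, not the paper's official code):

```python
import math

def mc1(log_probs, true_idx):
    """MC1: 1.0 if the top-scoring option is a true answer, else 0.0.

    log_probs: per-option log-probabilities from the language model.
    true_idx:  set of indices of the true reference answers.
    """
    best = max(range(len(log_probs)), key=lambda i: log_probs[i])
    return 1.0 if best in true_idx else 0.0

def mc2(log_probs, true_idx):
    """MC2: total probability mass on true answers, normalized over all options."""
    probs = [math.exp(lp) for lp in log_probs]
    true_mass = sum(p for i, p in enumerate(probs) if i in true_idx)
    return true_mass / sum(probs)

# Example: option 0 is the true answer and scores highest.
scores = [math.log(0.6), math.log(0.3), math.log(0.1)]
print(mc1(scores, {0}))  # 1.0
print(mc2(scores, {0}))  # 0.6
```

The per-question scores are averaged over the benchmark's 817 questions to produce the MC1/MC2 values shown in the table.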

Results

Task | Dataset | Metric | Value | Model
Question Answering | TruthfulQA | % info | 89.84 | GPT-2 1.5B
Question Answering | TruthfulQA | % true | 29.5 | GPT-2 1.5B
Question Answering | TruthfulQA | % true (GPT-judge) | 29.87 | GPT-2 1.5B
Question Answering | TruthfulQA | BLEU | -4.91 | GPT-2 1.5B
Question Answering | TruthfulQA | BLEURT | -0.25 | GPT-2 1.5B
Question Answering | TruthfulQA | MC1 | 0.22 | GPT-2 1.5B
Question Answering | TruthfulQA | MC2 | 0.39 | GPT-2 1.5B
Question Answering | TruthfulQA | ROUGE | -9.41 | GPT-2 1.5B
Question Answering | TruthfulQA | % info | 97.55 | GPT-3 175B
Question Answering | TruthfulQA | % true | 20.44 | GPT-3 175B
Question Answering | TruthfulQA | % true (GPT-judge) | 20.56 | GPT-3 175B
Question Answering | TruthfulQA | BLEU | -17.38 | GPT-3 175B
Question Answering | TruthfulQA | BLEURT | -0.56 | GPT-3 175B
Question Answering | TruthfulQA | MC1 | 0.21 | GPT-3 175B
Question Answering | TruthfulQA | MC2 | 0.33 | GPT-3 175B
Question Answering | TruthfulQA | ROUGE | -17.75 | GPT-3 175B
Question Answering | TruthfulQA | % info | 89.96 | GPT-J 6B
Question Answering | TruthfulQA | % true | 26.68 | GPT-J 6B
Question Answering | TruthfulQA | % true (GPT-judge) | 27.17 | GPT-J 6B
Question Answering | TruthfulQA | BLEU | -7.58 | GPT-J 6B
Question Answering | TruthfulQA | BLEURT | -0.31 | GPT-J 6B
Question Answering | TruthfulQA | MC1 | 0.2 | GPT-J 6B
Question Answering | TruthfulQA | MC2 | 0.36 | GPT-J 6B
Question Answering | TruthfulQA | ROUGE | -11.35 | GPT-J 6B
Question Answering | TruthfulQA | % info | 64.5 | UnifiedQA 3B
Question Answering | TruthfulQA | % true | 53.86 | UnifiedQA 3B
Question Answering | TruthfulQA | % true (GPT-judge) | 53.24 | UnifiedQA 3B
Question Answering | TruthfulQA | BLEU | -0.16 | UnifiedQA 3B
Question Answering | TruthfulQA | BLEURT | 0.08 | UnifiedQA 3B
Question Answering | TruthfulQA | MC1 | 0.19 | UnifiedQA 3B
Question Answering | TruthfulQA | MC2 | 0.35 | UnifiedQA 3B
Question Answering | TruthfulQA | ROUGE | 1.76 | UnifiedQA 3B

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)