Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


WinoGrande: An Adversarial Winograd Schema Challenge at Scale

Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

Published: 2019-07-24
Tasks: Winogrande · Question Answering · Coreference Resolution · Common Sense Reasoning · Transfer Learning
Links: Paper · PDF · Code

Abstract

The Winograd Schema Challenge (WSC) (Levesque, Davis, and Morgenstern 2011), a benchmark for commonsense reasoning, is a set of 273 expert-crafted pronoun resolution problems originally designed to be unsolvable for statistical models that rely on selectional preferences or word associations. However, recent advances in neural language models have already reached around 90% accuracy on variants of WSC. This raises an important question whether these models have truly acquired robust commonsense capabilities or whether they rely on spurious biases in the datasets that lead to an overestimation of the true capabilities of machine commonsense. To investigate this question, we introduce WinoGrande, a large-scale dataset of 44k problems, inspired by the original WSC design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations. The best state-of-the-art methods on WinoGrande achieve 59.4-79.1%, which are 15-35% below human performance of 94.0%, depending on the amount of the training data allowed. Furthermore, we establish new state-of-the-art results on five related benchmarks - WSC (90.1%), DPR (93.1%), COPA (90.6%), KnowRef (85.6%), and Winogender (97.1%). These results have dual implications: on one hand, they demonstrate the effectiveness of WinoGrande when used as a resource for transfer learning. On the other hand, they raise a concern that we are likely to be overestimating the true capabilities of machine commonsense across all these benchmarks. We emphasize the importance of algorithmic bias reduction in existing and future benchmarks to mitigate such overestimation.
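The AfLite step described in the abstract is an iterative adversarial filter: train an ensemble of linear classifiers on random splits of precomputed embeddings, score each instance by how often held-out classifiers answer it correctly, and discard the most predictable instances, since those are solvable from dataset-specific biases alone. The following is a minimal sketch of that idea, not the authors' implementation: a least-squares linear probe stands in for their linear classifiers, and all parameter values (ensemble size, cutoff, removal budget) are illustrative.

```python
import numpy as np

def linear_probe(X_tr, y_tr, X_te):
    # Least-squares linear classifier: a lightweight stand-in for the
    # linear probes AfLite trains over precomputed embeddings.
    A = np.hstack([X_tr, np.ones((len(X_tr), 1))])
    w, *_ = np.linalg.lstsq(A, 2.0 * y_tr - 1.0, rcond=None)
    B = np.hstack([X_te, np.ones((len(X_te), 1))])
    return (B @ w > 0).astype(int)

def aflite(X, y, n_iters=4, n_ensemble=16, train_frac=0.5,
           cutoff=0.8, remove_per_iter=60, seed=0):
    """Iteratively drop instances that held-out linear probes solve
    too easily, i.e. instances answerable from surface biases alone."""
    rng = np.random.default_rng(seed)
    keep = np.arange(len(y))          # indices of surviving instances
    for _ in range(n_iters):
        correct = np.zeros(len(keep))
        counts = np.zeros(len(keep))
        for _ in range(n_ensemble):
            # Random train/held-out split of the surviving instances.
            perm = rng.permutation(len(keep))
            n_tr = int(train_frac * len(keep))
            tr, te = perm[:n_tr], perm[n_tr:]
            preds = linear_probe(X[keep[tr]], y[keep[tr]], X[keep[te]])
            correct[te] += preds == y[keep[te]]
            counts[te] += 1
        # Predictability score: fraction of held-out probes that got
        # each instance right. High score = likely bias artifact.
        score = correct / np.maximum(counts, 1)
        top = np.argsort(-score)[:remove_per_iter]
        biased = top[score[top] >= cutoff]
        if biased.size == 0:
            break
        keep = np.delete(keep, biased)
    return keep
```

On synthetic data where half the instances carry a giveaway feature, this filter preferentially removes those instances while retaining the rest, which mirrors how AfLite generalizes human-detectable word associations to machine-detectable embedding associations.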

Results

Task | Dataset | Metric | Value | Model
Question Answering | COPA | Accuracy | 90.6 | RoBERTa-Winogrande-ft 355M (fine-tuned)
Question Answering | COPA | Accuracy | 86.4 | RoBERTa-ft 355M (fine-tuned)
Question Answering | COPA | Accuracy | 84.4 | RoBERTa-Winogrande 355M (fine-tuned)
Question Answering | COPA | Accuracy | 76.4 | Causal Strength w/multi-word predicates
Question Answering | COPA | Accuracy | 65.4 | Pointwise Mutual Information (on 10M stories)
Common Sense Reasoning | WinoGrande | Accuracy | 79.1 | RoBERTa-Winogrande 355M (fine-tuned)
Common Sense Reasoning | WinoGrande | Accuracy | 64.9 | BERT-Winogrande 345M (fine-tuned)
Common Sense Reasoning | WinoGrande | Accuracy | 58.9 | RoBERTa-DPR 355M (0-shot)
Common Sense Reasoning | WinoGrande | Accuracy | 51.9 | BERT-large 345M (0-shot)
Common Sense Reasoning | WinoGrande | Accuracy | 51.0 | BERT-DPR 345M (0-shot)
Common Sense Reasoning | WinoGrande | Accuracy | 50.0 | RoBERTa-large 355M (0-shot)
Coreference Resolution | Winograd Schema Challenge | Accuracy | 90.1 | RoBERTa-WinoGrande 355M
Coreference Resolution | Winograd Schema Challenge | Accuracy | 83.1 | RoBERTa-DPR 355M
Coreference Resolution | Winograd Schema Challenge | Accuracy | 57.1 | WKH
Coreference Resolution | Winograd Schema Challenge | Accuracy | 52.8 | KEE+NKAM on WinoGrande

Related Papers

RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)