Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


A Surprisingly Robust Trick for Winograd Schema Challenge

Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz

2019-05-15 · Coreference Resolution · Natural Language Inference · Common Sense Reasoning · Natural Language Understanding · Language Modelling

Abstract

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 strongly improves when fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-the-art solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models are also consistently more robust on the "complex" subsets of WSC273, introduced by Trichelair et al. (2018).
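The fine-tuning procedure itself is described in the paper; as a rough illustration of the candidate-scoring idea it relies on (replace the ambiguous pronoun with each candidate referent, mask the candidate's tokens, and compare the masked-LM probabilities), here is a minimal toy sketch. `masked_logprob` is a hypothetical stand-in for a call to a fine-tuned BERT masked LM, and the whitespace tokenization is a simplification; the actual models use WordPiece tokenization and real BERT probabilities.

```python
from typing import Callable, List

def score_candidate(
    sentence_template: str,
    candidate: str,
    masked_logprob: Callable[[str, List[str]], List[float]],
) -> float:
    """Score one candidate referent for a Winograd-style sentence.

    The pronoun slot in `sentence_template` is marked "[MASK]". The
    candidate's tokens take the pronoun's place, all of them are masked,
    and the score is the sum of the log-probabilities the masked LM
    assigns to those tokens (the log of the product of token
    probabilities). `masked_logprob` is a hypothetical interface: given a
    sentence with N "[MASK]" slots and the N target tokens, it returns one
    log-probability per slot.
    """
    tokens = candidate.split()  # toy whitespace tokenization
    masked = sentence_template.replace(
        "[MASK]", " ".join(["[MASK]"] * len(tokens)), 1
    )
    return sum(masked_logprob(masked, tokens))

def resolve(
    sentence_template: str,
    candidates: List[str],
    masked_logprob: Callable[[str, List[str]], List[float]],
) -> str:
    """Pick the candidate the masked LM scores as most probable."""
    return max(
        candidates,
        key=lambda c: score_candidate(sentence_template, c, masked_logprob),
    )
```

With a real fine-tuned model plugged in as `masked_logprob`, resolving a schema like "The trophy doesn't fit in the suitcase because [MASK] is too big." reduces to comparing the scores of "trophy" and "suitcase".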

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Natural Language Inference | WNLI | Accuracy | 74.7 | BERTwiki 340M (fine-tuned on WSCR) |
| Natural Language Inference | WNLI | Accuracy | 71.9 | BERT-large 340M (fine-tuned on WSCR) |
| Natural Language Inference | WNLI | Accuracy | 70.5 | BERT-base 110M (fine-tuned on WSCR) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 72.5 | BERTwiki 340M (fine-tuned on WSCR) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 71.4 | BERT-large 340M (fine-tuned on WSCR) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 70.3 | BERTwiki 340M (fine-tuned on half of WSCR) |
| Coreference Resolution | Winograd Schema Challenge | Accuracy | 62.3 | BERT-base 110M (fine-tuned on WSCR) |

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)