Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

UNICORN on RAINBOW: A Universal Commonsense Reasoning Model on a New Multitask Benchmark

Nicholas Lourie, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi

2021-03-24 · WinoGrande · Question Answering · Knowledge Graphs · Sentence Completion · Common Sense Reasoning · Transfer Learning
Paper · PDF · Code (official)

Abstract

Commonsense AI has long been seen as a near-impossible goal -- until recently. Now, research interest has sharply increased with an influx of new benchmarks and models. We propose two new ways to evaluate commonsense models, emphasizing their generality on new tasks and building on diverse, recently introduced benchmarks. First, we propose a new multitask benchmark, RAINBOW, to promote research on commonsense models that generalize well over multiple tasks and datasets. Second, we propose a novel evaluation, the cost equivalent curve, that sheds new light on how the choice of source datasets, pretrained language models, and transfer learning methods impacts performance and data efficiency. We perform extensive experiments -- over 200 experiments encompassing 4800 models -- and report multiple valuable and sometimes surprising findings, e.g., that transfer almost always leads to better or equivalent performance if following a particular recipe; that QA-based commonsense datasets transfer well to one another, while commonsense knowledge graphs do not; and that, perhaps counter-intuitively, larger models benefit more from transfer than smaller ones. Last but not least, we introduce a new universal commonsense reasoning model, UNICORN, that establishes new state-of-the-art performance across 8 popular commonsense benchmarks: aNLI (87.3%), CosmosQA (91.8%), HellaSWAG (93.9%), PIQA (90.1%), SocialIQa (83.2%), WinoGrande (86.6%), CycIC (94.0%), and CommonsenseQA (79.3%).
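The cost equivalent curve admits a compact numerical construction: for each data budget on the baseline's learning curve, find the (typically smaller) budget at which the transfer approach reaches the same performance. The sketch below illustrates that construction only; the learning-curve numbers are made up and the log-space interpolation is an implementation assumption, not the authors' code.

```python
"""Illustrative construction of a cost equivalent curve.

Given two learning curves -- performance vs. number of target-task
training examples, with and without transfer -- map each baseline
data budget to the budget the transfer approach needs to match it.
All numbers below are invented for demonstration.
"""
import numpy as np

# Hypothetical learning curves: (num_examples, accuracy).
baseline_n   = np.array([100, 1_000, 10_000, 100_000])
baseline_acc = np.array([0.55, 0.65, 0.74, 0.80])
transfer_n   = np.array([100, 1_000, 10_000, 100_000])
transfer_acc = np.array([0.63, 0.72, 0.79, 0.83])

def cost_equivalent_curve(base_n, base_acc, xfer_n, xfer_acc):
    """For each baseline budget, the transfer budget matching its accuracy.

    Interpolates in log(num_examples) and assumes accuracy increases
    monotonically with n, so the transfer curve can be inverted.
    Accuracies below the transfer curve's range clamp to its smallest budget.
    """
    equivalent = []
    for acc in base_acc:
        # Invert the transfer curve: accuracy -> log(num examples).
        log_m = np.interp(acc, xfer_acc, np.log(xfer_n))
        equivalent.append(np.exp(log_m))
    return np.array(equivalent)

eq = cost_equivalent_curve(baseline_n, baseline_acc, transfer_n, transfer_acc)
for n, m in zip(baseline_n, eq):
    print(f"baseline needs {int(n):>7,} examples -> transfer needs ~{float(m):,.0f}")
```

Plotting the equivalent budget against the baseline budget gives the curve itself; points below the diagonal indicate that transfer saves target-task data.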

Results

Task                   | Dataset       | Metric   | Value | Model
Question Answering     | SIQA          | Accuracy | 83.2  | Unicorn 11B (fine-tuned)
Question Answering     | PIQA          | Accuracy | 90.1  | Unicorn 11B (fine-tuned)
Common Sense Reasoning | WinoGrande    | Accuracy | 91.3  | Unicorn 11B (fine-tuned)
Common Sense Reasoning | CommonsenseQA | Accuracy | 79.3  | Unicorn 11B (fine-tuned)
Sentence Completion    | HellaSwag     | Accuracy | 93.9  | Unicorn 11B (fine-tuned)
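Each "(fine-tuned)" entry above is the single UNICORN model (built on T5) fine-tuned on one target task after multitask training on the six RAINBOW datasets. The sketch below illustrates the general shape of such a text-to-text multitask recipe; the serialization format, toy examples, and equal task weighting are assumptions for illustration, not the authors' actual preprocessing.

```python
"""Sketch of a text-to-text multitask mixture over the RAINBOW tasks.

Every multiple-choice example is serialized into an (input, target)
string pair so one seq2seq model can train on all tasks at once.
The prompt format here is invented for demonstration.
"""
import random

# Toy examples standing in for the six RAINBOW datasets.
RAINBOW = {
    "anli":       [{"context": "...", "choices": ["h1", "h2"], "label": 0}],
    "cosmosqa":   [{"context": "...", "choices": ["a", "b", "c", "d"], "label": 2}],
    "hellaswag":  [{"context": "...", "choices": ["e1", "e2", "e3", "e4"], "label": 1}],
    "piqa":       [{"context": "...", "choices": ["s1", "s2"], "label": 0}],
    "socialiqa":  [{"context": "...", "choices": ["a", "b", "c"], "label": 1}],
    "winogrande": [{"context": "...", "choices": ["o1", "o2"], "label": 0}],
}

def to_text_to_text(task, ex):
    """Serialize a multiple-choice example into (input, target) strings."""
    choices = " ".join(f"({i}) {c}" for i, c in enumerate(ex["choices"]))
    return (f"[{task}] context: {ex['context']} choices: {choices}",
            str(ex["label"]))

def multitask_mixture(datasets, seed=0):
    """Pool all tasks into one shuffled training stream.

    Equal weighting here; the paper also compares other transfer and
    mixing strategies.
    """
    pool = [to_text_to_text(t, ex) for t, exs in datasets.items() for ex in exs]
    random.Random(seed).shuffle(pool)
    return pool

for src, tgt in multitask_mixture(RAINBOW):
    print(src, "->", tgt)
```

In the paper, the multitask-trained model is then fine-tuned separately on each target task, which is what the "(fine-tuned)" entries in the table denote.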

Related Papers

RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
SMART: Relation-Aware Learning of Geometric Representations for Knowledge Graphs (2025-07-17)
Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)