Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention

Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, Xuedong Huang

2021-12-06 · Common Sense Reasoning
Paper · PDF · Code · Code (official)

Abstract

Most of today's AI systems focus on using self-attention mechanisms and transformer architectures on large amounts of diverse data to achieve impressive performance gains. In this paper, we propose to augment the transformer architecture with an external attention mechanism to bring external knowledge and context to bear. By integrating external information into the prediction process, we hope to reduce the need for ever-larger models and increase the democratization of AI systems. We find that the proposed external attention mechanism can significantly improve the performance of existing AI systems, allowing practitioners to easily customize foundation AI models to many diverse downstream applications. In particular, we focus on the task of Commonsense Reasoning, demonstrating that the proposed external attention mechanism can augment existing transformer models and significantly improve the model's reasoning capabilities. The proposed system, Knowledgeable External Attention for commonsense Reasoning (KEAR), reaches human parity on the open CommonsenseQA research benchmark with an accuracy of 89.4% in comparison to the human accuracy of 88.9%.
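The abstract describes attending over external knowledge alongside the input. A minimal sketch of that idea at the attention level: knowledge embeddings are concatenated to the input sequence as extra keys and values, so each input token can attend to retrieved context. This is an illustrative simplification, not the paper's implementation (KEAR retrieves knowledge as text and feeds it through the full transformer); all function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(x, k_ext, w_q, w_k, w_v):
    """Single-head attention where input tokens also attend to
    external knowledge tokens (a hypothetical retrieval step,
    not shown, would produce k_ext).

    x:     (n, d) input token embeddings
    k_ext: (m, d) embeddings of retrieved external knowledge
    w_q, w_k, w_v: (d, d) projection matrices
    """
    h = np.concatenate([x, k_ext], axis=0)   # (n + m, d): input + knowledge
    q = x @ w_q                              # queries come from the input only
    k, v = h @ w_k, h @ w_v                  # keys/values cover both sources
    scores = q @ k.T / np.sqrt(x.shape[1])   # (n, n + m) scaled dot products
    return softmax(scores) @ v               # (n, d) knowledge-aware output

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))        # 4 input tokens
k_ext = rng.normal(size=(3, d))    # 3 retrieved knowledge tokens
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out = external_attention(x, k_ext, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

The output keeps the input's sequence length while mixing in knowledge tokens through the attention weights, which is the sense in which the mechanism "brings external knowledge and context to bear" without growing the model.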

Results

Task                    Dataset        Metric    Value  Model
Common Sense Reasoning  CommonsenseQA  Accuracy  91.2   DeBERTaV3-large+KEAR
Common Sense Reasoning  CommonsenseQA  Accuracy  89.4   KEAR
Common Sense Reasoning  CommonsenseQA  Accuracy  73     GPT-3 Direct Finetuned

Related Papers

Comparing Apples to Oranges: A Dataset & Analysis of LLM Humour Understanding from Traditional Puns to Topical Jokes (2025-07-17)
LoSiA: Efficient High-Rank Fine-Tuning via Subnet Localization and Optimization (2025-07-06)
EditInspector: A Benchmark for Evaluation of Text-Guided Image Edits (2025-06-11)
CheckManual: A New Challenge and Benchmark for Manual-based Appliance Manipulation (2025-06-11)
Prime the search: Using large language models for guiding geometric task and motion planning by warm-starting tree search (2025-06-08)
AmbiK: Dataset of Ambiguous Tasks in Kitchen Environment (2025-06-04)
ATLAS: Learning to Optimally Memorize the Context at Test Time (2025-05-29)
Spatial Knowledge Graph-Guided Multimodal Synthesis (2025-05-28)