Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Big Bird: Transformers for Longer Sequences

Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed

Published: 2020-07-28 · NeurIPS 2020
Tasks: Text Classification, Question Answering, Text Summarization, Natural Language Inference, Semantic Textual Similarity, Linguistic Acceptability
Links: Paper · PDF · Code (official implementation plus community implementations)

Abstract

Transformer-based models, such as BERT, have been among the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having $O(1)$ global tokens (such as CLS) that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences up to 8× longer than was previously possible on similar hardware. As a consequence of this ability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data.
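The abstract describes three sparse attention patterns that together replace full attention: a sliding window over nearby tokens, a handful of global tokens (such as CLS) that attend everywhere, and a few random connections per token. A minimal sketch of the resulting attention mask is below — this is an illustration of the pattern only, not the authors' block-sparse implementation, and the parameter names (`window`, `num_global`, `num_random`) are chosen here for clarity:

```python
import numpy as np

def bigbird_attention_mask(seq_len, window=3, num_global=2, num_random=2, seed=0):
    """Boolean mask combining BigBird's three sparse patterns:
    sliding window, global tokens, and random attention.
    mask[i, j] == True means query i may attend to key j."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # Sliding-window attention: each token attends to its neighbours.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
    # Global tokens (e.g. CLS) attend to everything and are attended by everyone.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    # Random attention: each token also attends to a few random positions.
    for i in range(seq_len):
        mask[i, rng.choice(seq_len, size=num_random, replace=False)] = True
    return mask

mask = bigbird_attention_mask(16)
# Each row has O(window + num_global + num_random) connections, independent
# of seq_len, so total attention cost grows linearly rather than quadratically.
print(mask.sum(), "of", mask.size, "entries attended")
```

Because the number of attended positions per row is fixed, doubling `seq_len` roughly doubles the work, which is the linear scaling the paper claims.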

Results

Task | Dataset | Metric | Value | Model
Question Answering | HotpotQA | ANS-F1 | 0.755 | BigBird-etc
Question Answering | HotpotQA | JOINT-F1 | 0.736 | BigBird-etc
Question Answering | HotpotQA | SUP-F1 | 0.891 | BigBird-etc
Question Answering | TriviaQA | F1 | 80.9 | BigBird-etc
Question Answering | WikiHop | Test | 82.3 | BigBird-etc
Natural Language Inference | MultiNLI | Matched | 87.5 | BigBird
Semantic Textual Similarity | MRPC | F1 | 91.5 | BigBird
Semantic Textual Similarity | STS Benchmark | Spearman Correlation | 0.878 | BigBird
Sentiment Analysis | SST-2 Binary classification | Accuracy | 94.6 | BigBird
Text Summarization | BigPatent | ROUGE-1 | 60.64 | BigBird-Pegasus
Text Summarization | BigPatent | ROUGE-2 | 42.46 | BigBird-Pegasus
Text Summarization | BigPatent | ROUGE-L | 50.01 | BigBird-Pegasus
Text Summarization | arXiv | ROUGE-1 | 46.63 | BigBird-Pegasus
Text Summarization | arXiv | ROUGE-2 | 19.02 | BigBird-Pegasus
Text Summarization | arXiv | ROUGE-L | 41.77 | BigBird-Pegasus
Text Summarization | Pubmed | ROUGE-1 | 46.32 | BigBird-Pegasus
Text Summarization | Pubmed | ROUGE-2 | 20.65 | BigBird-Pegasus
Text Summarization | Pubmed | ROUGE-L | 42.33 | BigBird-Pegasus
Text Summarization | BBC XSum | ROUGE-1 | 47.12 | BigBird-Pegasus
Text Summarization | BBC XSum | ROUGE-2 | 24.05 | BigBird-Pegasus
Text Summarization | BBC XSum | ROUGE-L | 38.8 | BigBird-Pegasus
Text Summarization | CNN / Daily Mail | ROUGE-1 | 43.84 | BigBird-Pegasus
Text Summarization | CNN / Daily Mail | ROUGE-2 | 21.11 | BigBird-Pegasus
Text Summarization | CNN / Daily Mail | ROUGE-L | 40.74 | BigBird-Pegasus
Text Classification | Hyperpartisan | Accuracy | 92.2 | BigBird
Text Classification | Arxiv HEP-TH citation graph | Accuracy | 92.31 | BigBird
Text Classification | Patents | Accuracy | 69.3 | BigBird
Document Summarization | BBC XSum | ROUGE-1 | 47.12 | BigBird-Pegasus
Document Summarization | BBC XSum | ROUGE-2 | 24.05 | BigBird-Pegasus
Document Summarization | BBC XSum | ROUGE-L | 38.8 | BigBird-Pegasus
Document Summarization | CNN / Daily Mail | ROUGE-1 | 43.84 | BigBird-Pegasus
Document Summarization | CNN / Daily Mail | ROUGE-2 | 21.11 | BigBird-Pegasus
Document Summarization | CNN / Daily Mail | ROUGE-L | 40.74 | BigBird-Pegasus
Classification | Hyperpartisan | Accuracy | 92.2 | BigBird
Classification | Arxiv HEP-TH citation graph | Accuracy | 92.31 | BigBird
Classification | Patents | Accuracy | 69.3 | BigBird

Related Papers

Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)