Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Hungry Hungry Hippos: Towards Language Modeling with State Space Models

Daniel Y. Fu, Tri Dao, Khaled K. Saab, Armin W. Thomas, Atri Rudra, Christopher Ré

Published: 2022-12-28

Tasks: Question Answering, Few-Shot Learning, Coreference Resolution, Natural Language Inference, Long-range Modeling, Word Sense Disambiguation, Language Modelling

Abstract

State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor hardware utilization. In this paper, we make progress on understanding the expressivity gap between SSMs and attention in language modeling, and on reducing the hardware barrier between SSMs and attention. First, we use synthetic language modeling tasks to understand the gap between SSMs and attention. We find that existing SSMs struggle with two capabilities: recalling earlier tokens in the sequence and comparing tokens across the sequence. To understand the impact on language modeling, we propose a new SSM layer, H3, that is explicitly designed for these abilities. H3 matches attention on the synthetic languages and comes within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid 125M-parameter H3-attention model that retains two attention layers surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to improve the efficiency of training SSMs on modern hardware, we propose FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on sequences up to 8K, and introduces a novel state passing algorithm that exploits the recurrent properties of SSMs to scale to longer sequences. FlashConv yields 2$\times$ speedup on the long-range arena benchmark and allows hybrid language models to generate text 2.4$\times$ faster than Transformers. Using FlashConv, we scale hybrid H3-attention language models up to 2.7B parameters on the Pile and find promising initial results, achieving lower perplexity than Transformers and outperforming Transformers in zero- and few-shot learning on a majority of tasks in the SuperGLUE benchmark.
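The efficiency gains described above come from computing the SSM's long convolution with FFTs rather than directly. As a minimal sketch (not the paper's fused FlashConv kernel, which also uses block FFT and state passing), the O(L log L) convolution an SSM layer reduces to, once its state-space kernel has been materialized, looks like:

```python
import numpy as np

def fftconv(u, k):
    """Causal long convolution y = k * u via FFT in O(L log L).

    u : input sequence of length L
    k : SSM convolution kernel of length L
    Zero-padding to 2L avoids circular wrap-around, so the first L
    outputs equal the linear (causal) convolution.
    """
    L = u.shape[-1]
    n = 2 * L
    y = np.fft.irfft(np.fft.rfft(u, n=n) * np.fft.rfft(k, n=n), n=n)
    return y[..., :L]

# Sanity check against the direct O(L^2) convolution.
u = np.random.randn(64)
k = np.random.randn(64)
assert np.allclose(fftconv(u, k), np.convolve(u, k)[:64])
```

FlashConv's contribution is making this computation fast on real hardware (fused block-FFT kernels for sequences up to 8K, plus a state-passing algorithm for longer ones); the sketch above only shows the underlying algorithmic reduction.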

Results

Task | Dataset | Metric | Value | Model
Question Answering | COPA | Accuracy | 81 | Hybrid H3 2.7B (0-shot, logit scoring)
Question Answering | COPA | Accuracy | 77 | Hybrid H3 2.7B (3-shot, logit scoring)
Question Answering | COPA | Accuracy | 67 | Hybrid H3 125M (0-shot, logit scoring)
Question Answering | COPA | Accuracy | 67 | Hybrid H3 125M (0-shot, rank classification)
Question Answering | COPA | Accuracy | 51 | H3 125M (0-shot, rank classification)
Question Answering | MultiRC | EM | 59.7 | Hybrid H3 355M (3-shot, logit scoring)
Question Answering | MultiRC | EM | 59.5 | Hybrid H3 355M (0-shot, logit scoring)
Question Answering | MultiRC | EM | 51.4 | Hybrid H3 125M (0-shot, logit scoring)
Question Answering | MultiRC | EM | 48.9 | Hybrid H3 125M (3-shot, logit scoring)
Question Answering | BoolQ | Accuracy | 61.7 | Hybrid H3 1.3B (0-shot, logit scoring)
Question Answering | BoolQ | Accuracy | 60.6 | Hybrid H3 2.7B (3-shot, logit scoring)
Question Answering | BoolQ | Accuracy | 59.6 | Hybrid H3 125M (0-shot, logit scoring)
Question Answering | BoolQ | Accuracy | 56.1 | Hybrid H3 125M (3-shot, logit scoring)
Question Answering | BoolQ | Accuracy | 56.1 | Hybrid H3 125M (3-shot, rank classification)
Word Sense Disambiguation | Words in Context | Accuracy | 51.4 | Hybrid H3 125M (0-shot, logit scoring)
Word Sense Disambiguation | Words in Context | Accuracy | 51.4 | Hybrid H3 125M (0-shot, rank classification)
Word Sense Disambiguation | Words in Context | Accuracy | 49.1 | Hybrid H3 125M (3-shot, logit scoring)
Language Modelling | WikiText-103 | Test perplexity | 10.6 | Hybrid H3 (2.7B)
Language Modelling | WikiText-103 | Test perplexity | 12.5 | Hybrid H3 (1.3B)
Language Modelling | WikiText-103 | Test perplexity | 16.9 | Hybrid H3 (355M)
Language Modelling | WikiText-103 | Test perplexity | 18.5 | Hybrid H3 125M
Language Modelling | WikiText-103 | Test perplexity | 23.7 | Hybrid H3 (125M)
Language Modelling | The Pile | Test perplexity | 10.2 | Hybrid H3 125M
Language Modelling | The Pile | Test perplexity | 10.7 | Transformer 125M
Coreference Resolution | Winograd Schema Challenge | Accuracy | 63.5 | H3 125M (3-shot, rank classification)
Coreference Resolution | Winograd Schema Challenge | Accuracy | 61.5 | H3 125M (0-shot, rank classification)
Coreference Resolution | Winograd Schema Challenge | Accuracy | 43.3 | Hybrid H3 125M (3-shot, logit scoring)
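The table's model names reference two zero-/few-shot evaluation protocols: logit scoring and rank classification. As a minimal, hypothetical sketch (the input format and length normalization are assumptions, not the paper's exact evaluation code), rank classification scores each candidate answer's continuation under the language model and picks the highest:

```python
def rank_classify(logprobs_per_option):
    """Rank classification: choose the answer option whose continuation
    has the highest length-normalized log-probability under the LM.

    logprobs_per_option : list of lists, where each inner list holds the
    per-token log-probabilities the model assigns to one candidate
    answer (hypothetical input; in practice these come from the model's
    forward pass over prompt + option).
    """
    scores = [sum(lp) / len(lp) for lp in logprobs_per_option]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy example: option 1 has higher mean log-probability, so it wins.
assert rank_classify([[-1.0, -1.0], [-0.5, -0.5, -0.5]]) == 1
```

Logit scoring differs in that it compares the model's logits for the answer tokens at a single position rather than scoring full continuations; the table shows the two protocols can give different accuracies for the same checkpoint.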

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
GLAD: Generalizable Tuning for Vision-Language Models (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)