When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute

Tao Lei

2021-02-24 · EMNLP 2021 · Tasks: Machine Translation, Language Modelling

Links: Paper · PDF · Code (official)

Abstract

Large language models have become increasingly difficult to train because of the growing computation time and cost. In this work, we present SRU++, a highly efficient architecture that combines fast recurrence and attention for sequence modeling. SRU++ exhibits strong modeling capacity and training efficiency. On standard language modeling benchmarks such as the Enwik8, Wiki-103, and Billion Word datasets, our model obtains better bits-per-character and perplexity while incurring 3x-10x less training cost than top-performing Transformer models. For instance, our model achieves a state-of-the-art result on the Enwik8 dataset using 1.6 days of training on an 8-GPU machine. We further demonstrate that SRU++ requires minimal attention for near state-of-the-art performance. Our results suggest jointly leveraging fast recurrence with little attention as a promising direction for accelerating model training and inference.
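To make the "fast recurrence plus a little attention" idea concrete, here is a minimal PyTorch sketch of an SRU++-style layer. It is not the official implementation (the released code uses a fused CUDA kernel for the recurrence and stacks many such layers): the class name `SRUppSketch`, the single attention head, and the omission of normalization, dropout, and multi-layer stacking are all simplifications assumed here for illustration.

```python
# A minimal sketch of an SRU++-style layer, assuming simplified shapes
# and a single attention head. Not the official implementation.
import torch
import torch.nn as nn

class SRUppSketch(nn.Module):
    def __init__(self, d_model: int, d_attn: int):
        super().__init__()
        # Attention replaces SRU's single large input projection:
        # project down to d_attn, attend, then project up to 3 * d_model
        # (one slice each for candidate state, forget gate, reset gate).
        self.q = nn.Linear(d_model, d_attn, bias=False)
        self.k = nn.Linear(d_attn, d_attn, bias=False)
        self.v = nn.Linear(d_attn, d_attn, bias=False)
        self.out = nn.Linear(d_attn, 3 * d_model, bias=False)
        self.alpha = nn.Parameter(torch.zeros(1))  # residual mixing scalar
        # Elementwise recurrence parameters (v_f, v_r, b_f, b_r in the paper).
        self.vf = nn.Parameter(torch.zeros(d_model))
        self.vr = nn.Parameter(torch.zeros(d_model))
        self.bf = nn.Parameter(torch.zeros(d_model))
        self.br = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):
        # x: (seq_len, batch, d_model)
        L = x.size(0)
        q = self.q(x)  # queries; keys and values derive from q
        scores = torch.einsum("ibd,jbd->bij", q, self.k(q)) / q.size(-1) ** 0.5
        # Causal mask for language modeling: position i attends to j <= i.
        causal = torch.triu(torch.ones(L, L, dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(causal, float("-inf"))
        a = torch.einsum("bij,jbd->ibd", scores.softmax(dim=-1), self.v(q))
        # Residual combination of attention output with queries, then
        # one projection feeding all three recurrence slices.
        u0, u1, u2 = self.out(q + self.alpha * a).chunk(3, dim=-1)
        c = torch.zeros_like(x[0])
        hs = []
        for t in range(L):  # only cheap elementwise ops inside the time loop
            f = torch.sigmoid(u1[t] + self.vf * c + self.bf)  # forget gate
            c = f * c + (1 - f) * u0[t]                       # cell state
            r = torch.sigmoid(u2[t] + self.vr * c + self.br)  # reset gate
            hs.append(r * c + (1 - r) * x[t])                 # highway output
        return torch.stack(hs)

# Toy forward pass: seq_len=128, batch=4, d_model=64.
layer = SRUppSketch(d_model=64, d_attn=16)
h = layer(torch.randn(128, 4, 64))
```

The point the sketch tries to capture is the source of the claimed efficiency: all large matrix multiplications are hoisted out of the time loop (here into the attention block and the `out` projection), leaving only elementwise operations inside the sequential recurrence.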

Results

Task               | Dataset          | Metric                  | Value | Model
-------------------|------------------|-------------------------|-------|------------
Language Modelling | WikiText-103     | Test perplexity         | 17.1  | SRU++ Large
Language Modelling | WikiText-103     | Validation perplexity   | 16.4  | SRU++ Large
Language Modelling | WikiText-103     | Test perplexity         | 18.3  | SRU++ Base
Language Modelling | WikiText-103     | Validation perplexity   | 17.5  | SRU++ Base
Language Modelling | One Billion Word | PPL                     | 23.5  | SRU++ Large
Language Modelling | One Billion Word | PPL                     | 25.1  | SRU++
Language Modelling | enwik8           | Bit per Character (BPC) | 0.95  | SRU++ Large
Language Modelling | enwik8           | Bit per Character (BPC) | 0.97  | SRU++ Base

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)