Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Adaptively Sparse Transformers

Gonçalo M. Correia, Vlad Niculae, André F. T. Martins

Published: 2019-08-30 · IJCNLP 2019
Tasks: Machine Translation, Translation
Links: Paper · PDF · Code (official)

Abstract

Attention mechanisms have become ubiquitous in NLP. Recent architectures, notably the Transformer, learn powerful context-aware word representations through layered, multi-headed attention. The multiple heads learn diverse types of word relationships. However, with standard softmax attention, all attention heads are dense, assigning a non-zero weight to all context words. In this work, we introduce the adaptively sparse Transformer, wherein attention heads have flexible, context-dependent sparsity patterns. This sparsity is accomplished by replacing softmax with $\alpha$-entmax: a differentiable generalization of softmax that allows low-scoring words to receive precisely zero weight. Moreover, we derive a method to automatically learn the $\alpha$ parameter -- which controls the shape and sparsity of $\alpha$-entmax -- allowing attention heads to choose between focused or spread-out behavior. Our adaptively sparse Transformer improves interpretability and head diversity when compared to softmax Transformers on machine translation datasets. Findings of the quantitative and qualitative analysis of our approach include that heads in different layers learn different sparsity preferences and tend to be more diverse in their attention distributions than softmax Transformers. Furthermore, at no cost in accuracy, sparsity in attention heads helps to uncover different head specializations.
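The key mechanism in the abstract is that $\alpha$-entmax, unlike softmax, can assign exactly zero weight to low-scoring context words. The simplest closed-form instance is the $\alpha = 2$ case, known as sparsemax, which projects the score vector onto the probability simplex. Below is a minimal NumPy sketch of sparsemax for illustration; it is not the paper's official implementation (the authors release an `entmax` package supporting arbitrary $\alpha$, including the learned-$\alpha$ variant), and the function name and example scores are illustrative.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: the alpha=2 special case of alpha-entmax.

    Computes the Euclidean projection of the score vector z onto the
    probability simplex. Scores below a data-dependent threshold tau
    receive exactly zero probability, unlike softmax, which is dense.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]              # scores in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum      # indices kept in the support
    k_max = k[support][-1]                   # size of the support set
    tau = (cumsum[k_max - 1] - 1) / k_max    # threshold
    return np.maximum(z - tau, 0.0)

# Illustrative attention scores for four context words:
scores = np.array([3.0, 1.0, 0.2, -1.5])
p = sparsemax(scores)
# p sums to 1, and the low-scoring words get exactly zero weight,
# which is what makes the resulting attention patterns interpretable.
```

For general $\alpha$ between 1 (softmax) and 2 (sparsemax), the paper computes $\alpha$-entmax by bisection on the threshold, and derives the gradient with respect to $\alpha$ so each attention head can learn how sparse to be.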

Results

Task                | Dataset                   | Metric     | Value | Model
Machine Translation | WMT2016 Romanian-English  | BLEU score | 33.1  | Adaptively Sparse Transformer (1.5-entmax)
Machine Translation | WMT2016 Romanian-English  | BLEU score | 32.89 | Adaptively Sparse Transformer (alpha-entmax)
Machine Translation | IWSLT2017 German-English  | BLEU score | 29.9  | Adaptively Sparse Transformer (alpha-entmax)
Machine Translation | IWSLT2017 German-English  | BLEU score | 29.83 | Adaptively Sparse Transformer (1.5-entmax)
Machine Translation | WMT2014 English-German    | BLEU score | 26.93 | Adaptively Sparse Transformer (alpha-entmax)
Machine Translation | WMT2014 English-German    | BLEU score | 25.89 | Adaptively Sparse Transformer (1.5-entmax)

Related Papers

A Translation of Probabilistic Event Calculus into Markov Decision Processes (2025-07-17)
Function-to-Style Guidance of LLMs for Code Translation (2025-07-15)
Speak2Sign3D: A Multi-modal Pipeline for English Speech to American Sign Language Animation (2025-07-09)
Pun Intended: Multi-Agent Translation of Wordplay with Contrastive Learning and Phonetic-Semantic Embeddings (2025-07-09)
Unconditional Diffusion for Generative Sequential Recommendation (2025-07-08)
GRAFT: A Graph-based Flow-aware Agentic Framework for Document-level Machine Translation (2025-07-04)
TransLaw: Benchmarking Large Language Models in Multi-Agent Simulation of the Collaborative Translation (2025-07-01)
CycleVAR: Repurposing Autoregressive Model for Unsupervised One-Step Image Translation (2025-06-29)