Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Fastformer: Additive Attention Can Be All You Need

Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang, Xing Xie

Published 2021-08-20 · Tasks: Text Classification, News Recommendation, Text Summarization
Links: Paper · PDF · Code (official and community implementations)

Abstract

Transformer is a powerful model for text understanding. However, it is inefficient due to its quadratic complexity with respect to input sequence length. Although there are many methods for accelerating Transformers, they are still either inefficient on long sequences or not effective enough. In this paper, we propose Fastformer, an efficient Transformer model based on additive attention. In Fastformer, instead of modeling the pair-wise interactions between tokens, we first use an additive attention mechanism to model global contexts, and then further transform each token representation based on its interaction with the global context representations. In this way, Fastformer can achieve effective context modeling with linear complexity. Extensive experiments on five datasets show that Fastformer is much more efficient than many existing Transformer models, while achieving comparable or even better long-text modeling performance.
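The global-context mechanism the abstract describes can be sketched roughly as follows. This is a minimal single-head NumPy sketch, not the paper's implementation: the projections Q, K, V and the learned vectors w_q, w_k stand in for the paper's learned parameters, and details such as multi-head splitting and the final output transformation are omitted. The key point is that every step is a sum or element-wise product over n tokens, so the cost is O(n·d) rather than the O(n²·d) of pairwise self-attention.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fastformer_attention(Q, K, V, w_q, w_k):
    """Additive-attention sketch in the spirit of Fastformer.

    Q, K, V: (n, d) token projections; w_q, w_k: (d,) learned
    attention vectors (hypothetical names). Linear in n.
    """
    d = Q.shape[1]
    # Global query: additive attention pools all query vectors into one.
    alpha = softmax(Q @ w_q / np.sqrt(d))   # (n,) attention weights
    q_global = alpha @ Q                    # (d,) global query vector
    # Mix the global query into each key element-wise (no pairwise scores).
    P = K * q_global                        # (n, d)
    # Global key: additive attention pools the mixed key vectors.
    beta = softmax(P @ w_k / np.sqrt(d))    # (n,)
    k_global = beta @ P                     # (d,)
    # Modulate each value by the global key; residual with the queries.
    U = V * k_global                        # (n, d)
    return U + Q
```

Because each token interacts only with the two pooled global vectors, doubling the sequence length roughly doubles the cost instead of quadrupling it.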

Results

Task               | Dataset                        | Metric  | Value | Model
-------------------|--------------------------------|---------|-------|----------
Text Summarization | Pubmed                         | ROUGE-1 | 38.09 | Fastformer
Text Summarization | Pubmed                         | ROUGE-2 | 15.44 | Fastformer
Text Summarization | Pubmed                         | ROUGE-L | 34.81 | Fastformer
Text Summarization | CNN / Daily Mail (Anonymized)  | ROUGE-1 | 38.54 | Fastformer
Text Summarization | CNN / Daily Mail (Anonymized)  | ROUGE-2 | 16.22 | Fastformer
Text Summarization | CNN / Daily Mail (Anonymized)  | ROUGE-L | 36.21 | Fastformer

Related Papers

IP2: Entity-Guided Interest Probing for Personalized News Recommendation (2025-07-18)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
Modeling Code: Is Text All You Need? (2025-07-15)
All Eyes, no IMU: Learning Flight Attitude from Vision Alone (2025-07-15)
GNN-CNN: An Efficient Hybrid Model of Convolutional and Graph Neural Networks for Text Representation (2025-07-10)
Is Diversity All You Need for Scalable Robotic Manipulation? (2025-07-08)
DESIGN AND IMPLEMENTATION OF ONLINE CLEARANCE REPORT. (2025-07-07)