Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Investigating Efficiently Extending Transformers for Long Input Summarization

Jason Phang, Yao Zhao, Peter J. Liu

2022-08-08 · 16k · Text Summarization · Long-range modeling

Paper · PDF · Code (official) · Code

Abstract

While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train.
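The key architectural idea above — tokens attend only within local blocks (whose boundaries are staggered across layers) plus a small set of global encoder tokens that attend over the whole sequence — can be sketched in a few lines of NumPy. This is an illustrative single-head sketch based on the abstract's description, not the paper's actual implementation; the function name, the rotation-based staggering, and all shapes are assumptions for the example.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def attend(q, k, v):
    # Plain scaled dot-product attention.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def staggered_block_global_attention(x, g, block_size, shift=0):
    """One attention pass in the style the abstract describes (sketch).

    x: (seq_len, d) input token representations
    g: (n_global, d) global encoder tokens
    Local tokens attend within their block plus to all global tokens;
    global tokens attend over the full sequence. A nonzero `shift`
    staggers block boundaries (vary it from layer to layer).
    """
    seq_len, d = x.shape
    assert seq_len % block_size == 0
    # Global tokens see everything (globals + all input tokens).
    g_kv = np.vstack([g, x])
    g_out = attend(g, g_kv, g_kv)
    # Stagger block boundaries by rotating the sequence.
    xr = np.roll(x, shift, axis=0)
    out = np.empty_like(xr)
    for s in range(0, seq_len, block_size):
        blk = xr[s:s + block_size]
        kv = np.vstack([g, blk])  # local block + global tokens only
        out[s:s + block_size] = attend(blk, kv, kv)
    # Undo the rotation so outputs line up with the input order.
    return np.roll(out, -shift, axis=0), g_out
```

Because each block's attention is O(block_size²) rather than O(seq_len²), cost grows linearly in sequence length, which is what makes 16K-token inputs tractable without model parallelism.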

Results

Task | Dataset | Metric | Value | Model
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-1 | 50 | Pegasus-X
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-2 | 21.8 | Pegasus-X
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-L | 44.6 | Pegasus-X
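The ROUGE-N scores in the table measure n-gram overlap between a generated summary and a reference. Published numbers come from the standard ROUGE package (with stemming and other normalization); the sketch below is only a minimal recall-oriented illustration of what ROUGE-N counts, with a hypothetical function name.

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    """Recall-only ROUGE-N sketch: fraction of the reference's n-grams
    that also appear in the candidate (clipped by count)."""
    def ngrams(text, n):
        tokens = text.split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    return overlap / max(sum(ref.values()), 1)
```

For example, `rouge_n("the cat sat", "the cat sat on the mat", n=1)` matches 3 of the reference's 6 unigrams, giving 0.5. ROUGE-L, also reported above, instead scores the longest common subsequence rather than fixed-length n-grams.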

Related Papers

- LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
- U-RWKV: Lightweight medical image segmentation with direction-adaptive RWKV (2025-07-15)
- LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models (2025-07-14)
- MambaFusion: Height-Fidelity Dense Global Fusion for Multi-modal 3D Object Detection (2025-07-06)
- UniCode$^2$: Cascaded Large-scale Codebooks for Unified Multimodal Understanding and Generation (2025-06-25)
- MSTAR: Box-free Multi-query Scene Text Retrieval with Attention Recycling (2025-06-12)
- Med-URWKV: Pure RWKV With ImageNet Pre-training For Medical Image Segmentation (2025-06-12)
- On-the-Fly Adaptive Distillation of Transformer to Dual-State Linear Attention (2025-06-11)