Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


End-to-End Long Document Summarization using Gradient Caching

Rohit Saxena, Hao Tang, Frank Keller

2025-01-03 · Long-Form Narrative Summarization · Document Summarization

Abstract

Training transformer-based encoder-decoder models for long document summarization poses a significant challenge due to the quadratic memory consumption during training. Several approaches have been proposed to extend the input length at test time, but training with these approaches is still difficult, requiring truncation of input documents and causing a mismatch between training and test conditions. In this work, we propose CachED (Gradient $\textbf{Cach}$ing for $\textbf{E}$ncoder-$\textbf{D}$ecoder models), an approach that enables end-to-end training of existing transformer-based encoder-decoder models, using the entire document without truncation. Specifically, we apply non-overlapping sliding windows to input documents, followed by fusion in the decoder. During backpropagation, the gradients are cached at the decoder and are passed through the encoder in chunks by re-computing the hidden vectors, similar to gradient checkpointing. In experiments on long document summarization, we extend BART to CachED BART, processing more than 500K tokens during training and achieving superior performance without using any additional parameters.
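The two-phase backpropagation the abstract describes can be illustrated on a toy model. The sketch below is an assumption-laden simplification (a linear layer stands in for the BART encoder, a second linear layer for the decoder, and the "fusion" is a plain concatenation); it is not the authors' implementation. Phase one encodes each chunk without building a computation graph and caches the gradient of the loss with respect to the fused hidden states; phase two re-encodes each chunk with the graph enabled and pushes the cached gradient slice through it, so peak memory scales with one chunk rather than the whole document:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
enc = nn.Linear(4, 4)   # stand-in for the encoder (hypothetical toy module)
dec = nn.Linear(4, 1)   # stand-in for the decoder + loss head

x = torch.randn(6, 4)                       # "document": 6 tokens
chunks = [x[0:2], x[2:4], x[4:6]]           # non-overlapping windows

# Phase 1: encode chunks without a graph, fuse, run decoder, cache grad at h.
with torch.no_grad():
    h = torch.cat([enc(c) for c in chunks], dim=0)
h.requires_grad_(True)                      # h is now a leaf tensor
loss = dec(h).sum()
loss.backward()                             # fills h.grad; encoder untouched
grad_h = h.grad

# Phase 2: recompute each chunk WITH a graph and backprop its cached slice.
offset = 0
for c in chunks:
    out = enc(c)                            # re-computed hidden vectors
    out.backward(grad_h[offset:offset + out.shape[0]])
    offset += out.shape[0]

cached_grad = enc.weight.grad.clone()

# Sanity check: matches ordinary end-to-end backprop on the full input.
enc.zero_grad(); dec.zero_grad()
dec(enc(x)).sum().backward()
print(torch.allclose(cached_grad, enc.weight.grad, atol=1e-6))
```

Because the encoder forward pass in phase one runs under `torch.no_grad()`, no activations are stored for it; only one chunk's activations live in memory at a time during phase two, which is the same trade (compute for memory) made by gradient checkpointing.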

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Text Summarization | BookSum | BERTScore (F1) | 54.4 | CachED (BART Large) |
| Text Summarization | BookSum | BERTScore (F1) | 52.4 | SLED (BART Large) |
| Text Summarization | BookSum | BERTScore (F1) | 51.5 | Unlimiformer (BART Base) |
| Text Summarization | BookSum | BERTScore (F1) | 47.24 | Zero-Shot (GPT-4o) |
| Text Summarization | BookSum | ROUGE-1 | 20.3 | Zero-Shot (GPT-4o) |
| Text Summarization | BookSum | ROUGE-2 | 3.5 | Zero-Shot (GPT-4o) |
| Text Summarization | BookSum | ROUGE-L | 17.68 | Zero-Shot (GPT-4o) |
| Text Summarization | SummScreen | BERTScore (F1) | 61.59 | CachED (BART Large) |
| Text Summarization | SummScreen | BERTScore (F1) | 59.9 | SLED (BART Large) |
| Text Summarization | SummScreen | BERTScore (F1) | 58.5 | Unlimiformer (BART Base) |
| Text Summarization | MENSA | BERTScore (F1) | 64.6 | CachED (BART Large) |
| Text Summarization | MENSA | BERTScore (F1) | 58.7 | Unlimiformer (BART Base) |
| Text Summarization | MENSA | BERTScore (F1) | 58.3 | SLED (BART Large) |
| Text Summarization | MENSA | BERTScore (F1) | 52.8 | Zero-Shot (GPT-4o) |

Related Papers

GenerationPrograms: Fine-grained Attribution with Executable Programs (2025-06-17)
Arctic Long Sequence Training: Scalable And Efficient Training For Multi-Million Token Sequences (2025-06-16)
Improving Fairness of Large Language Models in Multi-document Summarization (2025-06-09)
NexusSum: Hierarchical LLM Agents for Long-Form Narrative Summarization (2025-05-30)
ARC: Argument Representation and Coverage Analysis for Zero-Shot Long Document Summarization with Instruction Following LLMs (2025-05-29)
Ask, Retrieve, Summarize: A Modular Pipeline for Scientific Literature Summarization (2025-05-22)
Hallucinate at the Last in Long Response Generation: A Case Study on Long Document Summarization (2025-05-21)
Document Attribution: Examining Citation Relationships using Large Language Models (2025-05-09)