Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Bottom-Up Abstractive Summarization

Sebastian Gehrmann, Yuntian Deng, Alexander M. Rush

2018-08-31 · EMNLP 2018
Tasks: Multi-Document Summarization, Abstractive Text Summarization, Document Summarization

Abstract

Neural network-based methods for abstractive summarization produce outputs that are more fluent than other techniques, but which can be poor at content selection. This work proposes a simple technique for addressing this issue: use a data-efficient content selector to over-determine phrases in a source document that should be part of the summary. We use this selector as a bottom-up attention step to constrain the model to likely phrases. We show that this approach improves the ability to compress text, while still generating fluent summaries. This two-step process is both simpler and higher performing than other end-to-end content selection models, leading to significant improvements on ROUGE for both the CNN-DM and NYT corpus. Furthermore, the content selector can be trained with as little as 1,000 sentences, making it easy to transfer a trained summarizer to a new domain.
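The two-step process described above can be sketched in plain Python: first, training labels for the content selector are derived by marking which source tokens also appear in the reference summary; second, at inference time, the selector's probabilities mask the model's copy attention so that only likely phrases can be copied. This is a minimal illustration, not the paper's implementation: the function names are invented, the containment check is a crude proxy for the paper's subsequence-alignment heuristic, and the 0.5 threshold is illustrative (the paper tunes its threshold on validation data).

```python
def selection_labels(source_tokens, summary_tokens):
    """Binary training labels for the content selector: a source token is
    labeled 1 ('select') if it also occurs in the reference summary.
    (Simple containment proxy; the paper aligns longer copied subsequences.)"""
    summary_vocab = set(summary_tokens)
    return [1 if tok in summary_vocab else 0 for tok in source_tokens]


def bottom_up_mask(copy_attention, selection_probs, threshold=0.5, eps=1e-10):
    """Bottom-up attention step: zero out copy attention on source tokens
    the selector scores below `threshold`, then renormalize so the masked
    distribution still sums to 1. Falls back to the unmasked attention if
    the mask removes everything."""
    masked = [a if p >= threshold else 0.0
              for a, p in zip(copy_attention, selection_probs)]
    total = sum(masked)
    if total < eps:  # nothing survives the mask: keep original attention
        return list(copy_attention)
    return [a / total for a in masked]
```

For example, with copy attention `[0.4, 0.3, 0.2, 0.1]` and selector probabilities `[0.9, 0.1, 0.8, 0.2]`, the mask zeroes positions 1 and 3 and renormalizes the rest, concentrating the copy distribution on the two selected tokens.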

Results

Task                          | Dataset          | Metric    | Value | Model
Text Generation               | Multi-News       | ROUGE-1   | 43.57 | CopyTransformer
Text Generation               | Multi-News       | ROUGE-2   | 14.03 | CopyTransformer
Text Generation               | Multi-News       | ROUGE-SU4 | 17.37 | CopyTransformer
Text Generation               | Multi-News       | ROUGE-1   | 42.8  | PG-BRNN
Text Generation               | Multi-News       | ROUGE-2   | 14.19 | PG-BRNN
Text Generation               | Multi-News       | ROUGE-SU4 | 16.75 | PG-BRNN
Text Summarization            | CNN / Daily Mail | ROUGE-1   | 41.22 | Bottom-Up Summarization
Text Summarization            | CNN / Daily Mail | ROUGE-2   | 18.68 | Bottom-Up Summarization
Text Summarization            | CNN / Daily Mail | ROUGE-L   | 38.34 | Bottom-Up Summarization
Text Summarization            | CNN / Daily Mail | PPL       | 32.75 | Bottom-Up Sum
Text Summarization            | CNN / Daily Mail | ROUGE-1   | 41.22 | Bottom-Up Sum
Text Summarization            | CNN / Daily Mail | ROUGE-2   | 18.68 | Bottom-Up Sum
Text Summarization            | CNN / Daily Mail | ROUGE-L   | 38.34 | Bottom-Up Sum
Text Summarization            | Multi-News       | ROUGE-1   | 43.57 | CopyTransformer
Text Summarization            | Multi-News       | ROUGE-2   | 14.03 | CopyTransformer
Text Summarization            | Multi-News       | ROUGE-SU4 | 17.37 | CopyTransformer
Text Summarization            | Multi-News       | ROUGE-1   | 42.8  | PG-BRNN
Text Summarization            | Multi-News       | ROUGE-2   | 14.19 | PG-BRNN
Text Summarization            | Multi-News       | ROUGE-SU4 | 16.75 | PG-BRNN
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-1  | 41.22 | Bottom-Up Summarization
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-2  | 18.68 | Bottom-Up Summarization
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-L  | 38.34 | Bottom-Up Summarization
Document Summarization        | CNN / Daily Mail | PPL       | 32.75 | Bottom-Up Sum
Document Summarization        | CNN / Daily Mail | ROUGE-1   | 41.22 | Bottom-Up Sum
Document Summarization        | CNN / Daily Mail | ROUGE-2   | 18.68 | Bottom-Up Sum
Document Summarization        | CNN / Daily Mail | ROUGE-L   | 38.34 | Bottom-Up Sum

Related Papers

GenerationPrograms: Fine-grained Attribution with Executable Programs (2025-06-17)
Arctic Long Sequence Training: Scalable And Efficient Training For Multi-Million Token Sequences (2025-06-16)
Improving Fairness of Large Language Models in Multi-document Summarization (2025-06-09)
Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs (2025-06-03)
ARC: Argument Representation and Coverage Analysis for Zero-Shot Long Document Summarization with Instruction Following LLMs (2025-05-29)
Ask, Retrieve, Summarize: A Modular Pipeline for Scientific Literature Summarization (2025-05-22)
Power-Law Decay Loss for Large Language Model Finetuning: Focusing on Information Sparsity to Enhance Generation Quality (2025-05-22)
Hallucinate at the Last in Long Response Generation: A Case Study on Long Document Summarization (2025-05-21)