Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Text Summarization with Pretrained Encoders

Yang Liu, Mirella Lapata

2019-08-22 · IJCNLP 2019
Tasks: Abstractive Text Summarization · Extractive Text Summarization · Text Summarization · Document Summarization · Extractive Document Summarization
Links: Paper · PDF · Code (official implementation, plus community implementations)

Abstract

Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm
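
To make the extractive architecture described in the abstract concrete, below is a minimal sketch in PyTorch: BERT encodes the document, the [CLS] vector inserted before each sentence serves as that sentence's representation, and a short stack of inter-sentence Transformer layers scores sentences for extraction. Class names, layer counts, and hyperparameters here are illustrative assumptions rather than the authors' code; the official implementation is at https://github.com/nlpyang/PreSumm.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class BertSumExtSketch(nn.Module):
    """Illustrative sketch of a BERT-based extractive summarizer
    (not the authors' code; see the official PreSumm repository)."""

    def __init__(self, num_inter_layers=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        d = self.bert.config.hidden_size
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        # Inter-sentence Transformer layers stacked on top of BERT.
        self.inter = nn.TransformerEncoder(layer, num_layers=num_inter_layers)
        self.score = nn.Linear(d, 1)

    def forward(self, input_ids, attention_mask, cls_positions):
        # cls_positions: index of the [CLS] token inserted before each sentence.
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        batch_idx = torch.arange(hidden.size(0), device=hidden.device).unsqueeze(1)
        sent_vecs = hidden[batch_idx, cls_positions]   # one vector per sentence
        sent_vecs = self.inter(sent_vecs)              # contextualize across sentences
        return torch.sigmoid(self.score(sent_vecs)).squeeze(-1)  # per-sentence score
```

The two-optimizer fine-tuning schedule can be sketched the same way: the pretrained encoder gets a small learning rate with a long warmup, while the randomly initialized decoder gets a larger rate with a shorter warmup. The learning rates and warmup steps below are illustrative placeholders, not the paper's exact per-variant settings.

```python
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

def noam(warmup_steps):
    # Noam-style schedule: linear warmup, then inverse-square-root decay.
    return lambda step: min((step + 1) ** -0.5, (step + 1) * warmup_steps ** -1.5)

def build_optimizers(model, lr_enc=2e-3, lr_dec=0.1,
                     warmup_enc=20_000, warmup_dec=10_000):
    # Split parameters: the pretrained BERT encoder gets a small learning
    # rate and a long warmup; the untrained decoder gets the opposite.
    # All values here are assumptions for illustration.
    enc = [p for n, p in model.named_parameters() if n.startswith("bert")]
    dec = [p for n, p in model.named_parameters() if not n.startswith("bert")]
    opt_enc, opt_dec = Adam(enc, lr=lr_enc), Adam(dec, lr=lr_dec)
    sched_enc = LambdaLR(opt_enc, noam(warmup_enc))
    sched_dec = LambdaLR(opt_dec, noam(warmup_dec))
    return (opt_enc, sched_enc), (opt_dec, sched_dec)
```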

Results

Task                           | Dataset          | Metric  | Value | Model
Text Summarization            | X-Sum            | ROUGE-1 | 38.81 | BertSumExtAbs
Text Summarization            | X-Sum            | ROUGE-2 | 16.50 | BertSumExtAbs
Text Summarization            | X-Sum            | ROUGE-L | 31.27 | BertSumExtAbs
Text Summarization            | CNN / Daily Mail | ROUGE-1 | 42.13 | BertSumExtAbs
Text Summarization            | CNN / Daily Mail | ROUGE-2 | 19.60 | BertSumExtAbs
Text Summarization            | CNN / Daily Mail | ROUGE-L | 39.18 | BertSumExtAbs
Text Summarization            | CNN / Daily Mail | ROUGE-1 | 43.85 | BertSumExt
Text Summarization            | CNN / Daily Mail | ROUGE-2 | 20.34 | BertSumExt
Text Summarization            | CNN / Daily Mail | ROUGE-L | 39.90 | BertSumExt
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-1 | 42.13 | BertSumExtAbs
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-2 | 19.60 | BertSumExtAbs
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-L | 39.18 | BertSumExtAbs
Document Summarization        | CNN / Daily Mail | ROUGE-1 | 43.85 | BertSumExt
Document Summarization        | CNN / Daily Mail | ROUGE-2 | 20.34 | BertSumExt
Document Summarization        | CNN / Daily Mail | ROUGE-L | 39.90 | BertSumExt
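
For readers who want to compute comparable numbers for their own outputs, here is a minimal evaluation sketch. The rouge-score package used below is an assumption for illustration; the paper's original results were computed with the standard ROUGE toolkit, so small differences are expected.

```python
# Illustrative only: the rouge-score package (pip install rouge-score)
# approximates the ROUGE-1/2/L F1 metrics reported in the table above.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="police killed the gunman",           # reference summary
    prediction="the gunman was shot by police",  # system summary
)
for name, result in scores.items():
    print(f"{name}: F1 = {result.fmeasure:.4f}")
```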

Related Papers

LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
GenerationPrograms: Fine-grained Attribution with Executable Programs (2025-06-17)
Arctic Long Sequence Training: Scalable And Efficient Training For Multi-Million Token Sequences (2025-06-16)
On-the-Fly Adaptive Distillation of Transformer to Dual-State Linear Attention (2025-06-11)
Improving large language models with concept-aware fine-tuning (2025-06-09)
Improving Fairness of Large Language Models in Multi-document Summarization (2025-06-09)
Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs (2025-06-03)
ARC: Argument Representation and Coverage Analysis for Zero-Shot Long Document Summarization with Instruction Following LLMs (2025-05-29)