
Summary Level Training of Sentence Rewriting for Abstractive Summarization

Sanghwan Bae, Taeuk Kim, Jihoon Kim, Sang-goo Lee

2019-09-19 · WS 2019 · Tasks: Reinforcement Learning, Abstractive Text Summarization, Extractive Text Summarization, Natural Language Understanding, Sentence Rewriting

Abstract

As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of first extracting salient sentences from a document and then paraphrasing the selected ones to generate a summary. However, existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between the training objective and the evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its ability in natural language understanding. In extensive experiments, we show that the combination of our proposed model and training procedure achieves new state-of-the-art performance on both the CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on the DUC-2002 test set.
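To make the summary-level reward idea concrete, below is a minimal REINFORCE sketch: the extractor samples a set of sentences, and the reward is the ROUGE score of the whole assembled summary against the reference, not a sum of per-sentence scores. This is an illustration, not the authors' implementation; `reinforce_step`, `summary_reward`, and the per-sentence `sent_logits` are hypothetical names, and Google's `rouge-score` package stands in for whichever ROUGE implementation the paper used.

```python
import torch
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def summary_reward(summary: str, reference: str) -> float:
    # Summary-level reward: ROUGE-L F1 of the *whole* summary against
    # the reference, rather than a sum of per-sentence scores.
    return _scorer.score(reference, summary)["rougeL"].fmeasure

def reinforce_step(sent_logits: torch.Tensor, doc_sents, reference, n_extract=3):
    # sent_logits: per-sentence scores from a (hypothetical) extractor,
    # with requires_grad=True so the returned loss can be backpropagated.
    probs = torch.softmax(sent_logits, dim=-1)
    idx = torch.multinomial(probs, n_extract)        # sample an extraction (action)
    summary = " ".join(doc_sents[i] for i in idx.tolist())
    reward = summary_reward(summary, reference)      # scalar in [0, 1]
    log_prob = torch.log(probs[idx]).sum()
    return -reward * log_prob                        # REINFORCE loss: maximize expected reward
```

In practice a baseline is usually subtracted from the reward to reduce gradient variance; the sketch omits that for brevity.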

Results

Task | Dataset | Metric | Value | Model
Text Summarization | CNN / Daily Mail | ROUGE-1 | 41.9 | BERT-ext + abs + RL + rerank
Text Summarization | CNN / Daily Mail | ROUGE-2 | 19.08 | BERT-ext + abs + RL + rerank
Text Summarization | CNN / Daily Mail | ROUGE-L | 39.64 | BERT-ext + abs + RL + rerank
Text Summarization | CNN / Daily Mail | ROUGE-1 | 42.76 | BERT-ext + RL
Text Summarization | CNN / Daily Mail | ROUGE-2 | 19.87 | BERT-ext + RL
Text Summarization | CNN / Daily Mail | ROUGE-L | 39.11 | BERT-ext + RL
Extractive Text Summarization | CNN / Daily Mail | ROUGE-1 | 42.76 | BERT-ext + RL
Extractive Text Summarization | CNN / Daily Mail | ROUGE-2 | 19.87 | BERT-ext + RL
Extractive Text Summarization | CNN / Daily Mail | ROUGE-L | 39.11 | BERT-ext + RL
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-1 | 41.9 | BERT-ext + abs + RL + rerank
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-2 | 19.08 | BERT-ext + abs + RL + rerank
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-L | 39.64 | BERT-ext + abs + RL + rerank
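The ROUGE values above are, as is conventional for these leaderboards, F1 scores on the standard test splits. A quick way to compute comparable metrics is sketched below using Google's `rouge-score` package; published numbers are typically produced with the official ROUGE-1.5.5 script, so this package's output may differ slightly.

```python
from rouge_score import rouge_scorer

# Compute ROUGE-1/2/L precision, recall, and F1 for one candidate summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
reference = "the cat sat on the mat ."
candidate = "a cat was sitting on the mat ."
for name, s in scorer.score(reference, candidate).items():
    print(f"{name}: P={s.precision:.3f} R={s.recall:.3f} F1={s.fmeasure:.3f}")
```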

Related Papers

CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Autonomous Resource Management in Microservice Systems via Reinforcement Learning (2025-07-17)