
A Deep Reinforced Model for Abstractive Summarization

Romain Paulus, Caiming Xiong, Richard Socher

2017-05-11 · ICLR 2018
Tasks: Reinforcement Learning, Abstractive Text Summarization, Text Summarization, Prediction

Abstract

Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries, however, these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL). Models trained only with supervised learning often exhibit "exposure bias": they assume ground truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL, the resulting summaries become more readable. We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries.
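The mixed training objective described above can be illustrated with a short sketch. The PyTorch-style snippet below is not the authors' code: tensor shapes, argument names, and the reward source are assumptions, and the RL term is written in the self-critical form (sampled-sequence reward against a greedy-decode baseline) that the paper's training method builds on. The mixing weight gamma is left as a required argument rather than guessing the paper's value.

```python
import torch
import torch.nn.functional as F

def mixed_loss(logits, targets, sampled_ids, sample_log_probs,
               sampled_reward, baseline_reward, gamma, pad_id=0):
    """Sketch of a mixed ML + RL summarization objective (assumed shapes).

    logits:            (batch, T, vocab) decoder outputs under teacher forcing
    targets:           (batch, T) ground-truth summary token ids
    sampled_ids:       (batch, T) token ids sampled from the model
    sample_log_probs:  (batch, T) log p(sampled token) at each decoding step
    sampled_reward:    (batch,) e.g. ROUGE of the sampled summary
    baseline_reward:   (batch,) e.g. ROUGE of a greedy-decoded summary
    gamma:             mixing weight between the RL and ML terms
    """
    # Maximum-likelihood term: standard token-level cross-entropy.
    ml = F.cross_entropy(logits.transpose(1, 2), targets, ignore_index=pad_id)

    # RL term: self-critical policy gradient. Sampled sequences that beat
    # the greedy baseline have their token log-probabilities reinforced.
    mask = (sampled_ids != pad_id).float()
    seq_log_prob = (sample_log_probs * mask).sum(dim=1)
    advantage = (sampled_reward - baseline_reward).detach()  # no grad through rewards
    rl = -(advantage * seq_log_prob).mean()

    return gamma * rl + (1.0 - gamma) * ml
```

In this formulation, setting gamma close to 1 emphasizes the sequence-level reward while the ML term keeps the output distribution anchored to fluent, human-written summaries, which is the readability effect the abstract describes.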

Results

Task                   | Dataset                       | Metric  | Value | Model
Text Summarization     | CNN / Daily Mail (Anonymized) | ROUGE-1 | 39.87 | ML+RL, with intra-attention
Text Summarization     | CNN / Daily Mail (Anonymized) | ROUGE-2 | 15.82 | ML+RL, with intra-attention
Text Summarization     | CNN / Daily Mail (Anonymized) | ROUGE-L | 36.9  | ML+RL, with intra-attention
Text Summarization     | CNN / Daily Mail              | ROUGE-1 | 39.87 | ML + RL (Paulus et al., 2017)
Text Summarization     | CNN / Daily Mail              | ROUGE-2 | 15.82 | ML + RL (Paulus et al., 2017)
Text Summarization     | CNN / Daily Mail              | ROUGE-L | 36.9  | ML + RL (Paulus et al., 2017)
Text Summarization     | CNN / Daily Mail              | ROUGE-1 | 38.3  | ML + Intra-Attention (Paulus et al., 2017)
Text Summarization     | CNN / Daily Mail              | ROUGE-2 | 14.81 | ML + Intra-Attention (Paulus et al., 2017)
Text Summarization     | CNN / Daily Mail              | ROUGE-L | 35.49 | ML + Intra-Attention (Paulus et al., 2017)
Document Summarization | CNN / Daily Mail              | ROUGE-1 | 39.87 | ML + RL (Paulus et al., 2017)
Document Summarization | CNN / Daily Mail              | ROUGE-2 | 15.82 | ML + RL (Paulus et al., 2017)
Document Summarization | CNN / Daily Mail              | ROUGE-L | 36.9  | ML + RL (Paulus et al., 2017)
Document Summarization | CNN / Daily Mail              | ROUGE-1 | 38.3  | ML + Intra-Attention (Paulus et al., 2017)
Document Summarization | CNN / Daily Mail              | ROUGE-2 | 14.81 | ML + Intra-Attention (Paulus et al., 2017)
Document Summarization | CNN / Daily Mail              | ROUGE-L | 35.49 | ML + Intra-Attention (Paulus et al., 2017)

Related Papers

Multi-Strategy Improved Snake Optimizer Accelerated CNN-LSTM-Attention-Adaboost for Trajectory Prediction (2025-07-21)
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)