Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Closed-Book Training to Improve Summarization Encoder Memory

Yichen Jiang, Mohit Bansal

Published 12 September 2018 · EMNLP 2018
Tasks: Abstractive Text Summarization · Memorization
Paper · PDF

Abstract

A good neural sequence-to-sequence summarization model should have a strong encoder that can distill and memorize the important information from long input texts so that the decoder can generate salient summaries based on the encoder's memory. In this paper, we aim to improve the memorization capabilities of the encoder of a pointer-generator model by adding an additional 'closed-book' decoder without attention and pointer mechanisms. Such a decoder forces the encoder to be more selective in the information encoded in its memory state because the decoder can't rely on the extra information provided by the attention and possibly copy modules, and hence improves the entire model. On the CNN/Daily Mail dataset, our 2-decoder model outperforms the baseline significantly in terms of ROUGE and METEOR metrics, for both cross-entropy and reinforced setups (and on human evaluation). Moreover, our model also achieves higher scores in a test-only DUC-2002 generalizability setup. We further present a memory ability test, two saliency metrics, as well as several sanity-check ablations (based on fixed-encoder, gradient-flow cut, and model capacity) to prove that the encoder of our 2-decoder model does in fact learn stronger memory representations than the baseline encoder.
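The core idea — a shared encoder trained jointly by an attention/pointer decoder and an attention-free "closed-book" decoder — can be sketched as a toy forward pass. This is a minimal illustrative sketch, not the authors' implementation: the tanh-RNN cells, dot-product attention, toy dimensions, BOS id 0, and the mixing weight `lam` are all assumptions; the paper's actual model is a pointer-generator with coverage and separately parameterized LSTM decoders.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 20, 16  # toy vocabulary and hidden sizes (assumed)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Random, untrained parameters: one shared encoder, two decoders.
E = rng.normal(0, 0.1, (V, D))            # shared embedding table
W_enc = rng.normal(0, 0.1, (2 * D, D))    # encoder recurrence
W_dec_a = rng.normal(0, 0.1, (2 * D, D))  # attention decoder recurrence
W_dec_c = rng.normal(0, 0.1, (2 * D, D))  # closed-book decoder recurrence
W_out_a = rng.normal(0, 0.1, (2 * D, V))  # attention decoder output (state + context)
W_out_c = rng.normal(0, 0.1, (D, V))      # closed-book output (state only)

def encode(src):
    """Simple tanh-RNN encoder: returns per-step states and final memory."""
    h = np.zeros(D)
    states = []
    for tok in src:
        h = np.tanh(np.concatenate([E[tok], h]) @ W_enc)
        states.append(h)
    return np.stack(states), h

def attn_step(h_dec, enc_states):
    """Dot-product attention over encoder states, then a vocab distribution."""
    ctx = softmax(enc_states @ h_dec) @ enc_states
    return softmax(np.concatenate([h_dec, ctx]) @ W_out_a)

def closed_book_step(h_dec):
    """No attention: the decoder sees only its own state, so the encoder's
    final memory must carry everything needed to predict the summary."""
    return softmax(h_dec @ W_out_c)

def joint_loss(src, tgt, lam=0.5):
    """Mix of the two decoders' cross-entropies; both start from the
    encoder's final memory state, so its gradient comes from both."""
    enc_states, h_final = encode(src)
    h_a, h_c = h_final, h_final
    prev = 0  # assumed BOS token id
    loss_a = loss_c = 0.0
    for tok in tgt:
        h_a = np.tanh(np.concatenate([E[prev], h_a]) @ W_dec_a)
        h_c = np.tanh(np.concatenate([E[prev], h_c]) @ W_dec_c)
        loss_a -= np.log(attn_step(h_a, enc_states)[tok])
        loss_c -= np.log(closed_book_step(h_c)[tok])
        prev = tok
    n = len(tgt)
    return lam * loss_a / n + (1 - lam) * loss_c / n

loss = joint_loss(src=[1, 5, 7, 3], tgt=[5, 3])
```

At test time only the attention/pointer decoder is used; the closed-book decoder exists purely as a training signal that penalizes an encoder whose memory state is not informative on its own.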

Results

Task | Dataset | Metric | Value | Model
Text Summarization | CNN / Daily Mail | ROUGE-1 | 40.66 | RL + pg + cbdec
Text Summarization | CNN / Daily Mail | ROUGE-2 | 17.87 | RL + pg + cbdec
Text Summarization | CNN / Daily Mail | ROUGE-L | 37.06 | RL + pg + cbdec
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-1 | 40.66 | RL + pg + cbdec
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-2 | 17.87 | RL + pg + cbdec
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-L | 37.06 | RL + pg + cbdec

Related Papers

What Should LLMs Forget? Quantifying Personal Data in LLMs for Right-to-Be-Forgotten Requests (2025-07-15)
Reasoning or Memorization? Unreliable Results of Reinforcement Learning Due to Data Contamination (2025-07-14)
Entropy-Memorization Law: Evaluating Memorization Difficulty of Data in LLMs (2025-07-08)
MMReason: An Open-Ended Multi-Modal Multi-Step Reasoning Benchmark for MLLMs Toward AGI (2025-06-30)
Listener-Rewarded Thinking in VLMs for Image Preferences (2025-06-28)
Where to find Grokking in LLM Pretraining? Monitor Memorization-to-Generalization without Test (2025-06-26)
Counterfactual Influence as a Distributional Quantity (2025-06-25)
Leaner Training, Lower Leakage: Revisiting Memorization in LLM Fine-Tuning with LoRA (2025-06-25)