Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Controlling the Amount of Verbatim Copying in Abstractive Summarization

Kaiqiang Song, Bingqing Wang, Zhe Feng, Liu Ren, Fei Liu

2019-11-23 · Abstractive Text Summarization · Text Summarization · Language Modelling
Paper · PDF · Code (official)

Abstract

An abstract must not change the meaning of the original text. The single most effective way to achieve that is to increase the amount of copying while still allowing for text abstraction. Human editors can usually exercise control over copying, resulting in summaries that are more extractive than abstractive, or vice versa. However, it remains poorly understood whether modern neural abstractive summarizers can provide the same flexibility, i.e., learn from single reference summaries to generate multiple summary hypotheses with varying degrees of copying. In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones. We frame the task of summarization as language modeling and exploit alternative mechanisms to generate summary hypotheses. Our method allows for control over copying during both the training and decoding stages of a neural summarization model. Through extensive experiments, we demonstrate the effectiveness of our proposed method in controlling the amount of verbatim copying and achieve competitive results against strong baselines. Our analysis further reveals interesting and non-obvious findings.
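The degree of verbatim copying the abstract refers to can be quantified as the fraction of summary n-grams that appear verbatim in the source document: 1.0 for a purely extractive summary, lower values for more abstractive ones. A minimal sketch of such a measure (this is an illustration of the general idea, not the paper's own copy-control mechanism; the `copy_rate` function and its tokenization are assumptions):

```python
from collections import Counter


def ngrams(tokens, n):
    """Return the multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def copy_rate(source, summary, n=2):
    """Fraction of summary n-grams that also appear verbatim in the source.

    1.0 means the summary is fully extractive at the n-gram level;
    values near 0.0 indicate a highly abstractive summary.
    Hypothetical helper for illustration; whitespace tokenization assumed.
    """
    src = ngrams(source.lower().split(), n)
    summ = ngrams(summary.lower().split(), n)
    total = sum(summ.values())
    if total == 0:
        return 0.0
    # Count each summary n-gram at most as often as it occurs in the source.
    copied = sum(min(count, src[gram]) for gram, count in summ.items())
    return copied / total


source = "the quick brown fox jumps over the lazy dog"
print(copy_rate(source, "the quick brown fox sleeps"))  # 3 of 4 bigrams copied -> 0.75
```

Bigram (n=2) overlap is a common choice because it penalizes mere vocabulary reuse less than longer n-grams while still rewarding contiguous copied spans.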

Results

Task               | Dataset  | Metric  | Value | Model
-------------------|----------|---------|-------|------------------------
Text Summarization | GigaWord | ROUGE-1 | 39.19 | ControlCopying + BPNorm
Text Summarization | GigaWord | ROUGE-2 | 20.38 | ControlCopying + BPNorm
Text Summarization | GigaWord | ROUGE-L | 36.69 | ControlCopying + BPNorm
Text Summarization | GigaWord | ROUGE-1 | 39.08 | ControlCopying + SBWR
Text Summarization | GigaWord | ROUGE-2 | 20.47 | ControlCopying + SBWR
Text Summarization | GigaWord | ROUGE-L | 36.69 | ControlCopying + SBWR
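The ROUGE-N scores above measure n-gram overlap between a generated summary and a reference. A minimal sketch of ROUGE-N as an F1 score over n-gram multisets (an unofficial illustration, assuming simple whitespace tokenization; published numbers like those in the table come from the official ROUGE toolkit, which adds stemming and other preprocessing):

```python
from collections import Counter


def rouge_n_f1(candidate, reference, n=1):
    """Unofficial ROUGE-N F1 sketch: clipped n-gram multiset overlap.

    Illustrative only; the official ROUGE script applies stemming and
    other preprocessing that this sketch omits.
    """
    def grams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = grams(candidate), grams(reference)
    # Clip each candidate n-gram count by its count in the reference.
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


print(rouge_n_f1("the cat sat on the mat", "the cat lay on the mat", n=1))
```

ROUGE-L, also reported above, instead scores the longest common subsequence between candidate and reference, rewarding in-order (but not necessarily contiguous) matches.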

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)