Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


An Editorial Network for Enhanced Document Summarization

Edward Moroshko, Guy Feigenblat, Haggai Roitman, David Konopnicki

2019-02-27 | WS 2019 | Tasks: Abstractive Text Summarization, Document Summarization

Abstract

We suggest a new idea, the Editorial Network: a mixed extractive-abstractive summarization approach, applied as a post-processing step over a given sequence of extracted sentences. Our network tries to imitate the decision process of a human editor during summarization. Within such a process, each extracted sentence may be either kept untouched, rephrased, or completely rejected. We further suggest an effective way of training the "editor" based on a novel soft-labeling approach. Using the CNN/DailyMail dataset, we demonstrate the effectiveness of our approach compared to state-of-the-art extractive-only and abstractive-only baseline methods.
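The per-sentence decision process the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the editor's action probabilities are assumed to come from a trained network, and `rephrase_fn` stands in for an abstractive rewriter.

```python
# Hypothetical sketch of the editor's keep/rephrase/reject loop.
# `action_probs` and `rephrase_fn` are assumed inputs, not the paper's API.
ACTIONS = ("keep", "rephrase", "reject")

def edit_summary(extracted, action_probs, rephrase_fn):
    """Apply an editorial decision to each extracted sentence.

    extracted    -- sentences produced by an extractive summarizer
    action_probs -- per-sentence probabilities over (keep, rephrase, reject)
    rephrase_fn  -- abstractive rewriter used when "rephrase" is chosen
    """
    summary = []
    for sent, probs in zip(extracted, action_probs):
        action = ACTIONS[max(range(3), key=lambda i: probs[i])]
        if action == "keep":
            summary.append(sent)          # sentence kept untouched
        elif action == "rephrase":
            summary.append(rephrase_fn(sent))  # sentence rewritten
        # "reject": sentence is dropped entirely
    return summary
```

In the real model the three-way choice would be trained with the soft-labeling scheme the abstract mentions; here the argmax over action probabilities simply mimics the editor's final decision.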

Results

Task                          | Dataset          | Metric  | Value | Model
Text Summarization            | CNN / Daily Mail | ROUGE-1 | 41.42 | EditNet
Text Summarization            | CNN / Daily Mail | ROUGE-2 | 19.03 | EditNet
Text Summarization            | CNN / Daily Mail | ROUGE-L | 38.36 | EditNet
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-1 | 41.42 | EditNet
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-2 | 19.03 | EditNet
Abstractive Text Summarization | CNN / Daily Mail | ROUGE-L | 38.36 | EditNet
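The ROUGE scores in the table are n-gram overlap metrics between a candidate summary and a reference. As a rough illustration only, a simplified ROUGE-N F1 can be computed as below; the official scorer additionally handles stemming, sentence splitting (for ROUGE-L), and multiple references, so this sketch will not reproduce reported numbers.

```python
# Simplified ROUGE-N F1 on whitespace tokens -- an illustration of the
# metric family used in the results table, not the official implementation.
from collections import Counter

def rouge_n_f1(candidate: str, reference: str, n: int = 1) -> float:
    """F1 over overlapping n-grams of two texts."""
    def ngrams(text):
        toks = text.split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n_f1("the cat sat", "the cat ran")` shares two of three unigrams on each side, giving precision = recall = F1 = 2/3.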

Related Papers

GenerationPrograms: Fine-grained Attribution with Executable Programs (2025-06-17)
Arctic Long Sequence Training: Scalable And Efficient Training For Multi-Million Token Sequences (2025-06-16)
Improving Fairness of Large Language Models in Multi-document Summarization (2025-06-09)
Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs (2025-06-03)
ARC: Argument Representation and Coverage Analysis for Zero-Shot Long Document Summarization with Instruction Following LLMs (2025-05-29)
Power-Law Decay Loss for Large Language Model Finetuning: Focusing on Information Sparsity to Enhance Generation Quality (2025-05-22)
Ask, Retrieve, Summarize: A Modular Pipeline for Scientific Literature Summarization (2025-05-22)
Hallucinate at the Last in Long Response Generation: A Case Study on Long Document Summarization (2025-05-21)