Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Systematically Exploring Redundancy Reduction in Summarizing Long Documents

Wen Xiao, Giuseppe Carenini

2020-11-30 · Asian Chapter of the Association for Computational Linguistics (AACL) 2020 · Text Summarization

Paper · PDF · Code (official)

Abstract

Our analysis of large summarization datasets indicates that redundancy is a very serious problem when summarizing long documents. Yet, redundancy reduction has not been thoroughly investigated in neural summarization. In this work, we systematically explore and compare different ways to deal with redundancy when summarizing long documents. Specifically, we organize the existing methods into categories based on when and how the redundancy is considered. Then, in the context of these categories, we propose three additional methods balancing non-redundancy and importance in a general and flexible way. In a series of experiments, we show that our proposed methods achieve the state-of-the-art with respect to ROUGE scores on two scientific paper datasets, Pubmed and arXiv, while reducing redundancy significantly.
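The two model variants on the leaderboard below reflect the two kinds of methods the names suggest: MMR-Select+, which reduces redundancy at sentence-selection time using Maximal Marginal Relevance, and RdLoss, which instead penalizes redundancy during training. As a rough illustration of the former, here is a minimal MMR-style greedy selection sketch in Python. The importance scores, similarity matrix, and trade-off weight `lam` are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch of Maximal Marginal Relevance (MMR) extractive selection,
# in the spirit of the paper's MMR-Select variants. All inputs here are
# hypothetical placeholders, not the authors' implementation.
import numpy as np

def mmr_select(importance, sim, k=5, lam=0.6):
    """Greedily pick k sentences, trading off importance against redundancy.

    importance: (n,) model-assigned importance score per sentence
    sim:        (n, n) pairwise sentence similarity matrix in [0, 1]
    lam:        weight on importance; (1 - lam) weights the redundancy penalty
    """
    selected, candidates = [], list(range(len(importance)))
    while candidates and len(selected) < k:
        def mmr_score(i):
            # Redundancy = similarity to the closest already-selected sentence.
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            return lam * importance[i] - (1.0 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage: sentences 0 and 1 are near-duplicates, so after picking 0,
# MMR skips 1 and takes the less redundant sentence 2 instead.
imp = np.array([0.9, 0.85, 0.6, 0.4])
sim = np.array([[1.0, 0.95, 0.1, 0.2],
                [0.95, 1.0, 0.1, 0.2],
                [0.1, 0.1, 1.0, 0.3],
                [0.2, 0.2, 0.3, 1.0]])
print(mmr_select(imp, sim, k=2))  # -> [0, 2]
```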

Results

Task               | Dataset                     | Metric  | Value | Model
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-1 | 44.01 | ExtSum-LG+RdLoss
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-2 | 17.79 | ExtSum-LG+RdLoss
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-L | 39.09 | ExtSum-LG+RdLoss
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-1 | 43.87 | ExtSum-LG+MMR-Select+
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-2 | 17.5  | ExtSum-LG+MMR-Select+
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-L | 38.97 | ExtSum-LG+MMR-Select+
Text Summarization | Pubmed                      | ROUGE-1 | 45.39 | ExtSum-LG+MMR-Select+
Text Summarization | Pubmed                      | ROUGE-2 | 20.37 | ExtSum-LG+MMR-Select+
Text Summarization | Pubmed                      | ROUGE-L | 40.99 | ExtSum-LG+MMR-Select+
Text Summarization | Pubmed                      | ROUGE-1 | 45.3  | ExtSum-LG+RdLoss
Text Summarization | Pubmed                      | ROUGE-2 | 20.42 | ExtSum-LG+RdLoss
Text Summarization | Pubmed                      | ROUGE-L | 40.95 | ExtSum-LG+RdLoss
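To score your own summaries against these numbers, a common starting point is Google's rouge-score package (pip install rouge-score). This is a sketch under assumptions: the paper's reported values come from its own evaluation pipeline, and reimplementations can differ slightly from the original ROUGE toolkit; the texts below are placeholders.

```python
# Compute ROUGE-1/2/L F1 for a candidate summary against a reference using
# the rouge-score package. Example texts are hypothetical placeholders.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "redundancy is a serious problem when summarizing long documents"
candidate = "redundancy is a serious issue in long document summarization"

scores = scorer.score(reference, candidate)
for name, s in scores.items():
    # Each Score holds precision, recall, and F-measure; leaderboards report F1.
    print(f"{name}: F1 = {s.fmeasure:.4f}")
```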

Related Papers

LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
On-the-Fly Adaptive Distillation of Transformer to Dual-State Linear Attention (2025-06-11)
Improving large language models with concept-aware fine-tuning (2025-06-09)
MaCP: Minimal yet Mighty Adaptation via Hierarchical Cosine Projection (2025-05-29)
APE: A Data-Centric Benchmark for Efficient LLM Adaptation in Text Summarization (2025-05-26)
FiLLM -- A Filipino-optimized Large Language Model based on Southeast Asia Large Language Model (SEALLM) (2025-05-25)
Scaling Up Biomedical Vision-Language Models: Fine-Tuning, Instruction Tuning, and Multi-Modal Learning (2025-05-23)
A Structured Literature Review on Traditional Approaches in Current Natural Language Processing (2025-05-19)