Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes

Nianlong Gu, Elliott Ash, Richard H. R. Hahnloser

Published: 2021-07-19 · ACL 2022
Tasks: Extractive Text Summarization · Text Summarization · Extractive Summarization
Links: Paper · PDF · Code (official)

Abstract

We introduce MemSum (Multi-step Episodic Markov decision process extractive SUMmarizer), a reinforcement-learning-based extractive summarizer enriched at each step with information on the current extraction history. When MemSum iteratively selects sentences into the summary, it considers a broad information set that would intuitively also be used by humans in this task: 1) the text content of the sentence, 2) the global text context of the rest of the document, and 3) the extraction history consisting of the set of sentences that have already been extracted. With a lightweight architecture, MemSum obtains state-of-the-art test-set performance (ROUGE) in summarizing long documents taken from PubMed, arXiv, and GovReport. Ablation studies demonstrate the importance of local, global, and history information. A human evaluation confirms the high quality and low redundancy of the generated summaries, stemming from MemSum's awareness of extraction history.
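The multi-step extraction loop described above can be illustrated with a toy sketch. This is not the authors' implementation: the scoring function below is a hypothetical word-overlap heuristic standing in for MemSum's learned policy, but it shows the shape of the process — at each step, every remaining sentence is scored against the three signals (local content, global context, extraction history), the best one is added to the summary, and a stop condition ends extraction.

```python
# Hedged sketch of a MemSum-style multi-step extraction loop (illustrative only;
# the real model scores sentences with learned neural encoders and is trained
# with reinforcement learning).

def extract_summary(sentences, score_fn, max_sents=3, stop_threshold=0.0):
    """Iteratively select the best-scoring sentence until a stop condition fires."""
    history = []  # indices of already-extracted sentences (the "extraction history")
    remaining = list(range(len(sentences)))
    while remaining and len(history) < max_sents:
        extracted = [sentences[j] for j in history]
        scores = {i: score_fn(sentences[i], sentences, extracted) for i in remaining}
        best = max(scores, key=scores.get)
        if scores[best] <= stop_threshold:  # analogous to the paper's learned stop action
            break
        history.append(best)
        remaining.remove(best)
    return [sentences[i] for i in sorted(history)]


def toy_score(sentence, document, history):
    """Hypothetical heuristic scorer: reward overlap with the full document
    (global context) and penalize overlap with already-extracted sentences
    (history awareness, which suppresses redundancy)."""
    words = set(sentence.split())
    doc_words = {w for s in document for w in s.split()}
    hist_words = {w for s in history for w in s.split()}
    relevance = len(words & doc_words) / max(len(words), 1)
    redundancy = len(words & hist_words) / max(len(words), 1)
    return relevance - redundancy
```

With `max_sents=2` on the sentences `["a b c", "a b", "x y"]`, the history penalty steers the second pick away from the redundant `"a b"` toward the novel `"x y"` — the same redundancy-avoidance effect the human evaluation attributes to MemSum's history awareness.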

Results

Task | Dataset | Metric | Value | Model
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-1 | 48.42 | MemSum (extractive)
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-2 | 20.30 | MemSum (extractive)
Text Summarization | Arxiv HEP-TH citation graph | ROUGE-L | 42.54 | MemSum (extractive)
Text Summarization | Pubmed | ROUGE-1 | 49.25 | MemSum (extractive)
Text Summarization | Pubmed | ROUGE-2 | 22.94 | MemSum (extractive)
Text Summarization | Pubmed | ROUGE-L | 44.42 | MemSum (extractive)
Text Summarization | GovReport | Avg. Test ROUGE-1 | 59.43 | MemSum (extractive)
Text Summarization | GovReport | Avg. Test ROUGE-2 | 28.60 | MemSum (extractive)
Text Summarization | GovReport | Avg. Test ROUGE-Lsum | 56.69 | MemSum (extractive)
Extractive Text Summarization | GovReport | Avg. Test ROUGE-1 | 59.43 | MemSum (extractive)
Extractive Text Summarization | GovReport | Avg. Test ROUGE-2 | 28.60 | MemSum (extractive)
Extractive Text Summarization | GovReport | Avg. Test ROUGE-Lsum | 56.69 | MemSum (extractive)
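For readers unfamiliar with the metric in the table, here is a minimal illustration of how a ROUGE-N F1 score is computed: the n-gram overlap between a candidate summary and a reference. This is a simplified sketch; published results like those above come from the official ROUGE toolkit, which also applies stemming and other preprocessing.

```python
from collections import Counter

def rouge_n_f1(candidate, reference, n=1):
    """Simplified ROUGE-N F1: harmonic mean of n-gram precision and recall
    between a candidate summary and a reference summary."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n_f1("the cat sat", "the cat slept", n=1)` shares two of three unigrams on each side, giving precision = recall = F1 = 2/3.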

Related Papers

LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
On-the-Fly Adaptive Distillation of Transformer to Dual-State Linear Attention (2025-06-11)
Improving large language models with concept-aware fine-tuning (2025-06-09)
MaCP: Minimal yet Mighty Adaptation via Hierarchical Cosine Projection (2025-05-29)
StrucSum: Graph-Structured Reasoning for Long Document Extractive Summarization with LLMs (2025-05-29)
APE: A Data-Centric Benchmark for Efficient LLM Adaptation in Text Summarization (2025-05-26)
FiLLM -- A Filipino-optimized Large Language Model based on Southeast Asia Large Language Model (SEALLM) (2025-05-25)
Scaling Up Biomedical Vision-Language Models: Fine-Tuning, Instruction Tuning, and Multi-Modal Learning (2025-05-23)