Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Searching for Effective Neural Extractive Summarization: What Works and What's Next

Ming Zhong, Pengfei Liu, Danqing Wang, Xipeng Qiu, Xuanjing Huang

Published: 2019-07-08 · ACL 2019
Tasks: Extractive Text Summarization · Text Summarization · Extractive Summarization
Links: Paper · PDF · Code

Abstract

The recent years have seen remarkable success in the use of deep neural networks on text summarization. However, there is no clear understanding of why they perform so well, or how they might be improved. In this paper, we seek to better understand how neural extractive summarization systems could benefit from different types of model architectures, transferable knowledge and learning schemas. Additionally, we find an effective way to improve current frameworks and achieve the state-of-the-art result on CNN/DailyMail by a large margin based on our observations and analyses. Hopefully, our work could provide more clues for future research on extractive summarization.

Results

Task                          | Dataset          | Metric  | Value | Model
------------------------------|------------------|---------|-------|-------
Text Summarization            | CNN / Daily Mail | ROUGE-1 | 42.69 | PNBERT
Text Summarization            | CNN / Daily Mail | ROUGE-2 | 19.6  | PNBERT
Text Summarization            | CNN / Daily Mail | ROUGE-L | 38.85 | PNBERT
Extractive Text Summarization | CNN / Daily Mail | ROUGE-1 | 42.69 | PNBERT
Extractive Text Summarization | CNN / Daily Mail | ROUGE-2 | 19.6  | PNBERT
Extractive Text Summarization | CNN / Daily Mail | ROUGE-L | 38.85 | PNBERT
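The scores above are ROUGE-N F1 values, which measure n-gram overlap between a generated summary and a reference. As a rough illustration of how the metric works (a simplified sketch, not the exact evaluation script used in the paper — real ROUGE implementations also apply stemming and, for ROUGE-L, longest-common-subsequence matching):

```python
from collections import Counter

def rouge_n(candidate: str, reference: str, n: int = 1) -> float:
    """F1-style ROUGE-N over whitespace-tokenized strings.

    Simplified sketch: no stemming, lowercasing, or stopword handling.
    """
    def ngrams(tokens, n):
        # Multiset of n-grams, so repeated n-grams are counted correctly.
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    if not cand or not ref:
        return 0.0
    # Counter & Counter takes the per-n-gram minimum count (clipped overlap).
    overlap = sum((cand & ref).values())
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n("the cat", "the cat sat on the mat")` has precision 1.0 but recall 1/3, giving an F1 of 0.5.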

Related Papers

- LRCTI: A Large Language Model-Based Framework for Multi-Step Evidence Retrieval and Reasoning in Cyber Threat Intelligence Credibility Verification (2025-07-15)
- On-the-Fly Adaptive Distillation of Transformer to Dual-State Linear Attention (2025-06-11)
- Improving large language models with concept-aware fine-tuning (2025-06-09)
- MaCP: Minimal yet Mighty Adaptation via Hierarchical Cosine Projection (2025-05-29)
- StrucSum: Graph-Structured Reasoning for Long Document Extractive Summarization with LLMs (2025-05-29)
- APE: A Data-Centric Benchmark for Efficient LLM Adaptation in Text Summarization (2025-05-26)
- FiLLM -- A Filipino-optimized Large Language Model based on Southeast Asia Large Language Model (SEALLM) (2025-05-25)
- Scaling Up Biomedical Vision-Language Models: Fine-Tuning, Instruction Tuning, and Multi-Modal Learning (2025-05-23)