


HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization

Xingxing Zhang, Furu Wei, Ming Zhou

2019-05-16 · ACL 2019
Tasks: Extractive Text Summarization · Document Summarization · Extractive Summarization
Paper · PDF

Abstract

Neural extractive summarization models usually employ a hierarchical encoder for document encoding, and they are trained using sentence-level labels that are created heuristically by rule-based methods. Training the hierarchical encoder with these inaccurate labels is challenging. Inspired by recent work on pre-training transformer sentence encoders (Devlin et al., 2018), we propose HIBERT (shorthand for HIerarchical Bidirectional Encoder Representations from Transformers) for document encoding, together with a method to pre-train it on unlabeled data. Applying the pre-trained HIBERT to our summarization model outperforms a randomly initialized counterpart by 1.25 ROUGE on the CNN/Daily Mail dataset and by 2.0 ROUGE on a version of the New York Times dataset. We also achieve state-of-the-art performance on these two datasets.
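
For intuition, here is a minimal PyTorch sketch of the hierarchical encoder the abstract describes: a sentence-level Transformer contextualizes the tokens of each sentence, a document-level Transformer contextualizes the resulting sentence embeddings, and a linear head scores each sentence for extraction. The layer sizes, first-token pooling, omission of positional encodings, and the classifier head are illustrative assumptions, not the paper's exact configuration (which also adds the document-level pre-training objective).

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Two-level Transformer: tokens -> sentence embeddings -> document context."""

    def __init__(self, vocab_size=30522, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Sentence-level encoder: contextualizes tokens within each sentence.
        self.sent_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        # Document-level encoder: contextualizes sentence embeddings across the document.
        self.doc_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers)
        # Extractive head: one keep/skip logit per sentence (assumption).
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, token_ids):
        # token_ids: (batch, num_sents, sent_len); positional encodings omitted for brevity.
        b, n, l = token_ids.shape
        tokens = self.embed(token_ids.view(b * n, l))     # (b*n, sent_len, d_model)
        token_states = self.sent_encoder(tokens)          # contextualized tokens
        sent_emb = token_states[:, 0, :].view(b, n, -1)   # first-token pooling (assumption)
        doc_states = self.doc_encoder(sent_emb)           # (b, num_sents, d_model)
        return self.classifier(doc_states).squeeze(-1)    # (b, num_sents) sentence scores

# Toy usage: score 16 sentences of 32 tokens each, for a batch of 2 documents.
scores = HierarchicalEncoder()(torch.randint(0, 30522, (2, 16, 32)))
print(scores.shape)  # torch.Size([2, 16])
```

Pre-training then amounts to masking whole sentences and asking the model to regenerate their words from the document context, after which the extractive head is fine-tuned on the heuristic sentence labels.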

Results

Task                          | Dataset          | Metric  | Value | Model
------------------------------|------------------|---------|-------|-------
Text Summarization            | CNN / Daily Mail | ROUGE-1 | 42.37 | HIBERT
Text Summarization            | CNN / Daily Mail | ROUGE-2 | 19.95 | HIBERT
Text Summarization            | CNN / Daily Mail | ROUGE-L | 38.83 | HIBERT
Extractive Text Summarization | CNN / Daily Mail | ROUGE-1 | 42.37 | HIBERT
Extractive Text Summarization | CNN / Daily Mail | ROUGE-2 | 19.95 | HIBERT
Extractive Text Summarization | CNN / Daily Mail | ROUGE-L | 38.83 | HIBERT
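
The ROUGE-1, ROUGE-2, and ROUGE-L values above measure unigram, bigram, and longest-common-subsequence overlap between system and reference summaries. A hedged sketch of how such scores are typically computed today, using Google's rouge-score package (not necessarily the paper's own evaluation script; the summary strings are made up for illustration):

```python
# pip install rouge-score
from rouge_score import rouge_scorer

# F-measure variants of the three metrics reported in the table above.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the cat sat quietly on the mat"   # gold summary (made up)
prediction = "the cat was sitting on the mat"  # system summary (made up)
for name, score in scorer.score(reference, prediction).items():
    print(f"{name}: P={score.precision:.3f} R={score.recall:.3f} F={score.fmeasure:.3f}")
```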

Related Papers

GenerationPrograms: Fine-grained Attribution with Executable Programs (2025-06-17)
Arctic Long Sequence Training: Scalable And Efficient Training For Multi-Million Token Sequences (2025-06-16)
Improving Fairness of Large Language Models in Multi-document Summarization (2025-06-09)
ARC: Argument Representation and Coverage Analysis for Zero-Shot Long Document Summarization with Instruction Following LLMs (2025-05-29)
StrucSum: Graph-Structured Reasoning for Long Document Extractive Summarization with LLMs (2025-05-29)
Ask, Retrieve, Summarize: A Modular Pipeline for Scientific Literature Summarization (2025-05-22)
Hallucinate at the Last in Long Response Generation: A Case Study on Long Document Summarization (2025-05-21)
Document Attribution: Examining Citation Relationships using Large Language Models (2025-05-09)