Xingxing Zhang, Mirella Lapata, Furu Wei, Ming Zhou
Extractive summarization models require sentence-level labels, which are usually created heuristically (e.g., with rule-based methods) because most summarization datasets contain only document-summary pairs. Since these labels can be suboptimal, we propose a latent-variable extractive model in which sentences are viewed as latent variables, and sentences with activated variables are used to infer gold summaries. During training, the loss comes *directly* from gold summaries. Experiments on the CNN/Dailymail dataset show that our model improves over a strong extractive baseline trained on heuristically approximated labels and also performs competitively with several recent models.
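The core idea, training sentence selection directly against the gold summary rather than against heuristic sentence labels, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the `rouge_like` overlap score, the sigmoid sentence scorer, and the REINFORCE-style reward-weighted objective are all assumptions made for the example.

```python
# Illustrative sketch (assumed, not the paper's code): each sentence gets a
# Bernoulli latent variable z_i; a sampled subset forms a candidate summary,
# which is scored directly against the gold summary, and that score weights
# the log-probability of the sample (a REINFORCE-style surrogate loss).
import math
import random

def rouge_like(candidate, reference):
    """Crude unigram-overlap recall, a stand-in for a real ROUGE score."""
    cand = set(" ".join(candidate).split())
    ref = set(" ".join(reference).split())
    return len(cand & ref) / max(len(ref), 1)

def sentence_probs(scores):
    """Map raw per-sentence scores to selection probabilities via sigmoid."""
    return [1.0 / (1.0 + math.exp(-s)) for s in scores]

def sample_loss(doc_sents, gold_sents, scores, rng):
    """Sample latent selections z and return a reward-weighted negative
    log-likelihood; the training signal comes from the gold summary."""
    probs = sentence_probs(scores)
    z = [1 if rng.random() < p else 0 for p in probs]
    summary = [s for s, zi in zip(doc_sents, z) if zi]
    reward = rouge_like(summary, gold_sents)
    log_p = sum(math.log(p if zi else 1.0 - p) for p, zi in zip(probs, z))
    return -reward * log_p, z, reward

doc = ["the cat sat on the mat", "stocks fell sharply today", "it rained in london"]
gold = ["the cat sat on a mat"]
rng = random.Random(0)
loss, z, reward = sample_loss(doc, gold, [2.0, -1.0, -1.0], rng)
```

In a full model the per-sentence scores would come from a neural sentence encoder, and many samples per document would be averaged to reduce gradient variance; the sketch fixes them as constants to keep the latent-variable mechanics visible.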
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Text Summarization | CNN / Daily Mail | ROUGE-1 | 41.05 | Latent |
| Text Summarization | CNN / Daily Mail | ROUGE-2 | 18.77 | Latent |
| Text Summarization | CNN / Daily Mail | ROUGE-L | 37.54 | Latent |