Ting-Yao Hsu, C. Lee Giles, Ting-Hao 'Kenneth' Huang
Researchers use figures to communicate rich, complex information in scientific papers. The captions of these figures are critical to conveying effective messages. However, low-quality figure captions are common in scientific articles and can hinder understanding. In this paper, we propose an end-to-end neural framework to automatically generate informative, high-quality captions for scientific figures. To this end, we introduce SCICAP, a large-scale figure-caption dataset based on computer science arXiv papers published between 2010 and 2020. After pre-processing (including figure-type classification, sub-figure identification, text normalization, and caption text selection), SCICAP contained more than two million figures extracted from over 290,000 papers. We then established baseline models that caption graph plots, the dominant (19.2%) figure type. The experimental results showed both opportunities and steep challenges in generating captions for scientific figures.
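The baselines reported below all follow the CNN+LSTM encoder-decoder pattern: a convolutional encoder maps the figure image to a feature vector, and an LSTM decoder generates the caption conditioned on it. The following is a minimal PyTorch sketch of that pattern, not the authors' exact architecture; the layer sizes, vocabulary size, and the choice to feed the image feature as a pseudo-token are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """Small convolutional encoder mapping a figure image to a feature vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, images):                # images: (B, 3, H, W)
        return self.fc(self.conv(images).flatten(1))   # (B, feat_dim)

class LSTMDecoder(nn.Module):
    """LSTM language model conditioned on the image feature."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, captions):       # captions: (B, T) token ids
        tokens = self.embed(captions)          # (B, T, embed_dim)
        # Prepend the image feature as a pseudo-token, a common conditioning trick.
        inputs = torch.cat([feats.unsqueeze(1), tokens], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                # (B, T+1, vocab_size)

# Quick shape check with random inputs (vocabulary and image size are arbitrary).
encoder, decoder = CNNEncoder(), LSTMDecoder(vocab_size=10_000)
images = torch.randn(4, 3, 224, 224)
captions = torch.randint(0, 10_000, (4, 20))
logits = decoder(encoder(images), captions)    # (4, 21, 10_000)
```

The reported BLEU-4 scores for these baseline variants, across input modalities (vision, text, or both) and caption-selection filters, are listed below.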
| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Captioning | SCICAP | BLEU-4 | 0.0219 | CNN+LSTM (Vision only, First sentence) |
| Image Captioning | SCICAP | BLEU-4 | 0.0213 | CNN+LSTM (Text only, First sentence) |
| Image Captioning | SCICAP | BLEU-4 | 0.0212 | CNN+LSTM (Text only, Single-Sent Caption) |
| Image Captioning | SCICAP | BLEU-4 | 0.0207 | CNN+LSTM (Vision only, Single-Sent Caption) |
| Image Captioning | SCICAP | BLEU-4 | 0.0205 | CNN+LSTM (Vision + Text, First sentence) |
| Image Captioning | SCICAP | BLEU-4 | 0.0202 | CNN+LSTM (Vision + Text, Single-Sent Caption) |
| Image Captioning | SCICAP | BLEU-4 | 0.0172 | CNN+LSTM (Vision only, Caption ≤ 100 words) |
| Image Captioning | SCICAP | BLEU-4 | 0.0168 | CNN+LSTM (Vision + Text, Caption ≤ 100 words) |
| Image Captioning | SCICAP | BLEU-4 | 0.0165 | CNN+LSTM (Text only, Caption ≤ 100 words) |
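BLEU-4 measures n-gram overlap (up to 4-grams) between generated and reference captions. The paper does not specify which BLEU implementation it used; a sketch of a corpus-level BLEU-4 computation with NLTK, on hypothetical pre-tokenized captions, might look like:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical captions, tokenized into word lists; each hypothesis
# can have multiple references, hence the nested list.
references = [[["accuracy", "of", "the", "proposed", "model", "on", "cifar-10"]]]
hypotheses = [["accuracy", "of", "the", "model"]]

# BLEU-4: equal weights over 1- to 4-grams; smoothing avoids zero scores
# when short captions share no 4-grams with the reference.
score = corpus_bleu(
    references, hypotheses,
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-4: {score:.4f}")
```

Scores around 0.02, as in the table above, indicate very low n-gram overlap with the reference captions, which underscores how difficult scientific figure captioning remains for these baselines.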