Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, Yejin Choi
Image captioning has conventionally relied on reference-based automatic evaluations, where machine captions are compared against captions written by humans. This is in contrast to the reference-free manner in which humans assess caption quality. In this paper, we report the surprising empirical finding that CLIP (Radford et al., 2021), a cross-modal model pretrained on 400M image+caption pairs from the web, can be used for robust automatic evaluation of image captioning without the need for references. Experiments spanning several corpora demonstrate that our new reference-free metric, CLIPScore, achieves the highest correlation with human judgements, outperforming existing reference-based metrics like CIDEr and SPICE. Information gain experiments demonstrate that CLIPScore, with its tight focus on image-text compatibility, is complementary to existing reference-based metrics that emphasize text-text similarities. Thus, we also present a reference-augmented version, RefCLIPScore, which achieves even higher correlation. Beyond literal description tasks, several case studies reveal domains where CLIPScore performs well (clip-art images, alt-text rating), but also where it is relatively weaker in comparison to reference-based metrics, e.g., news captions that require richer contextual knowledge.
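For concreteness, below is a minimal sketch of how the two metrics could be computed with the open-source CLIP package (github.com/openai/CLIP), assuming the ViT-B/32 backbone and the rescaling constant w = 2.5 described in the paper; the helper functions and names here are illustrative, not the authors' reference implementation.

```python
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Assumption: the ViT-B/32 CLIP backbone, as used in the paper's experiments.
model, preprocess = clip.load("ViT-B/32", device=device)

def _embed_images(paths):
    # Encode images and L2-normalize so dot products are cosine similarities.
    imgs = torch.stack([preprocess(Image.open(p)) for p in paths]).to(device)
    with torch.no_grad():
        feats = model.encode_image(imgs)
    return feats / feats.norm(dim=-1, keepdim=True)

def _embed_texts(texts):
    # Encode (and truncate, if needed) captions, then L2-normalize.
    toks = clip.tokenize(texts, truncate=True).to(device)
    with torch.no_grad():
        feats = model.encode_text(toks)
    return feats / feats.norm(dim=-1, keepdim=True)

def clip_score(image_path, candidate, w=2.5):
    """Reference-free CLIP-S: w * max(cos(image, candidate), 0)."""
    img = _embed_images([image_path])
    txt = _embed_texts([candidate])
    cos = (img @ txt.T).item()
    return w * max(cos, 0.0)

def ref_clip_score(image_path, candidate, references, w=2.5):
    """RefCLIP-S: harmonic mean of CLIP-S and the best candidate-reference cosine."""
    cand = _embed_texts([candidate])
    refs = _embed_texts(references)
    ref_cos = max((cand @ refs.T).max().item(), 0.0)
    cs = clip_score(image_path, candidate, w=w)
    if cs == 0.0 or ref_cos == 0.0:
        return 0.0
    return 2 * cs * ref_cos / (cs + ref_cos)
```

Clipping negative similarities to zero keeps both quantities non-negative, and the harmonic mean means RefCLIP-S rewards a candidate only when it is compatible with both the image and at least one human-written reference.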
| Task | Dataset | Metric | Value | Method |
|---|---|---|---|---|
| Human Judgment Correlation | Flickr8k-CF | Kendall's Tau-b | 36.4 | RefCLIP-S |
| Human Judgment Correlation | Flickr8k-CF | Kendall's Tau-b | 34.4 | CLIP-S |
| Human Judgment Correlation | Flickr8k-Expert | Kendall's Tau-c | 53.0 | RefCLIP-S |
| Human Judgment Correlation | Flickr8k-Expert | Kendall's Tau-c | 51.2 | CLIP-S |
| Human Judgment Classification | Pascal-50S | Mean Accuracy | 83.1 | RefCLIP-S |
| Human Judgment Classification | Pascal-50S | Mean Accuracy | 80.7 | CLIP-S |