Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Image Captioning on nocaps-val-overall

Metric: CIDEr (higher is better)
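CIDEr scores a candidate caption by its TF-IDF-weighted n-gram similarity to a set of human reference captions. A simplified single-candidate sketch is below; the full metric (CIDEr-D) additionally applies count clipping and a Gaussian length penalty, and computes IDF over the whole reference corpus of the benchmark. All function and variable names here are illustrative, not from any library.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cider_sketch(candidate, references, all_refs_corpus, max_n=4):
    """Simplified CIDEr: cosine similarity of TF-IDF n-gram vectors,
    averaged over n = 1..max_n and over references, scaled by 10.
    `all_refs_corpus` is a list of reference-caption lists (one per image),
    used only to estimate document frequencies for IDF."""
    num_docs = len(all_refs_corpus)
    score = 0.0
    for n in range(1, max_n + 1):
        # Document frequency: in how many images' reference sets each n-gram occurs.
        df = Counter()
        for doc in all_refs_corpus:
            seen = set()
            for ref in doc:
                seen |= set(ngrams(ref.split(), n))
            df.update(seen)

        def tfidf(counts):
            total = sum(counts.values()) or 1
            return {g: (c / total) * math.log(num_docs / max(1.0, df[g]))
                    for g, c in counts.items()}

        cand_vec = tfidf(ngrams(candidate.split(), n))
        sims = []
        for ref in references:
            ref_vec = tfidf(ngrams(ref.split(), n))
            dot = sum(cand_vec.get(g, 0.0) * w for g, w in ref_vec.items())
            norm = (math.sqrt(sum(v * v for v in cand_vec.values())) *
                    math.sqrt(sum(v * v for v in ref_vec.values())))
            sims.append(dot / norm if norm > 0 else 0.0)
        score += sum(sims) / len(sims)
    return 10.0 * score / max_n
```

Because of the 10x scaling, per-image scores fall in [0, 10], and corpus-level averages on nocaps are conventionally reported on a 0-100-style scale (as in the table below). For published numbers, use the reference implementation in `pycocoevalcap` rather than a sketch like this.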


Results

| # | Model | CIDEr | Extra Data | Paper | Date | Code |
|---|-------|-------|------------|-------|------|------|
| 1 | BLIP-2 ViT-G FlanT5 XL (zero-shot) | 121.6 | No | BLIP-2: Bootstrapping Language-Image Pre-trainin... | 2023-01-30 | Code |
| 2 | BLIP-2 ViT-G OPT 6.7B (zero-shot) | 121.0 | No | BLIP-2: Bootstrapping Language-Image Pre-trainin... | 2023-01-30 | Code |
| 3 | BLIP-2 ViT-G OPT 2.7B (zero-shot) | 119.7 | No | BLIP-2: Bootstrapping Language-Image Pre-trainin... | 2023-01-30 | Code |
| 4 | LEMON_large | 113.4 | No | Scaling Up Vision-Language Pre-training for Imag... | 2021-11-24 | - |
| 5 | BLIP_ViT-L | 113.2 | No | BLIP: Bootstrapping Language-Image Pre-training ... | 2022-01-28 | Code |
| 6 | SimVLM | 112.2 | No | SimVLM: Simple Visual Language Model Pretraining... | 2021-08-24 | Code |
| 7 | BLIP_CapFilt-L | 109.6 | No | BLIP: Bootstrapping Language-Image Pre-training ... | 2022-01-28 | Code |
| 8 | OmniVL | 107.5 | No | OmniVL: One Foundation Model for Image-Language a... | 2022-09-15 | - |
| 9 | VinVL | 95.5 | No | VinVL: Revisiting Visual Representations in Visi... | 2021-01-02 | Code |
| 10 | Enc-Dec | 90.2 | No | Conceptual 12M: Pushing Web-Scale Image-Text Pre... | 2021-02-17 | Code |
| 11 | OSCAR | 80.9 | No | Oscar: Object-Semantics Aligned Pre-training for... | 2020-04-13 | Code |