Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Meshed-Memory Transformer for Image Captioning

Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, Rita Cucchiara

Published: 2019-12-17 · CVPR 2020
Tasks: Machine Translation, Text Generation, Translation, Image Captioning
Links: Paper · PDF · Code (official)

Abstract

Transformer-based architectures represent the state of the art in sequence modeling tasks like machine translation and language understanding. Their applicability to multi-modal contexts like image captioning, however, is still largely under-explored. With the aim of filling this gap, we present M$^2$ - a Meshed Transformer with Memory for Image Captioning. The architecture improves both the image encoding and the language generation steps: it learns a multi-level representation of the relationships between image regions, integrating learned a priori knowledge, and uses mesh-like connectivity at the decoding stage to exploit both low- and high-level features. Experimentally, we investigate the performance of the M$^2$ Transformer and of different fully-attentive models in comparison with recurrent ones. When tested on COCO, our proposal achieves a new state of the art in single-model and ensemble configurations on the "Karpathy" test split and on the online test server. We also assess its performance when describing objects unseen in the training set. Trained models and code for reproducing the experiments are publicly available at: https://github.com/aimagelab/meshed-memory-transformer.
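The "learned a priori knowledge" in the encoder comes from memory-augmented attention: learned key/value slots, independent of the input image, are appended to the keys and values computed from the region features, so attention can draw on persistent knowledge as well as on the current image. The sketch below illustrates this idea in a minimal single-head PyTorch module; all class and parameter names (and the dimensions) are illustrative assumptions, not the official implementation, which is available at the repository linked above.

```python
import torch
import torch.nn as nn

class MemoryAugmentedAttention(nn.Module):
    """Single-head sketch of memory-augmented attention (illustrative,
    not the paper's official code): learned memory slots are concatenated
    to the input-derived keys and values before the attention softmax."""

    def __init__(self, d_model=64, n_mem=4):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Learned memory key/value slots, independent of the input image.
        self.mem_k = nn.Parameter(torch.randn(n_mem, d_model))
        self.mem_v = nn.Parameter(torch.randn(n_mem, d_model))
        self.scale = d_model ** -0.5

    def forward(self, x):  # x: (batch, n_regions, d_model)
        b = x.size(0)
        q = self.q(x)
        # Append the memory slots along the sequence axis so the
        # softmax attends over regions AND memory simultaneously.
        k = torch.cat([self.k(x), self.mem_k.expand(b, -1, -1)], dim=1)
        v = torch.cat([self.v(x), self.mem_v.expand(b, -1, -1)], dim=1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v  # (batch, n_regions, d_model)

x = torch.randn(2, 10, 64)            # 2 images, 10 region features each
out = MemoryAugmentedAttention()(x)
print(out.shape)                      # torch.Size([2, 10, 64])
```

The output keeps the shape of the input regions; only the attention context is enlarged by the memory slots. The paper additionally stacks several such encoder layers and lets every decoder layer attend to all of them through the meshed connectivity.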

Results

Task             | Dataset                          | Metric  | Value | Model
Image Captioning | COCO Captions                    | BLEU-1  | 80.8  | Meshed-Memory Transformer
Image Captioning | COCO Captions                    | BLEU-4  | 39.1  | Meshed-Memory Transformer
Image Captioning | COCO Captions                    | CIDEr   | 131.2 | Meshed-Memory Transformer
Image Captioning | COCO Captions                    | METEOR  | 29.2  | Meshed-Memory Transformer
Image Captioning | COCO Captions                    | ROUGE-L | 58.6  | Meshed-Memory Transformer
Image Captioning | COCO Captions                    | SPICE   | 22.6  | Meshed-Memory Transformer
Image Captioning | COCO (Common Objects in Context) | CIDEr   | 131.2 | M2 Transformer

Related Papers

Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
A Translation of Probabilistic Event Calculus into Markov Decision Processes (2025-07-17)
Mitigating Object Hallucinations via Sentence-Level Early Intervention (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)
The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs (2025-07-15)
Seq vs Seq: An Open Suite of Paired Encoders and Decoders (2025-07-15)
Hashed Watermark as a Filter: Defeating Forging and Overwriting Attacks in Weight-based Neural Network Watermarking (2025-07-15)
Function-to-Style Guidance of LLMs for Code Translation (2025-07-15)