Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks

Weicheng Kuo, AJ Piergiovanni, Dahun Kim, Xiyang Luo, Ben Caine, Wei Li, Abhijit Ogale, Luowei Zhou, Andrew Dai, Zhifeng Chen, Claire Cui, Anelia Angelova

2023-03-29 · Cross-Modal Retrieval · Question Answering · Video Question Answering · Video Captioning · Open Vocabulary Object Detection · Retrieval · Visual Question Answering (VQA) · Object Detection · Image Retrieval

Abstract

The development of language models has moved from encoder-decoder to decoder-only designs. In addition, we observe that the two most popular multimodal tasks, the generative and contrastive tasks, are nontrivial to accommodate in one architecture, and further require adaptations for downstream tasks. We propose a novel paradigm of training with a decoder-only model for multimodal tasks, which is surprisingly effective in jointly learning these disparate vision-language tasks. This is done with a simple model, called MaMMUT. It consists of a single vision encoder and a text decoder, and is able to accommodate contrastive and generative learning by a novel two-pass approach on the text decoder. We demonstrate that joint learning of these diverse objectives is simple, effective, and maximizes the weight-sharing of the model across these tasks. Furthermore, the same architecture enables straightforward extensions to open-vocabulary object detection and video-language tasks. The model tackles a diverse range of tasks, while being modest in capacity. Our model achieves the state of the art on image-text and text-image retrieval, video question answering and open-vocabulary detection tasks, outperforming much larger and more extensively trained foundational models. It shows very competitive results on VQA and video captioning, especially considering its capacity. Ablations confirm the flexibility and advantages of our approach.
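To make the two-pass idea in the abstract concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: the module names, dimensions, pooling, and the exact masking choices are assumptions. The same text decoder is run once without cross-attention to produce a text embedding for the contrastive loss, and once with cross-attention to the single vision encoder's features for the generative loss.

# Hypothetical sketch of MaMMUT's two-pass text decoder (not the released code):
# one shared decoder, run twice per batch, toggling only cross-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPassDecoderSketch(nn.Module):
    def __init__(self, dim=512, heads=8, vocab=32000):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.to_logits = nn.Linear(dim, vocab)

    def forward(self, text_ids, image_feats, use_cross_attention):
        x = self.tok(text_ids)
        L = x.size(1)
        causal = torch.triu(torch.ones(L, L), diagonal=1).bool()  # causal self-attention mask
        x = x + self.self_attn(x, x, x, attn_mask=causal)[0]
        if use_cross_attention:  # enabled only on the generative pass
            x = x + self.cross_attn(x, image_feats, image_feats)[0]
        return x + self.ffn(x)

decoder = TwoPassDecoderSketch()
text_ids = torch.randint(0, 32000, (4, 16))     # a toy batch of 4 captions
image_feats = torch.randn(4, 196, 512)          # tokens from the single vision encoder

# Pass 1 (contrastive): no cross-attention; pool text features into one embedding.
txt = F.normalize(decoder(text_ids, image_feats, use_cross_attention=False).mean(1), dim=-1)
img = F.normalize(image_feats.mean(1), dim=-1)
contrastive_loss = F.cross_entropy(img @ txt.T / 0.07, torch.arange(4))

# Pass 2 (generative): cross-attention on; predict the next token at each position.
hidden = decoder(text_ids, image_feats, use_cross_attention=True)
gen_logits = decoder.to_logits(hidden)          # would feed a captioning loss

The point of the sketch is the weight-sharing the abstract emphasizes: both losses are driven by the same decoder parameters, so contrastive and generative objectives are learned jointly rather than by separate heads or separate models.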

Results

Task | Dataset | Metric | Value | Model
Question Answering | COCO Visual Question Answering (VQA) real images 1.0 open ended | Test | 80.8 | MaMMUT (2B)
Visual Question Answering (VQA) | MSRVTT-QA | Accuracy | 0.495 | MaMMUT
Visual Question Answering (VQA) | MSVD-QA | Accuracy | 0.602 | MaMMUT (ours)
Visual Question Answering (VQA) | COCO Visual Question Answering (VQA) real images 2.0 open ended | Percentage correct | 80.7 | MaMMUT (2B)
Video Captioning | MSR-VTT | CIDEr | 73.6 | MaMMUT (ours)
Video Captioning | MSVD | CIDEr | 195.6 | MaMMUT
Image Retrieval | Flickr30k | Image-to-text R@1 | 94.9 | MaMMUT (ours)
Image Retrieval | Flickr30k | Image-to-text R@10 | 99.9 | MaMMUT (ours)
Image Retrieval | Flickr30k | Image-to-text R@5 | 99.5 | MaMMUT (ours)
Image Retrieval | Flickr30k | Recall@1 | 82.5 | MaMMUT (ours)
Image Retrieval | Flickr30k | Recall@10 | 98 | MaMMUT (ours)
Image Retrieval | Flickr30k | Recall@5 | 96 | MaMMUT (ours)
Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@1 | 70.7 | MaMMUT (ours)
Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@10 | 93.7 | MaMMUT (ours)
Image Retrieval with Multi-Modal Query | COCO 2014 | Image-to-text R@5 | 89.1 | MaMMUT (ours)
Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@1 | 70.7 | MaMMUT (ours)
Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@10 | 93.7 | MaMMUT (ours)
Cross-Modal Information Retrieval | COCO 2014 | Image-to-text R@5 | 89.1 | MaMMUT (ours)
Cross-Modal Retrieval | COCO 2014 | Image-to-text R@1 | 70.7 | MaMMUT (ours)
Cross-Modal Retrieval | COCO 2014 | Image-to-text R@10 | 93.7 | MaMMUT (ours)
Cross-Modal Retrieval | COCO 2014 | Image-to-text R@5 | 89.1 | MaMMUT (ours)
Visual Question Answering | COCO Visual Question Answering (VQA) real images 2.0 open ended | Percentage correct | 80.7 | MaMMUT (2B)
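As a reading aid for the retrieval rows above, R@K (Recall at K) is the fraction of queries whose ground-truth match appears among the top K ranked candidates, reported as a percentage. A small, self-contained sketch of that computation on hypothetical scores (not the evaluation code behind these numbers):

# Hypothetical illustration of Recall@K for paired image-text retrieval.
import numpy as np

def recall_at_k(similarity: np.ndarray, k: int) -> float:
    """similarity[i, j] scores query i against candidate j; the correct
    candidate for query i is assumed to sit at index i (paired data)."""
    top_k = np.argsort(-similarity, axis=1)[:, :k]                  # k best candidates per query
    hits = (top_k == np.arange(len(similarity))[:, None]).any(axis=1)
    return float(hits.mean()) * 100.0                               # percentage of queries recovered

scores = np.random.randn(1000, 1000)                                # e.g. image queries vs. text candidates
for k in (1, 5, 10):
    print(f"R@{k}: {recall_at_k(scores, k):.1f}")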

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)