Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks

Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei

2022-08-22 · Tasks: Cross-Modal Retrieval, Zero-Shot Cross-Modal Retrieval, Question Answering, Visual Question Answering (VQA), Image Classification, Masked Language Modeling, Language Modelling, Semantic Segmentation, Image Captioning, Visual Reasoning, Instance Segmentation, Object Detection, Retrieval
Paper · PDF · Code (official)

Abstract

A big convergence of language, vision, and multimodal pretraining is emerging. In this work, we introduce a general-purpose multimodal foundation model BEiT-3, which achieves state-of-the-art transfer performance on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: backbone architecture, pretraining task, and model scaling up. We introduce Multiway Transformers for general-purpose modeling, where the modular architecture enables both deep fusion and modality-specific encoding. Based on the shared backbone, we perform masked "language" modeling on images (Imglish), texts (English), and image-text pairs ("parallel sentences") in a unified manner. Experimental results show that BEiT-3 obtains state-of-the-art performance on object detection (COCO), semantic segmentation (ADE20K), image classification (ImageNet), visual reasoning (NLVR2), visual question answering (VQAv2), image captioning (COCO), and cross-modal retrieval (Flickr30K, COCO).
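The Multiway Transformer described above shares self-attention across modalities while routing each token through a modality-specific feed-forward expert. Below is a minimal NumPy sketch of that routing idea (single head, no layer norm or masking); all names and shapes here are illustrative assumptions, not the official implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultiwayBlock:
    """One Transformer block in the Multiway style: a self-attention layer
    shared by all tokens (enabling deep fusion of image and text), followed
    by a separate feed-forward expert per modality."""

    def __init__(self, d_model, d_ff, seed=0):
        rng = np.random.default_rng(seed)
        init = lambda *s: rng.standard_normal(s) / np.sqrt(s[0])
        self.wq, self.wk, self.wv = (init(d_model, d_model) for _ in range(3))
        # One FFN expert per modality ("Imglish" patch tokens vs. English words).
        self.experts = {m: (init(d_model, d_ff), init(d_ff, d_model))
                        for m in ("vision", "language")}

    def __call__(self, x, modality):
        # x: (seq, d_model); modality: per-token array of "vision"/"language".
        q, k, v = x @ self.wq, x @ self.wk, x @ self.wv
        attn = softmax(q @ k.T / np.sqrt(x.shape[-1]))  # attention shared by all tokens
        h = x + attn @ v                                # residual connection
        out = h.copy()
        for m, (w1, w2) in self.experts.items():
            idx = np.flatnonzero(modality == m)         # route tokens to their expert
            if idx.size:
                out[idx] = h[idx] + np.maximum(h[idx] @ w1, 0.0) @ w2
        return out
```

For an image-text pair, patch tokens would be tagged "vision" and word tokens "language": because attention is shared, the block can perform deep fusion, while the per-modality experts preserve modality-specific encoding.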

Results

Task | Dataset | Metric | Value | Model
Visual Question Answering (VQA) | VQA v2 test-dev | Accuracy | 84.19 | BEiT-3
Visual Question Answering (VQA) | VQA v2 test-std | Overall accuracy | 84.03 | BEiT-3
Visual Reasoning | NLVR2 dev | Accuracy | 91.51 | BEiT-3
Visual Reasoning | NLVR2 test | Accuracy | 92.58 | BEiT-3
Semantic Segmentation | ADE20K val | mIoU | 62.8 | BEiT-3
Semantic Segmentation | ADE20K | Params (M) | 1900 | BEiT-3
Object Detection | COCO test-dev | box mAP | 63.7 | BEiT-3
Instance Segmentation | COCO test-dev | mask AP | 54.8 | BEiT-3
Cross-Modal Retrieval | Flickr30k | Image-to-text R@1 | 98.0 | BEiT-3
Cross-Modal Retrieval | Flickr30k | Image-to-text R@5 | 100.0 | BEiT-3
Cross-Modal Retrieval | Flickr30k | Image-to-text R@10 | 100.0 | BEiT-3
Cross-Modal Retrieval | Flickr30k | Text-to-image R@1 | 90.3 | BEiT-3
Cross-Modal Retrieval | Flickr30k | Text-to-image R@5 | 98.7 | BEiT-3
Cross-Modal Retrieval | Flickr30k | Text-to-image R@10 | 99.5 | BEiT-3
Cross-Modal Retrieval | COCO 2014 | Image-to-text R@1 | 84.8 | BEiT-3
Cross-Modal Retrieval | COCO 2014 | Image-to-text R@5 | 96.5 | BEiT-3
Cross-Modal Retrieval | COCO 2014 | Image-to-text R@10 | 98.3 | BEiT-3
Cross-Modal Retrieval | COCO 2014 | Text-to-image R@1 | 67.2 | BEiT-3
Cross-Modal Retrieval | COCO 2014 | Text-to-image R@5 | 87.7 | BEiT-3
Cross-Modal Retrieval | COCO 2014 | Text-to-image R@10 | 92.8 | BEiT-3
Zero-Shot Cross-Modal Retrieval | Flickr30k | Image-to-text R@1 | 94.9 | BEiT-3
Zero-Shot Cross-Modal Retrieval | Flickr30k | Image-to-text R@5 | 99.9 | BEiT-3
Zero-Shot Cross-Modal Retrieval | Flickr30k | Image-to-text R@10 | 100.0 | BEiT-3
Zero-Shot Cross-Modal Retrieval | Flickr30k | Text-to-image R@1 | 81.5 | BEiT-3
Zero-Shot Cross-Modal Retrieval | Flickr30k | Text-to-image R@5 | 95.6 | BEiT-3
Zero-Shot Cross-Modal Retrieval | Flickr30k | Text-to-image R@10 | 97.8 | BEiT-3
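The retrieval rows above report Recall@K: the percentage of queries whose ground-truth match appears among the K highest-scoring candidates. A minimal sketch of the metric, assuming a precomputed query-candidate similarity matrix (the function name and shapes are illustrative):

```python
import numpy as np

def recall_at_k(sim, gt, k):
    """sim: (n_queries, n_candidates) similarity scores;
    gt: index of the correct candidate for each query.
    Returns Recall@K as a percentage."""
    topk = np.argsort(-sim, axis=1)[:, :k]    # K best-scoring candidates per query
    hits = (topk == gt[:, None]).any(axis=1)  # did the true match appear in the top K?
    return 100.0 * hits.mean()
```

For image-to-text R@1 on Flickr30k, `sim` would hold image-query versus caption-candidate scores and `gt` the index of a caption paired with each image (in practice each image has several reference captions, so a hit on any of them counts).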

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)