Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

Reproducible scaling laws for contrastive language-image learning

Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, Jenia Jitsev

2022-12-14 · CVPR 2023

Tasks: Zero-Shot Cross-Modal Retrieval · Open Vocabulary Attribute Detection · Image Classification · Zero-Shot Image Classification · Retrieval · Zero-Shot Learning

Abstract

Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale experiments are becoming increasingly expensive. However, previous work on scaling laws has primarily used private data and models or focused on uni-modal language or vision learning. To address these limitations, we investigate scaling laws for contrastive language-image pre-training (CLIP) with the public LAION dataset and the open-source OpenCLIP repository. Our large-scale experiments involve models trained on up to two billion image-text pairs and identify power law scaling for multiple downstream tasks including zero-shot classification, retrieval, linear probing, and end-to-end fine-tuning. We find that the training distribution plays a key role in scaling laws as the OpenAI and OpenCLIP models exhibit different scaling behavior despite identical model architectures and similar training recipes. We open-source our evaluation workflow and all models, including the largest public CLIP models, to ensure reproducibility and make scaling laws research more accessible. Source code and instructions to reproduce this study will be available at https://github.com/LAION-AI/scaling-laws-openclip
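The power-law scaling the abstract describes can be sketched numerically. The snippet below is a minimal, hypothetical illustration (the sample counts and error values are synthetic, not results from the paper): it fits the functional form E(C) = a · C^(-b) to downstream error versus training scale by linear regression in log-log space, which is the standard way such scaling curves are estimated.

```python
import numpy as np

# Hypothetical illustration of fitting a power law E(C) = a * C**(-b)
# (downstream error vs. number of image-text samples seen), the
# functional form reported for scaling laws. All values below are
# synthetic, not taken from the paper.
samples_seen = np.array([3e9, 13e9, 34e9])       # training samples (synthetic)
zero_shot_error = np.array([0.40, 0.34, 0.30])   # 1 - accuracy (synthetic)

# A power law is linear in log-log space: log E = log a - b * log C,
# so an ordinary least-squares line fit recovers (a, b).
slope, intercept = np.polyfit(np.log(samples_seen), np.log(zero_shot_error), 1)
a, b = np.exp(intercept), -slope
print(f"fitted power law: E(C) = {a:.3f} * C^(-{b:.3f})")

# Extrapolate to a larger training scale (illustrative only).
predicted = a * (100e9) ** (-b)
print(f"predicted error at 100B samples: {predicted:.3f}")
```

The same fit applied to real benchmark numbers (e.g. the zero-shot accuracies released with the OpenCLIP models) is how one would check whether a task follows the reported power-law trend.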

Results

Task | Dataset | Metric | Value | Model
---- | ------- | ------ | ----- | -----
Image Retrieval with Multi-Modal Query | Flickr30k | Image-to-text R@5 | 99.3 | OpenCLIP ViT-H/14
Image Retrieval with Multi-Modal Query | Flickr30k | Text-to-image R@5 | 94.1 | OpenCLIP ViT-H/14
Object Detection | OVAD-Box benchmark | mean average precision | 17 | OpenCLIP ViT-B/32
3D | OVAD-Box benchmark | mean average precision | 17 | OpenCLIP ViT-B/32
2D Classification | OVAD-Box benchmark | mean average precision | 17 | OpenCLIP ViT-B/32
2D Object Detection | OVAD-Box benchmark | mean average precision | 17 | OpenCLIP ViT-B/32
Open Vocabulary Object Detection | OVAD-Box benchmark | mean average precision | 17 | OpenCLIP ViT-B/32
16k | OVAD-Box benchmark | mean average precision | 17 | OpenCLIP ViT-B/32
Zero-Shot Image Classification | Country211 | Top-1 accuracy | 30.01 | OpenCLIP H/14 (34B) (LAION-2B)
