
Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese

An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou

Published: 2022-11-02
Tasks: Image Classification, Representation Learning, Zero-Shot Image Classification, Zero-shot Text-to-Image Retrieval, Contrastive Learning, Zero-shot Text Retrieval, Retrieval, Zero-Shot Learning, Zero-shot Image Retrieval, Image Retrieval
Links: Paper · PDF · Code (official)

Abstract

The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, in which the model is first trained with the image encoder frozen and then trained with all parameters optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP achieves state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in both the zero-shot and finetuning setups, and that it achieves competitive performance in zero-shot image classification on the ELEVATER benchmark (Li et al., 2022). We have released our code, models, and demos at https://github.com/OFA-Sys/Chinese-CLIP.
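As a concrete illustration of the two-stage recipe described in the abstract, the snippet below pairs a standard CLIP-style symmetric contrastive (InfoNCE) loss with a freeze-then-unfreeze training loop. This is a minimal sketch, not the released Chinese-CLIP code: the encoder modules, data loader, learning rate, and fixed temperature are all hypothetical placeholders.

```python
# Sketch of two-stage contrastive pretraining: stage 1 trains only the
# text tower against a frozen image encoder; stage 2 unfreezes everything.
import torch
import torch.nn.functional as F

def clip_loss(image_emb, text_emb, logit_scale):
    """Symmetric InfoNCE over the in-batch image-text pairs."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = logit_scale * image_emb @ text_emb.t()  # (B, B) similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    # Matched pairs sit on the diagonal; classify in both directions.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

def train_stage(image_encoder, text_encoder, loader, epochs, freeze_image):
    # Stage 1: freeze_image=True -> only the text encoder is optimized.
    for p in image_encoder.parameters():
        p.requires_grad = not freeze_image
    params = list(text_encoder.parameters())
    if not freeze_image:
        params += list(image_encoder.parameters())
    opt = torch.optim.AdamW(params, lr=1e-4)  # placeholder hyperparameters
    logit_scale = torch.tensor(100.0)  # CLIP learns this; fixed for brevity
    for _ in range(epochs):
        for images, texts in loader:
            loss = clip_loss(image_encoder(images), text_encoder(texts),
                             logit_scale)
            opt.zero_grad()
            loss.backward()
            opt.step()

# train_stage(img_enc, txt_enc, loader, epochs=3, freeze_image=True)   # stage 1
# train_stage(img_enc, txt_enc, loader, epochs=3, freeze_image=False)  # stage 2
```

The intuition, per the abstract: stage 1 lets the text tower align to a fixed pretrained vision backbone, and stage 2 then adapts both towers jointly.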

Results

All results below are for the Image Retrieval task; values are recall (%).

MUGE Retrieval

Model                       R@1    R@5    R@10   Mean Recall
CN-CLIP (ViT-H/14)          68.9   88.7   93.1   83.6
CN-CLIP (ViT-L/14@336px)    65.3   86.7   92.1   81.3
CN-CLIP (ViT-L/14)          63.3   85.6   91.3   80.1
CN-CLIP (ViT-B/16)          58.4   83.6   90.0   77.4
CN-CLIP (RN50)              48.6   75.1   84.0   69.2

Flickr30k-CN

Model                       R@1    R@5    R@10
CN-CLIP (ViT-L/14@336px)    84.4   97.1   98.7
CN-CLIP (ViT-H/14)          83.8   96.9   98.6
CN-CLIP (ViT-L/14)          82.7   96.7   98.6
CN-CLIP (ViT-B/16)          79.1   94.8   97.4
CN-CLIP (RN50)              66.7   89.4   94.1

COCO-CN

Model                       R@1    R@5    R@10
CN-CLIP (ViT-H/14)          81.5   96.9   99.1
CN-CLIP (ViT-L/14@336px)    80.1   96.7   99.2
CN-CLIP (ViT-L/14)          78.9   96.3   99.0
CN-CLIP (ViT-B/16)          77.0   97.1   99.0
CN-CLIP (RN50)              66.8   91.1   97.0
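Here R@K is the percentage of queries whose ground-truth match appears among the top K retrieved candidates, and Mean Recall is the average of R@1, R@5, and R@10, apparently computed before rounding (e.g. for CN-CLIP (ViT-H/14) on MUGE, (68.9 + 88.7 + 93.1) / 3 ≈ 83.6). A minimal sketch of the metric, assuming a precomputed query-item similarity matrix and one relevant item per query:

```python
import torch

def retrieval_recall(sim, gt, ks=(1, 5, 10)):
    """sim: (num_queries, num_items) similarity matrix.
    gt: (num_queries,) index of the single relevant item per query."""
    out = {}
    for k in ks:
        topk = sim.topk(k, dim=-1).indices           # (num_queries, k)
        hit = (topk == gt.unsqueeze(-1)).any(dim=-1) # relevant in top-k?
        out[f"R@{k}"] = 100.0 * hit.float().mean().item()
    out["Mean Recall"] = sum(out[f"R@{k}"] for k in ks) / len(ks)
    return out

# Sanity check against the MUGE row for CN-CLIP (ViT-H/14):
# (68.9 + 88.7 + 93.1) / 3 = 83.57 -> reported as 83.6
```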

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
Federated Learning for Commercial Image Sources (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)