Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


An Empirical Study of CLIP for Text-based Person Search

Min Cao, Yang Bai, Ziyin Zeng, Mang Ye, Min Zhang

2023-08-19 · Cross-Modal Retrieval · Person Search · Model Compression · Data Augmentation · Retrieval · Text based Person Search · Text based Person Retrieval
Paper · PDF · Code (official)

Abstract

Text-based Person Search (TBPS) aims to retrieve person images using natural language descriptions. Recently, Contrastive Language-Image Pre-training (CLIP), a universal large-scale cross-modal vision-language pre-training model, has performed remarkably on various cross-modal downstream tasks thanks to its powerful cross-modal semantic learning capacity. TBPS, as a fine-grained cross-modal retrieval task, has likewise seen a rise in CLIP-based research. To explore the potential of the vision-language pre-training model for downstream TBPS tasks, this paper makes the first attempt to conduct a comprehensive empirical study of CLIP for TBPS and thus contributes a straightforward, incremental, yet strong TBPS-CLIP baseline to the TBPS community. We revisit critical design considerations under CLIP, including data augmentation and the loss function. The model, with the aforementioned designs and practical training tricks, attains satisfactory performance without any sophisticated modules. We also conduct probing experiments on TBPS-CLIP in terms of model generalization and model compression, demonstrating its effectiveness from various aspects. This work is expected to provide empirical insights and highlight future CLIP-based TBPS research.
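The retrieval setup the abstract describes, matching a text description against a gallery of person images via CLIP's shared embedding space, can be sketched as follows. This is a minimal illustration only: the random vectors stand in for real CLIP image/text features, and no actual CLIP model is loaded.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    # L2-normalize rows so that dot products equal cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-in gallery of 5 person images and one text query, embedded in a
# shared 512-d space (the dimensionality a CLIP ViT-B/16 model produces).
image_feats = normalize(rng.standard_normal((5, 512)))
text_feat = normalize(rng.standard_normal((1, 512)))

# Rank gallery images by cosine similarity to the query caption;
# the top-ranked index is the retrieved person image.
scores = (text_feat @ image_feats.T).ravel()
ranking = np.argsort(-scores)  # best match first
```

With real CLIP features in place of the random vectors, `ranking[0]` would be the gallery image best matching the description, which is exactly what the R@K metrics below evaluate.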

Results

Task                         | Dataset    | Metric | Value | Model
Text based Person Retrieval  | ICFG-PEDES | R@1    | 65.05 | TBPS-CLIP (ViT-B/16)
Text based Person Retrieval  | ICFG-PEDES | R@5    | 80.34 | TBPS-CLIP (ViT-B/16)
Text based Person Retrieval  | ICFG-PEDES | R@10   | 85.47 | TBPS-CLIP (ViT-B/16)
Text based Person Retrieval  | ICFG-PEDES | mAP    | 39.83 | TBPS-CLIP (ViT-B/16)
Text based Person Retrieval  | RSTPReid   | R@1    | 61.95 | TBPS-CLIP (ViT-B/16)
Text based Person Retrieval  | RSTPReid   | R@5    | 83.55 | TBPS-CLIP (ViT-B/16)
Text based Person Retrieval  | RSTPReid   | R@10   | 88.75 | TBPS-CLIP (ViT-B/16)
Text based Person Retrieval  | RSTPReid   | mAP    | 48.26 | TBPS-CLIP (ViT-B/16)
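The R@K and mAP figures in the table can be computed as below. This is an illustrative toy computation, assuming the common TBPS setting where each text query has a single relevant gallery image; the function names are my own, not from the paper's code.

```python
import numpy as np

def recall_at_k(rankings, gt, k):
    # Fraction of queries whose ground-truth image appears in the top-k results
    return float(np.mean([g in r[:k] for r, g in zip(rankings, gt)]))

def mean_ap(rankings, gt):
    # With a single relevant item per query, AP = 1 / (rank of that item)
    return float(np.mean([1.0 / (list(r).index(g) + 1) for r, g in zip(rankings, gt)]))

rankings = [np.array([2, 0, 1]), np.array([1, 2, 0])]  # per-query ranked gallery ids
gt = [0, 1]  # the correct gallery id for each query

print(recall_at_k(rankings, gt, 1))  # 0.5: only the second query is right at rank 1
print(mean_ap(rankings, gt))         # (1/2 + 1/1) / 2 = 0.75
```

Reported values such as R@1 = 65.05 are these fractions expressed as percentages over the full test set.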

Related Papers

LINR-PCGC: Lossless Implicit Neural Representations for Point Cloud Geometry Compression (2025-07-21)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
A Survey of Context Engineering for Large Language Models (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)