Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Pedestrian Detection by Exemplar-Guided Contrastive Learning

Zebin Lin, Wenjie Pei, Fanglin Chen, David Zhang, Guangming Lu

Published: 2021-11-17 · Tasks: Contrastive Learning, Pedestrian Detection
Links: Paper · PDF

Abstract

Typical methods for pedestrian detection focus on either tackling mutual occlusions between crowded pedestrians or dealing with the various scales of pedestrians. Detecting pedestrians with substantial appearance diversity, such as different silhouettes, viewpoints, or clothing, remains a crucial challenge. Instead of learning each of these diverse pedestrian appearance features individually, as most existing methods do, we propose to perform contrastive learning to guide feature learning such that the semantic distance between pedestrians with different appearances in the learned feature space is minimized, eliminating the appearance diversity, while the distance between pedestrians and background is maximized. To improve the efficiency and effectiveness of contrastive learning, we construct an exemplar dictionary of representative pedestrian appearances as prior knowledge, which is used to build effective contrastive training pairs and thereby guide contrastive learning. Moreover, the constructed exemplar dictionary is further leveraged to evaluate the quality of pedestrian proposals during inference by measuring the semantic distance between each proposal and the exemplar dictionary. Extensive experiments on both daytime and nighttime pedestrian detection validate the effectiveness of the proposed method.
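The mechanism described above can be sketched in code. This is an illustrative toy example, not the authors' implementation: the feature vectors, function names, and the InfoNCE-style form of the loss are assumptions, chosen to show the two roles of the exemplar dictionary — guiding contrastive training pairs, and scoring proposals at inference by their best match against the dictionary.

```python
# Toy sketch (NOT the paper's code) of exemplar-guided contrastive learning:
# pedestrian features are pulled toward their nearest exemplar, background
# features are pushed away from all exemplars. All names/shapes are hypothetical.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def exemplar_contrastive_loss(ped_feats, bg_feats, exemplars, tau=0.1):
    """InfoNCE-style loss (an assumption): each pedestrian feature's positive
    is its closest exemplar, minimizing semantic distance across appearance
    diversity; background features act as negatives against every exemplar."""
    neg = sum(math.exp(cosine(b, e) / tau) for b in bg_feats for e in exemplars)
    total = 0.0
    for p in ped_feats:
        pos = math.exp(max(cosine(p, e) for e in exemplars) / tau)
        total += -math.log(pos / (pos + neg))
    return total / len(ped_feats)

def proposal_quality(feat, exemplars):
    """At inference, score a proposal by its best match in the dictionary."""
    return max(cosine(feat, e) for e in exemplars)

# Toy 2-D features: exemplars cluster near pedestrians; background points away.
exemplars = [[1.0, 0.1], [0.9, -0.1]]
peds = [[1.0, 0.0], [0.95, 0.05]]
bg = [[-1.0, 0.2], [-0.8, -0.3]]

loss = exemplar_contrastive_loss(peds, bg, exemplars)
print(proposal_quality(peds[0], exemplars))  # high: proposal matches an exemplar
print(proposal_quality(bg[0], exemplars))    # low: background far from dictionary
```

In this sketch, a well-matched pedestrian proposal scores near 1.0 against the dictionary while background scores much lower, which is the property the paper exploits for proposal quality evaluation.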

Results

Task                | Dataset         | Metric          | Value | Model
--------------------|-----------------|-----------------|-------|------
Autonomous Vehicles | TJU-Ped-traffic | ALL (miss rate) | 35.76 | EGCL
Autonomous Vehicles | TJU-Ped-traffic | HO (miss rate)  | 60.05 | EGCL
Autonomous Vehicles | TJU-Ped-traffic | R (miss rate)   | 19.73 | EGCL
Autonomous Vehicles | TJU-Ped-traffic | R+HO (miss rate)| 24.19 | EGCL
Autonomous Vehicles | TJU-Ped-campus  | ALL (miss rate) | 34.87 | EGCL
Autonomous Vehicles | TJU-Ped-campus  | HO (miss rate)  | 65.27 | EGCL
Autonomous Vehicles | TJU-Ped-campus  | R (miss rate)   | 24.84 | EGCL
Autonomous Vehicles | TJU-Ped-campus  | R+HO (miss rate)| 32.39 | EGCL
Pedestrian Detection| TJU-Ped-traffic | ALL (miss rate) | 35.76 | EGCL
Pedestrian Detection| TJU-Ped-traffic | HO (miss rate)  | 60.05 | EGCL
Pedestrian Detection| TJU-Ped-traffic | R (miss rate)   | 19.73 | EGCL
Pedestrian Detection| TJU-Ped-traffic | R+HO (miss rate)| 24.19 | EGCL
Pedestrian Detection| TJU-Ped-campus  | ALL (miss rate) | 34.87 | EGCL
Pedestrian Detection| TJU-Ped-campus  | HO (miss rate)  | 65.27 | EGCL
Pedestrian Detection| TJU-Ped-campus  | R (miss rate)   | 24.84 | EGCL
Pedestrian Detection| TJU-Ped-campus  | R+HO (miss rate)| 32.39 | EGCL

Related Papers

SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation (2025-07-15)
Latent Space Consistency for Sparse-View CT Reconstruction (2025-07-15)
Self-supervised pretraining of vision transformers for animal behavioral analysis and neural encoding (2025-07-13)