Weakly Aligned Cross-Modal Learning for Multispectral Pedestrian Detection

Lu Zhang, Xiangyu Zhu, Xiangyu Chen, Xu Yang, Zhen Lei, Zhi-Yong Liu

Published: 2019-01-09 · ICCV 2019
Tasks: Multispectral Object Detection · 2D Object Detection
Links: Paper · PDF

Abstract

Multispectral pedestrian detection has shown great advantages under poor illumination conditions, since the thermal modality provides information complementary to the color image. However, real multispectral data suffer from the position shift problem: the color-thermal image pairs are not strictly aligned, so the same object appears at different positions in the two modalities. In deep learning based methods, this makes it difficult to fuse the feature maps from both modalities and confuses CNN training. In this paper, we propose a novel Aligned Region CNN (AR-CNN) to handle weakly aligned multispectral data in an end-to-end manner. First, we design a Region Feature Alignment (RFA) module to capture the position shift and adaptively align the region features of the two modalities. Second, we present a new multimodal fusion method that performs feature re-weighting to select more reliable features and suppress less useful ones. In addition, we propose a novel RoI jitter strategy to improve robustness to the unexpected shift patterns of different devices and system settings. Finally, since our method depends on a new kind of labelling, bounding boxes that match each modality, we manually relabel the KAIST dataset by locating bounding boxes in both modalities and building their relationships, providing the new KAIST-Paired annotations. Extensive experiments on existing datasets demonstrate the effectiveness and robustness of the proposed method. Code and data are available at https://github.com/luzhang16/AR-CNN.
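
The abstract describes three mechanisms: region feature alignment that predicts and compensates the color-thermal position shift, confidence-based re-weighting of the fused region features, and RoI jitter for robustness to unknown shift patterns. The sketch below illustrates roughly how these ideas could look in PyTorch; the module names, feature dimensions, gate design, and the use of torchvision's roi_align are illustrative assumptions, not the authors' implementation (which is available at the linked repository).

import torch
import torch.nn as nn
from torchvision.ops import roi_align


class RegionFeatureAlignment(nn.Module):
    # Sketch of an RFA-style block (hypothetical): predict a per-RoI (dx, dy)
    # shift for the thermal branch, shift the thermal RoIs accordingly, and
    # re-pool so the two modalities' region features are spatially aligned.
    def __init__(self, in_channels=256, pool_size=7):
        super().__init__()
        self.pool_size = pool_size
        self.shift_head = nn.Sequential(
            nn.Linear(2 * in_channels * pool_size * pool_size, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2),  # predicted (dx, dy) in image pixels
        )

    def forward(self, feat_color, feat_thermal, rois, stride=16.0):
        # rois: (N, 5) tensor of [batch_index, x1, y1, x2, y2] in image coordinates.
        scale = 1.0 / stride
        pooled_c = roi_align(feat_color, rois, self.pool_size, spatial_scale=scale)
        pooled_t = roi_align(feat_thermal, rois, self.pool_size, spatial_scale=scale)
        shift = self.shift_head(torch.cat([pooled_c, pooled_t], dim=1).flatten(1))

        # Shift the thermal RoIs by the predicted offset and pool again.
        # In practice the predicted shift would need its own regression loss
        # (e.g. against a ground-truth cross-modal offset), since roi_align
        # does not backpropagate gradients through the box coordinates.
        shifted = rois.clone()
        shifted[:, [1, 3]] = shifted[:, [1, 3]] + shift[:, 0:1]
        shifted[:, [2, 4]] = shifted[:, [2, 4]] + shift[:, 1:2]
        pooled_t_aligned = roi_align(feat_thermal, shifted, self.pool_size, spatial_scale=scale)
        return pooled_c, pooled_t_aligned


class ReweightedFusion(nn.Module):
    # Sketch of confidence-based re-weighting (hypothetical gate design): score
    # each modality's region feature and fuse them with softmax weights, so the
    # more reliable modality dominates the fused feature.
    def __init__(self, in_channels=256):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, 1),
        )

    def forward(self, feat_c, feat_t):
        scores = torch.cat([self.gate(feat_c), self.gate(feat_t)], dim=1)  # (N, 2)
        w = torch.softmax(scores, dim=1)
        return w[:, 0, None, None, None] * feat_c + w[:, 1, None, None, None] * feat_t


def roi_jitter(rois, max_shift=4.0):
    # Sketch of an RoI-jitter-style training augmentation: randomly perturb RoI
    # positions so the detector does not overfit to one fixed shift pattern.
    noise = (torch.rand(rois.shape[0], 2, device=rois.device) * 2.0 - 1.0) * max_shift
    jittered = rois.clone()
    jittered[:, [1, 3]] = jittered[:, [1, 3]] + noise[:, 0:1]
    jittered[:, [2, 4]] = jittered[:, [2, 4]] + noise[:, 1:2]
    return jittered

In the paper these ideas are integrated into an end-to-end two-stream detector; the sketch shows the pieces in isolation only to make the data flow concrete.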

Results

Task | Dataset | Metric | Value | Model
2D Object Detection | DroneVehicle | Val/mAP50 | 71.6 | AR-CNN
Multispectral Object Detection | KAIST Multispectral Pedestrian Detection Benchmark | All Miss Rate | 34.95 | AR-CNN

Related Papers

YOLOv11-RGBT: Towards a Comprehensive Single-Stage Multispectral Object Detection Framework (2025-06-17)
Multispectral Detection Transformer with Infrared-Centric Sensor Fusion (2025-05-21)
VisionReasoner: Unified Visual Perception and Reasoning via Reinforcement Learning (2025-05-17)
GATE3D: Generalized Attention-based Task-synergized Estimation in 3D* (2025-04-15)
Safe-Construct: Redefining Construction Safety Violation Recognition as 3D Multi-View Engagement Task (2025-04-15)
Distributed LLMs and Multimodal Large Language Models: A Survey on Advances, Challenges, and Future Directions (2025-03-20)
2D Object Detection: A Survey (2025-03-07)
AI-Driven Relocation Tracking in Dynamic Kitchen Environments (2025-03-03)