Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


UCMCTrack: Multi-Object Tracking with Uniform Camera Motion Compensation

Kefu Yi, Kai Luo, Xiaolei Luo, Jiangui Huang, Hao Wu, Rongdong Hu, Wei Hao

2023-12-14 · Motion Compensation · Multi-Object Tracking · Object Tracking · Multiple Object Tracking

Paper · PDF · Code (official)

Abstract

Multi-object tracking (MOT) in video sequences remains a challenging task, especially in scenarios with significant camera movements. This is because targets can drift considerably on the image plane, leading to erroneous tracking outcomes. Addressing such challenges typically requires supplementary appearance cues or Camera Motion Compensation (CMC). While these strategies are effective, they also introduce a considerable computational burden, posing challenges for real-time MOT. In response to this, we introduce UCMCTrack, a novel motion model-based tracker robust to camera movements. Unlike conventional CMC that computes compensation parameters frame-by-frame, UCMCTrack consistently applies the same compensation parameters throughout a video sequence. It employs a Kalman filter on the ground plane and introduces the Mapped Mahalanobis Distance (MMD) as an alternative to the traditional Intersection over Union (IoU) distance measure. By leveraging projected probability distributions on the ground plane, our approach efficiently captures motion patterns and adeptly manages uncertainties introduced by homography projections. Remarkably, UCMCTrack, relying solely on motion cues, achieves state-of-the-art performance across a variety of challenging datasets, including MOT17, MOT20, DanceTrack and KITTI. More details and code are available at https://github.com/corfyi/UCMCTrack
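As a rough illustration of the association cost described above (a minimal sketch, not the paper's implementation — variable names and shapes are assumptions), a Mahalanobis-style distance on the ground plane compares a detection, projected through the homography, against a track's Kalman-predicted position, normalized by the innovation covariance. Unlike IoU, this accounts for the uncertainty that the projection introduces.

```python
import numpy as np

def mapped_mahalanobis_distance(z, z_pred, S):
    """Squared Mahalanobis distance between a detection mapped to the
    ground plane (z, a 2-vector) and a track's Kalman-predicted ground-plane
    position (z_pred), normalized by the innovation covariance S (2x2).
    Shapes and names are illustrative, not from the UCMCTrack codebase."""
    d = z - z_pred                          # innovation (residual)
    return float(d @ np.linalg.solve(S, d)) # d^T S^{-1} d

# Smaller distance -> better detection/track match in the association step.
cost = mapped_mahalanobis_distance(
    np.array([2.0, 3.0]),   # detection projected to the ground plane
    np.array([2.1, 2.8]),   # Kalman-predicted track position
    np.array([[0.5, 0.0],
              [0.0, 0.5]]), # innovation covariance
)
```

In a full tracker, such pairwise costs would fill a detection-by-track matrix fed to an assignment solver (e.g. the Hungarian algorithm); a large covariance S down-weights the same Euclidean error, which is how projection uncertainty enters the matching.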

Results

Task                     | Dataset                     | Metric | Value | Model
-------------------------|-----------------------------|--------|-------|----------
Video                    | KITTI Test (Online Methods) | HOTA   | 77.1  | UCMCTrack
Video                    | KITTI Test (Online Methods) | MOTA   | 90.4  | UCMCTrack
Multi-Object Tracking    | MOT20                       | HOTA   | 62.8  | UCMCTrack
Multi-Object Tracking    | MOT20                       | IDF1   | 77.4  | UCMCTrack
Multi-Object Tracking    | MOT17                       | HOTA   | 65.8  | UCMCTrack
Multi-Object Tracking    | MOT17                       | IDF1   | 81.1  | UCMCTrack
Multi-Object Tracking    | MOT17                       | MOTA   | 80.5  | UCMCTrack
Multi-Object Tracking    | DanceTrack                  | HOTA   | 63.6  | UCMCTrack
Multi-Object Tracking    | DanceTrack                  | IDF1   | 65    | UCMCTrack
Object Tracking          | MOT20                       | HOTA   | 62.8  | UCMCTrack
Object Tracking          | MOT20                       | IDF1   | 77.4  | UCMCTrack
Object Tracking          | MOT17                       | HOTA   | 65.8  | UCMCTrack
Object Tracking          | MOT17                       | IDF1   | 81.1  | UCMCTrack
Object Tracking          | MOT17                       | MOTA   | 80.5  | UCMCTrack
Object Tracking          | DanceTrack                  | HOTA   | 63.6  | UCMCTrack
Object Tracking          | DanceTrack                  | IDF1   | 65    | UCMCTrack
Object Tracking          | KITTI Test (Online Methods) | HOTA   | 77.1  | UCMCTrack
Object Tracking          | KITTI Test (Online Methods) | MOTA   | 90.4  | UCMCTrack
Multiple Object Tracking | KITTI Test (Online Methods) | HOTA   | 77.1  | UCMCTrack
Multiple Object Tracking | KITTI Test (Online Methods) | MOTA   | 90.4  | UCMCTrack

Related Papers

- MVA 2025 Small Multi-Object Tracking for Spotting Birds Challenge: Dataset, Methods, and Results (2025-07-17)
- YOLOv8-SMOT: An Efficient and Robust Framework for Real-Time Small Object Tracking via Slice-Assisted Training and Adaptive Association (2025-07-16)
- HiM2SAM: Enhancing SAM2 with Hierarchical Motion Estimation and Memory Optimization towards Long-term Tracking (2025-07-10)
- Robustifying 3D Perception through Least-Squares Multi-Agent Graphs Object Tracking (2025-07-07)
- UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions (2025-07-01)
- Mamba-FETrack V2: Revisiting State Space Model for Frame-Event based Visual Object Tracking (2025-06-30)
- Visual and Memory Dual Adapter for Multi-Modal Object Tracking (2025-06-30)
- R1-Track: Direct Application of MLLMs to Visual Object Tracking via Reinforcement Learning (2025-06-27)