Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Bridging the Gap Between End-to-end and Non-End-to-end Multi-Object Tracking

Feng Yan, Weixin Luo, Yujie Zhong, Yiyang Gan, Lin Ma

2023-05-22 · Multi-Object Tracking · Object Tracking · Video Object Tracking

Paper · PDF · Code · Code (official)

Abstract

Existing end-to-end Multi-Object Tracking (e2e-MOT) methods have not surpassed non-end-to-end tracking-by-detection methods. One potential reason is their label assignment strategy during training, which consistently binds tracked objects to tracking queries and assigns the few newborn objects to detection queries. With one-to-one bipartite matching, such an assignment yields unbalanced training, i.e., scarce positive samples for detection queries, especially in an enclosed scene, since the majority of newborns come on stage at the beginning of a video. Thus, e2e-MOT is more prone to terminating tracks without renewal or re-initialization than tracking-by-detection methods. To alleviate this problem, we present Co-MOT, a simple and effective method that facilitates e2e-MOT through a novel coopetition label assignment with a shadow concept. Specifically, we add tracked objects to the matching targets for detection queries when performing label assignment for training the intermediate decoders. For query initialization, we expand each query into a set of shadow counterparts with limited disturbance to itself. With extensive ablations, Co-MOT achieves superior performance without extra costs, e.g., 69.4% HOTA on DanceTrack and 52.8% TETA on BDD100K. Impressively, Co-MOT requires only 38% of the FLOPs of MOTRv2 to attain similar performance, resulting in 1.4× faster inference.
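The two ideas in the abstract — letting detection queries also match tracked objects in the intermediate decoder layers, and expanding each query with slightly perturbed "shadow" copies — can be sketched as follows. This is a minimal illustration under assumed shapes and names (`coopetition_targets`, `expand_with_shadows`, the noise scale `sigma`), not the authors' actual implementation:

```python
import numpy as np

def coopetition_targets(tracked_ids, newborn_ids, intermediate):
    """Target set for detection queries at one decoder layer.

    Conventional e2e-MOT assigns detection queries only to newborn
    objects, so positives are scarce. The coopetition assignment
    additionally exposes tracked objects to detection queries in the
    *intermediate* decoder layers; the final layer keeps the
    conventional split so inference behavior is unchanged.
    """
    if intermediate:
        return list(newborn_ids) + list(tracked_ids)  # richer supervision
    return list(newborn_ids)

def expand_with_shadows(queries, num_shadows=2, sigma=1e-3, seed=0):
    """Expand each query embedding into a set of shadow counterparts
    by adding limited Gaussian disturbance (sigma is an assumption)."""
    rng = np.random.default_rng(seed)
    q = np.asarray(queries, dtype=float)              # (N, D)
    noise = sigma * rng.standard_normal((q.shape[0], num_shadows, q.shape[1]))
    shadows = q[:, None, :] + noise                   # (N, S, D)
    # Keep the original query as the first member of each shadow set.
    return np.concatenate([q[:, None, :], shadows], axis=1)  # (N, S+1, D)
```

During training, each shadow set would be supervised toward the same target as its source query, which is how the method adds positive samples without extra inference cost.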

Results

Task                  | Dataset      | Metric | Value | Model
Video                 | SoccerNet-v2 | HOTA   | 69.54 | CO-MOT
Multi-Object Tracking | BDD100K      | AssocA | 56.2  | CO-MOT
Multi-Object Tracking | BDD100K      | ClsA   | 63.6  | CO-MOT
Multi-Object Tracking | BDD100K      | LocA   | 38.7  | CO-MOT
Multi-Object Tracking | BDD100K      | TETA   | 52.8  | CO-MOT
Multi-Object Tracking | MOT17        | AssA   | 60.6  | CO-MOT
Multi-Object Tracking | MOT17        | DetA   | 59.5  | CO-MOT
Multi-Object Tracking | MOT17        | HOTA   | 60.1  | CO-MOT
Multi-Object Tracking | MOT17        | IDF1   | 72.7  | CO-MOT
Multi-Object Tracking | MOT17        | MOTA   | 72.6  | CO-MOT
Multi-Object Tracking | DanceTrack   | AssA   | 58.9  | CO-MOT
Multi-Object Tracking | DanceTrack   | DetA   | 82.1  | CO-MOT
Multi-Object Tracking | DanceTrack   | HOTA   | 69.4  | CO-MOT
Multi-Object Tracking | DanceTrack   | IDF1   | 71.9  | CO-MOT
Multi-Object Tracking | DanceTrack   | MOTA   | 91.2  | CO-MOT
Object Tracking       | BDD100K      | AssocA | 56.2  | CO-MOT
Object Tracking       | BDD100K      | ClsA   | 63.6  | CO-MOT
Object Tracking       | BDD100K      | LocA   | 38.7  | CO-MOT
Object Tracking       | BDD100K      | TETA   | 52.8  | CO-MOT
Object Tracking       | MOT17        | AssA   | 60.6  | CO-MOT
Object Tracking       | MOT17        | DetA   | 59.5  | CO-MOT
Object Tracking       | MOT17        | HOTA   | 60.1  | CO-MOT
Object Tracking       | MOT17        | IDF1   | 72.7  | CO-MOT
Object Tracking       | MOT17        | MOTA   | 72.6  | CO-MOT
Object Tracking       | DanceTrack   | AssA   | 58.9  | CO-MOT
Object Tracking       | DanceTrack   | DetA   | 82.1  | CO-MOT
Object Tracking       | DanceTrack   | HOTA   | 69.4  | CO-MOT
Object Tracking       | DanceTrack   | IDF1   | 71.9  | CO-MOT
Object Tracking       | DanceTrack   | MOTA   | 91.2  | CO-MOT
Object Tracking       | SoccerNet-v2 | HOTA   | 69.54 | CO-MOT

Related Papers

MVA 2025 Small Multi-Object Tracking for Spotting Birds Challenge: Dataset, Methods, and Results (2025-07-17)
YOLOv8-SMOT: An Efficient and Robust Framework for Real-Time Small Object Tracking via Slice-Assisted Training and Adaptive Association (2025-07-16)
HiM2SAM: Enhancing SAM2 with Hierarchical Motion Estimation and Memory Optimization towards Long-term Tracking (2025-07-10)
Robustifying 3D Perception through Least-Squares Multi-Agent Graphs Object Tracking (2025-07-07)
UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions (2025-07-01)
Mamba-FETrack V2: Revisiting State Space Model for Frame-Event based Visual Object Tracking (2025-06-30)
Visual and Memory Dual Adapter for Multi-Modal Object Tracking (2025-06-30)
R1-Track: Direct Application of MLLMs to Visual Object Tracking via Reinforcement Learning (2025-06-27)