Papers With Code 2


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Detection Recovery in Online Multi-Object Tracking with Sparse Graph Tracker

Jeongseok Hyun, Myunggu Kang, Dongyoon Wee, Dit-yan Yeung

2022-05-02 · Motion Prediction · Multi-Object Tracking · Object Tracking · Online Multi-Object Tracking · Object Detection
Paper · PDF · Code (official)

Abstract

In existing joint detection and tracking methods, pairwise relational features are used to match previous tracklets to current detections. However, these features may not be discriminative enough for a tracker to identify a target among a large number of detections. Selecting only high-scored detections for tracking may also cause low-confidence detections to be missed, and in the online setting the resulting tracklet disconnections cannot be recovered. In this regard, we present Sparse Graph Tracker (SGT), a novel online graph tracker that uses higher-order relational features, made more discriminative by aggregating the features of neighboring detections and their relations. SGT converts video data into a graph in which detections, their connections, and the relational features of two connected detections are represented by nodes, edges, and edge features, respectively. The strong edge features allow SGT to select tracking candidates from the top-K scored detections with a large K; as a result, even low-scored detections can be tracked, and missed detections are recovered. Robustness to the choice of K is demonstrated through extensive experiments. On the MOT16/17/20 and HiEve Challenge benchmarks, SGT outperforms state-of-the-art trackers at real-time inference speed; in particular, it shows a large MOTA improvement on MOT20 and the HiEve Challenge. Code is available at https://github.com/HYUNJS/SGT.
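The graph construction described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the official SGT implementation: `build_tracking_graph`, the nearest-neighbor edge rule, and the feature-difference edge feature are all assumptions made for clarity; the paper's actual edge features and candidate selection are more elaborate.

```python
import numpy as np

def build_tracking_graph(det_features, det_scores, k, num_neighbors=2):
    """Toy sketch of SGT-style graph construction (hypothetical helper).

    Detections become nodes; each node is linked to its nearest neighbors
    in feature space, and each edge carries a relational feature (here,
    simply the difference of the two node features).
    """
    # Keep the top-K scored detections as tracking candidates, so that
    # low-scored (but possibly correct) detections are not discarded early.
    order = np.argsort(det_scores)[::-1][:k]
    nodes = det_features[order]

    edges, edge_feats = [], []
    for i in range(len(nodes)):
        # Distances from node i to every other candidate node.
        dists = np.linalg.norm(nodes - nodes[i], axis=1)
        dists[i] = np.inf  # exclude the self-loop
        for j in np.argsort(dists)[:num_neighbors]:
            edges.append((i, int(j)))
            # Relational edge feature between the two connected nodes.
            edge_feats.append(nodes[i] - nodes[int(j)])
    return nodes, edges, np.array(edge_feats)

# Toy example: 5 detections with 4-dim appearance features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 4))
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.95])
nodes, edges, edge_feats = build_tracking_graph(feats, scores, k=4)
```

With `k=4` out of 5 detections, the 0.2-scored detection is dropped but the 0.4-scored one is kept as a candidate, illustrating how a large K lets low-scored detections remain trackable.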

Results

Task                   Dataset  Metric  Value  Model
Multi-Object Tracking  HiEve    IDF1    53.7   SGT
Multi-Object Tracking  HiEve    MOTA    47.2   SGT
Multi-Object Tracking  MOT20    HOTA    57     SGT
Multi-Object Tracking  MOT20    IDF1    70.6   SGT
Multi-Object Tracking  MOT20    MOTA    72.8   SGT
Multi-Object Tracking  MOT17    HOTA    60.8   SGT
Multi-Object Tracking  MOT17    IDF1    72.8   SGT
Multi-Object Tracking  MOT17    MOTA    76.4   SGT
Multi-Object Tracking  MOT16    IDF1    73.5   SGT
Multi-Object Tracking  MOT16    MOTA    76.8   SGT

Related Papers

- MVA 2025 Small Multi-Object Tracking for Spotting Birds Challenge: Dataset, Methods, and Results (2025-07-17)
- A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
- RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)
- Decoupled PROB: Decoupled Query Initialization Tasks and Objectness-Class Learning for Open World Object Detection (2025-07-17)
- Dual LiDAR-Based Traffic Movement Count Estimation at a Signalized Intersection: Deployment, Data Collection, and Preliminary Analysis (2025-07-17)
- YOLOv8-SMOT: An Efficient and Robust Framework for Real-Time Small Object Tracking via Slice-Assisted Training and Adaptive Association (2025-07-16)
- Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
- Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping (2025-07-15)