Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


RTracker: Recoverable Tracking via PN Tree Structured Memory

Yuqing Huang, Xin Li, Zikun Zhou, YaoWei Wang, Zhenyu He, Ming-Hsuan Yang

2024-03-28 · CVPR 2024 · Visual Object Tracking · Visual Tracking

Paper · PDF · Code (official)

Abstract

Existing tracking methods mainly focus on learning better target representations or developing more robust prediction models to improve tracking performance. While tracking performance has improved significantly, target loss still occurs frequently due to tracking failures, complete occlusion, or out-of-view situations. Considerably less attention has been paid to the self-recovery ability of tracking methods, which is crucial for practical applications. To this end, we propose a recoverable tracking framework, RTracker, that uses a tree-structured memory to dynamically associate a tracker and a detector to enable self-recovery. Specifically, we propose a Positive-Negative (PN) tree-structured memory to chronologically store and maintain positive and negative target samples. On top of the PN tree memory, we develop corresponding walking rules for determining the state of the target and define a set of control flows to unite the tracker and the detector in different tracking scenarios. Our core idea is to use the support samples of the positive and negative target categories to establish a relative distance-based criterion for a reliable assessment of target loss. Favorable performance in comparison with state-of-the-art methods on numerous challenging benchmarks demonstrates the effectiveness of the proposed algorithm.
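The relative distance-based criterion described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the margin parameter, and the use of Euclidean distance over feature vectors are all assumptions; the paper's criterion operates on support samples drawn from its PN tree memory.

```python
import numpy as np

def assess_target_loss(candidate_feat, pos_supports, neg_supports, margin=0.0):
    """Hypothetical sketch of a relative distance-based target-loss test.

    Flags the target as lost when the candidate embedding lies closer to the
    negative (background/distractor) support samples than to the positive
    (target) support samples, mirroring the criterion described in the abstract.

    candidate_feat: (D,) feature of the tracker's current prediction.
    pos_supports:   (Np, D) stored positive target samples.
    neg_supports:   (Nn, D) stored negative samples.
    """
    # Distance to the nearest support sample in each category.
    d_pos = np.linalg.norm(pos_supports - candidate_feat, axis=1).min()
    d_neg = np.linalg.norm(neg_supports - candidate_feat, axis=1).min()
    # Relative criterion: lost if the nearest negative support is closer
    # (by at least `margin`) than the nearest positive support.
    return bool(d_pos - d_neg > margin)

# Toy usage with 2-D features: positives cluster near the origin,
# negatives near (3, 3).
pos = np.array([[0.0, 0.0], [0.1, -0.1], [-0.05, 0.05]])
neg = np.array([[3.0, 3.0], [2.9, 3.1], [3.1, 2.9]])
print(assess_target_loss(np.array([0.05, -0.02]), pos, neg))  # near positives -> False
print(assess_target_loss(np.array([2.95, 3.05]), pos, neg))   # near negatives -> True
```

In the full framework, this verdict would drive the control flow between tracker and detector: a "lost" decision hands control to the detector to re-find the target, after which normal tracking resumes.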

Results

All results are for RTracker-L; identical entries are reported under both the Object Tracking and Visual Object Tracking tasks.

| Dataset   | Metric               | Value | Model      |
|-----------|----------------------|-------|------------|
| TNL2K     | AUC                  | 60.6  | RTracker-L |
| TNL2K     | Precision            | 63.7  | RTracker-L |
| LaSOT     | AUC                  | 74.7  | RTracker-L |
| LaSOT     | Normalized Precision | 84.5  | RTracker-L |
| GOT-10k   | Average Overlap      | 77.9  | RTracker-L |
| GOT-10k   | Success Rate 0.5     | 87    | RTracker-L |
| GOT-10k   | Success Rate 0.75    | 76.9  | RTracker-L |
| LaSOT-ext | AUC                  | 54.9  | RTracker-L |
| LaSOT-ext | Normalized Precision | 65.5  | RTracker-L |
| LaSOT-ext | Precision            | 62.7  | RTracker-L |
| VideoCube | Normalized Precision | 81.5  | RTracker-L |
| VideoCube | Precision            | 63.2  | RTracker-L |
| VideoCube | Success Rate         | 69.6  | RTracker-L |

Related Papers

What You Have is What You Track: Adaptive and Robust Multimodal Tracking (2025-07-08)
UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions (2025-07-01)
Mamba-FETrack V2: Revisiting State Space Model for Frame-Event based Visual Object Tracking (2025-06-30)
R1-Track: Direct Application of MLLMs to Visual Object Tracking via Reinforcement Learning (2025-06-27)
Exploiting Lightweight Hierarchical ViT and Dynamic Framework for Efficient Visual Tracking (2025-06-25)
Comparison of Two Methods for Stationary Incident Detection Based on Background Image (2025-06-17)
Towards Effective and Efficient Adversarial Defense with Diffusion Models for Robust Visual Tracking (2025-05-31)
TrackVLA: Embodied Visual Tracking in the Wild (2025-05-29)