Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


High-Performance Long-Term Tracking with Meta-Updater

Kenan Dai, Yunhua Zhang, Dong Wang, Jianhua Li, Huchuan Lu, Xiaoyun Yang

Published: 2020-04-01 · CVPR 2020
Tasks: Visual Object Tracking, Visual Tracking
Links: Paper · PDF · Code (official)

Abstract

Long-term visual tracking has drawn increasing attention because it is much closer to practical applications than short-term tracking. Most top-ranked long-term trackers adopt offline-trained Siamese architectures and thus cannot benefit from the great progress of short-term trackers with online updates. However, it is quite risky to straightforwardly introduce online-update-based trackers to the long-term problem, because observations over a long sequence are uncertain and noisy. In this work, we propose a novel offline-trained Meta-Updater to address an important but unsolved problem: is the tracker ready for updating in the current frame? The proposed meta-updater effectively integrates geometric, discriminative, and appearance cues in a sequential manner, and then mines the sequential information with a designed cascaded LSTM module. Our meta-updater learns a binary output to guide the tracker's update and can be easily embedded into different trackers. This work also introduces a long-term tracking framework consisting of an online local tracker, an online verifier, a SiamRPN-based re-detector, and our meta-updater. Extensive experimental results on the VOT2018LT, VOT2019LT, OxUvALT, TLP, and LaSOT benchmarks show that our tracker performs remarkably better than competing algorithms. Our project is available at https://github.com/Daikenan/LTMU.
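The gating idea in the abstract — buffer per-frame cues, mine the sequence, and emit a binary "safe to update" flag — can be sketched in a few lines. This is only an illustration of the control flow, not the paper's implementation: `MetaUpdaterSketch`, `score_fn`, and `mean_confidence` are hypothetical names, and a simple windowed scorer stands in for the cascaded LSTM.

```python
from collections import deque

class MetaUpdaterSketch:
    """Toy stand-in for the meta-updater: keeps a short history of
    per-frame cue vectors (geometric/discriminative/appearance) and
    returns a binary flag telling the base tracker whether it is safe
    to run its online update this frame. The real model mines the
    history with a cascaded LSTM; here a pluggable score_fn plays
    that role.
    """

    def __init__(self, score_fn, window=20, threshold=0.5):
        self.history = deque(maxlen=window)  # sequential cue buffer
        self.score_fn = score_fn             # stands in for the cascaded LSTM
        self.threshold = threshold

    def should_update(self, cue_vector):
        """Append this frame's cues, score the sequence, gate the update."""
        self.history.append(cue_vector)
        score = self.score_fn(list(self.history))
        return score >= self.threshold  # binary output guiding the tracker


# Hypothetical scorer: average the first cue channel over the window.
def mean_confidence(history):
    return sum(cue[0] for cue in history) / len(history)


gate = MetaUpdaterSketch(mean_confidence, window=5, threshold=0.5)
# Confidence drops over the sequence; the gate stops updates once the
# windowed score falls below the threshold.
decisions = [gate.should_update([conf, 0.0])
             for conf in (0.9, 0.8, 0.2, 0.1, 0.05)]
# decisions -> [True, True, True, True, False]
```

In a real tracker loop, the flag would wrap the online-update call (`if gate.should_update(cues): tracker.update(frame)`), so noisy frames accumulate in the history but never corrupt the model.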

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Object Tracking | LaSOT-ext | AUC | 41.4 | LTMU |
| Object Tracking | LaSOT-ext | Normalized Precision | 49.9 | LTMU |
| Object Tracking | LaSOT-ext | Precision | 47.3 | LTMU |
| Visual Object Tracking | LaSOT-ext | AUC | 41.4 | LTMU |
| Visual Object Tracking | LaSOT-ext | Normalized Precision | 49.9 | LTMU |
| Visual Object Tracking | LaSOT-ext | Precision | 47.3 | LTMU |

Related Papers

- What You Have is What You Track: Adaptive and Robust Multimodal Tracking (2025-07-08)
- UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions (2025-07-01)
- Mamba-FETrack V2: Revisiting State Space Model for Frame-Event based Visual Object Tracking (2025-06-30)
- R1-Track: Direct Application of MLLMs to Visual Object Tracking via Reinforcement Learning (2025-06-27)
- Exploiting Lightweight Hierarchical ViT and Dynamic Framework for Efficient Visual Tracking (2025-06-25)
- Comparison of Two Methods for Stationary Incident Detection Based on Background Image (2025-06-17)
- Towards Effective and Efficient Adversarial Defense with Diffusion Models for Robust Visual Tracking (2025-05-31)
- TrackVLA: Embodied Visual Tracking in the Wild (2025-05-29)