Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Deeper and Wider Siamese Networks for Real-Time Visual Tracking

Zhipeng Zhang, Houwen Peng

2019-01-07 · CVPR 2019 · Tasks: Visual Object Tracking, Visual Tracking, Real-Time Visual Tracking, Video Object Tracking
Links: Paper · PDF · Code (official)

Abstract

Siamese networks have drawn great attention in visual tracking because of their balanced accuracy and speed. However, the backbone networks used in Siamese trackers are relatively shallow, such as AlexNet [18], which does not fully take advantage of the capability of modern deep neural networks. In this paper, we investigate how to leverage deeper and wider convolutional neural networks to enhance tracking robustness and accuracy. We observe that directly replacing the backbone with an existing powerful architecture, such as ResNet [14] or Inception [33], does not bring improvements. The main reasons are that 1) large increases in the receptive field of neurons lead to reduced feature discriminability and localization precision; and 2) the network padding for convolutions induces a positional bias in learning. To address these issues, we propose new residual modules to eliminate the negative impact of padding, and further design new architectures using these modules with controlled receptive field size and network stride. The designed architectures are lightweight and guarantee real-time tracking speed when applied to SiamFC [2] and SiamRPN [20]. Experiments show that solely due to the proposed network architectures, our SiamFC+ and SiamRPN+ obtain up to 9.8%/5.7% (AUC), 23.3%/8.8% (EAO) and 24.4%/25.0% (EAO) relative improvements over the original versions [2, 20] on the OTB-15, VOT-16 and VOT-17 datasets, respectively.
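The abstract's central design constraint is keeping the backbone's total stride and receptive field under control while deepening the network. Both quantities can be computed layer by layer from kernel sizes and strides. The sketch below illustrates that calculation; the layer specs are hypothetical examples for illustration, not the paper's exact SiamFC+/SiamRPN+ architectures.

```python
# Hedged sketch: accumulate the receptive field (RF) and total stride of
# a convolutional stack, the two quantities the paper argues must be
# controlled when replacing a shallow Siamese backbone with a deeper one.

def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, applied in order.
    Returns (receptive_field, total_stride) at the final feature map."""
    rf, jump = 1, 1  # start from a single input pixel, unit stride
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the RF by (k-1) * current jump
        jump *= s             # the effective stride grows multiplicatively
    return rf, jump

# A shallow AlexNet-like stem (hypothetical spec) keeps RF and stride moderate:
shallow = [(11, 2), (3, 2), (5, 1), (3, 2), (3, 1), (3, 1)]
print(receptive_field(shallow))  # (71, 8)

# Naively stacking many more strided 3x3 layers blows both up, which the
# paper links to reduced localization precision:
deep = shallow + [(3, 2), (3, 1), (3, 1), (3, 2), (3, 1), (3, 1)]
print(receptive_field(deep))
```

This is why the proposed modules fix the stride and bound the receptive field rather than simply appending layers: the template's localization precision degrades once the receptive field far exceeds the exemplar size.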

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Video | NT-VOT211 | AUC | 35.18 | SiamDW |
| Video | NT-VOT211 | Precision | 46.18 | SiamDW |
| Object Tracking | VOT2017 | Expected Average Overlap (EAO) | 0.3 | SiamRPN+ |
| Object Tracking | VOT2016 | Expected Average Overlap (EAO) | 0.37 | SiamRPN+ |
| Object Tracking | NT-VOT211 | AUC | 35.18 | SiamDW |
| Object Tracking | NT-VOT211 | Precision | 46.18 | SiamDW |
| Visual Object Tracking | VOT2017 | Expected Average Overlap (EAO) | 0.3 | SiamRPN+ |
| Visual Object Tracking | VOT2016 | Expected Average Overlap (EAO) | 0.37 | SiamRPN+ |

Related Papers

- HiM2SAM: Enhancing SAM2 with Hierarchical Motion Estimation and Memory Optimization towards Long-term Tracking (2025-07-10)
- What You Have is What You Track: Adaptive and Robust Multimodal Tracking (2025-07-08)
- UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions (2025-07-01)
- Mamba-FETrack V2: Revisiting State Space Model for Frame-Event based Visual Object Tracking (2025-06-30)
- R1-Track: Direct Application of MLLMs to Visual Object Tracking via Reinforcement Learning (2025-06-27)
- Exploiting Lightweight Hierarchical ViT and Dynamic Framework for Efficient Visual Tracking (2025-06-25)
- Comparison of Two Methods for Stationary Incident Detection Based on Background Image (2025-06-17)
- Towards Effective and Efficient Adversarial Defense with Diffusion Models for Robust Visual Tracking (2025-05-31)