Papers With Code

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers

Zhiyu Zhu, Junhui Hou, Dapeng Oliver Wu

2023-07-09 · ICCV 2023 · Object Tracking

Paper · PDF · Code · Code (official)

Abstract

This paper addresses the problem of cross-modal object tracking from RGB videos and event data. Rather than constructing a complex cross-modal fusion network, we explore the great potential of a pre-trained vision Transformer (ViT). In particular, we carefully investigate plug-and-play training augmentations that encourage the ViT to bridge the vast distribution gap between the two modalities, enabling comprehensive cross-modal information interaction and thus enhancing its tracking ability. Specifically, we propose a mask modeling strategy that randomly masks a specific modality of some tokens, enforcing tokens from different modalities to interact proactively. To mitigate network oscillations resulting from the masking strategy and further amplify its positive effect, we then propose a theoretically grounded orthogonal high-rank loss to regularize the attention matrix. Extensive experiments demonstrate that our plug-and-play training augmentation techniques can significantly boost state-of-the-art one-stream and two-stream trackers in terms of both tracking precision and success rate. Our new perspective and findings will potentially bring insights to the field of leveraging powerful pre-trained ViTs to model cross-modal data. The code will be publicly available.
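The two augmentations described above can be sketched in a few lines. The following is an illustrative NumPy sketch, not the authors' implementation: `mask_modality_tokens` randomly zeroes out one modality at a fraction of token positions (the paper uses learnable mask tokens inside a ViT), and `orthogonal_high_rank_loss` shows one plausible form of a regularizer that pushes the rows of an attention matrix toward mutual orthogonality, and hence the matrix toward high rank. The function names and the exact loss form are assumptions for illustration.

```python
import numpy as np

def mask_modality_tokens(rgb_tokens, event_tokens, mask_ratio=0.3, rng=None):
    """At a random subset of token positions, mask exactly one modality
    (zero vector here; the paper uses learnable mask tokens), so the
    network must recover the missing information from the other modality.
    Shapes: (num_tokens, dim). Illustrative sketch only."""
    rng = rng or np.random.default_rng(0)
    out_rgb = rgb_tokens.copy()
    out_evt = event_tokens.copy()
    n = rgb_tokens.shape[0]
    idx = rng.permutation(n)[: int(n * mask_ratio)]
    for i in idx:
        # Mask one modality at random per chosen position, never both,
        # forcing cross-modal interaction to fill the gap.
        if rng.random() < 0.5:
            out_rgb[i] = 0.0
        else:
            out_evt[i] = 0.0
    return out_rgb, out_evt

def orthogonal_high_rank_loss(attn):
    """One plausible orthogonality-style regularizer (an assumption, not
    the paper's exact loss): penalize the deviation of the scale-normalized
    Gram matrix A A^T from the identity. Mutually orthogonal rows make the
    Gram matrix diagonal, so the penalty vanishes only when the attention
    matrix is full-rank with orthogonal rows."""
    n = attn.shape[0]
    gram = attn @ attn.T
    gram = gram / np.trace(gram) * n  # normalize overall scale
    return float(np.linalg.norm(gram - np.eye(n), "fro") ** 2)
```

For instance, a rank-1 attention matrix (every row identical) incurs a large penalty, while an identity attention matrix incurs none, which matches the intuition that the loss discourages collapsed, low-rank attention patterns.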

Results

Task | Dataset | Metric | Value | Model
Object Tracking | COESOT | Precision Rate | 73.8 | HR-CEUTrack-Large
Object Tracking | COESOT | Success Rate | 65.0 | HR-CEUTrack-Large
Object Tracking | COESOT | Precision Rate | 71.9 | HR-CEUTrack-Base
Object Tracking | COESOT | Success Rate | 63.2 | HR-CEUTrack-Base
Object Tracking | FE108 | Averaged Precision | 96.2 | HR-MonTrack-Base
Object Tracking | FE108 | Success Rate | 68.5 | HR-MonTrack-Base
Object Tracking | FE108 | Averaged Precision | 95.3 | HR-MonTrack-Tiny
Object Tracking | FE108 | Success Rate | 66.3 | HR-MonTrack-Tiny

Related Papers

- MVA 2025 Small Multi-Object Tracking for Spotting Birds Challenge: Dataset, Methods, and Results (2025-07-17)
- YOLOv8-SMOT: An Efficient and Robust Framework for Real-Time Small Object Tracking via Slice-Assisted Training and Adaptive Association (2025-07-16)
- HiM2SAM: Enhancing SAM2 with Hierarchical Motion Estimation and Memory Optimization towards Long-term Tracking (2025-07-10)
- Robustifying 3D Perception through Least-Squares Multi-Agent Graphs Object Tracking (2025-07-07)
- UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions (2025-07-01)
- Mamba-FETrack V2: Revisiting State Space Model for Frame-Event based Visual Object Tracking (2025-06-30)
- Visual and Memory Dual Adapter for Multi-Modal Object Tracking (2025-06-30)
- R1-Track: Direct Application of MLLMs to Visual Object Tracking via Reinforcement Learning (2025-06-27)