
Generative-based Fusion Mechanism for Multi-Modal Tracking

Zhangyong Tang, Tianyang Xu, XueFeng Zhu, Xiao-Jun Wu, Josef Kittler

2023-09-04 · RGB-T Tracking

Paper · PDF · Code (official)

Abstract

Generative models (GMs) have received increasing research interest for their remarkable capacity to achieve comprehensive understanding. However, their potential application to multi-modal tracking has remained relatively unexplored. In this context, we seek to uncover the potential of generative techniques for addressing a critical challenge in multi-modal tracking: information fusion. In this paper, we delve into two prominent GM techniques, namely Conditional Generative Adversarial Networks (CGANs) and Diffusion Models (DMs). Unlike the standard fusion process, in which the features from each modality are fed directly into the fusion block, we condition these multi-modal features on random noise within the GM framework, effectively transforming the original training samples into harder instances. This design excels at extracting discriminative clues from the features, enhancing the ultimate tracking performance. To quantitatively gauge the effectiveness of our approach, we conduct extensive experiments across two multi-modal tracking tasks, three baseline methods, and three challenging benchmarks. The experimental results demonstrate that the proposed generative-based fusion mechanism achieves state-of-the-art performance, setting new records on LasHeR and RGBD1K.
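The abstract's core idea can be paraphrased as: perturb each modality's features with random noise before fusion, so the fusion block must learn from harder samples. Below is a minimal PyTorch sketch of that idea only; the class name NoiseConditionedFusion, the convolutional fusion head, and the noise_std value are illustrative assumptions, not the paper's GMMT implementation (which builds the fusion on CGAN and diffusion generators).

```python
# Minimal sketch of noise-conditioned multi-modal fusion (hypothetical;
# module and parameter names are illustrative, not from the official code).
import torch
import torch.nn as nn

class NoiseConditionedFusion(nn.Module):
    """Fuse RGB and thermal features after perturbing them with Gaussian
    noise, so the fusion head must recover discriminative cues from
    harder (noised) training samples, as described in the abstract."""

    def __init__(self, dim: int):
        super().__init__()
        # Simple convolutional fusion head; the paper instead realizes
        # fusion with CGAN / diffusion generators.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * dim, dim, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, feat_rgb, feat_tir, noise_std: float = 0.1):
        # Condition each modality's features on random noise at training
        # time, turning the original samples into harder instances.
        if self.training:
            feat_rgb = feat_rgb + noise_std * torch.randn_like(feat_rgb)
            feat_tir = feat_tir + noise_std * torch.randn_like(feat_tir)
        return self.fuse(torch.cat([feat_rgb, feat_tir], dim=1))

# Usage: fuse 256-channel feature maps from the two modalities.
fusion = NoiseConditionedFusion(dim=256).train()
rgb = torch.randn(1, 256, 16, 16)   # RGB backbone features
tir = torch.randn(1, 256, 16, 16)   # thermal backbone features
fused = fusion(rgb, tir)            # -> (1, 256, 16, 16)
```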

Results

Task            | Dataset | Metric    | Value (%) | Model
Visual Tracking | LasHeR  | Precision | 70.7      | GMMT
Visual Tracking | LasHeR  | Success   | 56.6      | GMMT
Visual Tracking | RGBT234 | Precision | 87.9      | GMMT
Visual Tracking | RGBT234 | Success   | 64.7      | GMMT
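For context, Precision and Success here follow the usual single-object-tracking protocol (a summary of common benchmark practice, not stated on this page): Precision is the fraction of frames whose predicted box center falls within a fixed pixel threshold (conventionally 20 px) of the ground truth, and Success is the area under the curve of the overlap success plot.

```latex
% Conventional tracking metrics over N frames (hedged summary, not from
% this page): c_t / \hat{c}_t are ground-truth and predicted box centers,
% b_t / \hat{b}_t the corresponding boxes.
\mathrm{Precision} = \frac{1}{N} \sum_{t=1}^{N}
  \mathbf{1}\!\left[ \lVert \hat{c}_t - c_t \rVert_2 \le 20\,\text{px} \right]
\qquad
\mathrm{Success} = \int_{0}^{1} \frac{1}{N} \sum_{t=1}^{N}
  \mathbf{1}\!\left[ \mathrm{IoU}(\hat{b}_t, b_t) \ge u \right] \, du
```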

Related Papers

Lightweight RGB-T Tracking with Mobile Vision Transformers (2025-06-23)
Modality-Guided Dynamic Graph Fusion and Temporal Diffusion for Self-Supervised RGB-T Tracking (2025-05-06)
Breaking Shallow Limits: Task-Driven Pixel Fusion for Gap-free RGBT Tracking (2025-03-14)
Adaptive Perception for Unified Visual Multi-modal Object Tracking (2025-02-10)
BTMTrack: Robust RGB-T Tracking via Dual-template Bridging and Temporal-Modal Candidate Elimination (2025-01-07)
PURA: Parameter Update-Recovery Test-Time Adaption for RGB-T Tracking (2025-01-01)
SUTrack: Towards Simple and Unified Single Object Tracking (2024-12-26)
Exploiting Multimodal Spatial-temporal Patterns for Video Object Tracking (2024-12-20)