Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MambaPlace: Text-to-Point-Cloud Cross-Modal Place Recognition with Attention Mamba Mechanisms

Tianyi Shang, Zhenyu Li, Pengjie Xu, Jinwei Qiao

2024-08-28 · Cross-modal place recognition · Visual Place Recognition
Paper · PDF · Code (official)

Abstract

Vision Language Place Recognition (VLVPR) enhances robot localization performance by incorporating natural language descriptions of images. By utilizing language information, VLVPR directs robot place matching, overcoming the constraint of solely depending on vision. The essence of multimodal fusion lies in mining the complementary information between different modalities. However, general fusion methods rely on traditional neural architectures and are not well equipped to capture the dynamics of cross-modal interactions, especially in the presence of complex intra-modal and inter-modal correlations. To this end, this paper proposes a novel coarse-to-fine and end-to-end connected cross-modal place recognition framework, called MambaPlace. In the coarse localization stage, the text description and 3D point cloud are encoded by the pretrained T5 and instance encoder, respectively. They are then processed using Text Attention Mamba (TAM) and Point Clouds Mamba (PCM) for data enhancement and alignment. In the subsequent fine localization stage, the features of the text description and 3D point cloud are cross-modally fused and further enhanced through cascaded Cross Attention Mamba (CCAM). Finally, we predict the positional offset from the fused text-point-cloud features, achieving the most accurate localization. Extensive experiments show that MambaPlace achieves improved localization accuracy on the KITTI360Pose dataset compared to state-of-the-art methods.
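The coarse-to-fine pipeline described above can be sketched in miniature: a coarse stage retrieves candidate submaps by embedding similarity, and a fine stage regresses a positional offset from fused features. This is a minimal illustrative sketch only; all function names, shapes, and the fusion/regression logic here are placeholders, not the authors' implementation (which uses a pretrained T5 encoder and the TAM, PCM, and CCAM Mamba blocks):

```python
# Hedged sketch of a coarse-to-fine text-to-point-cloud localization pipeline.
# Every component below is a stand-in; MambaPlace's actual encoders and fusion
# modules (T5, TAM, PCM, CCAM) are far more sophisticated.
import numpy as np

rng = np.random.default_rng(0)

def encode_text(description: str, dim: int = 64) -> np.ndarray:
    """Placeholder for the T5 + Text Attention Mamba (TAM) branch."""
    seed = abs(hash(description)) % (2**32)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def encode_submaps(n_submaps: int = 100, dim: int = 64) -> np.ndarray:
    """Placeholder for the instance encoder + Point Clouds Mamba (PCM) branch."""
    m = rng.standard_normal((n_submaps, dim))
    return m / np.linalg.norm(m, axis=1, keepdims=True)

def coarse_localize(text_feat: np.ndarray, submap_feats: np.ndarray, k: int = 5) -> np.ndarray:
    """Coarse stage: retrieve the top-k candidate submaps by cosine similarity."""
    sims = submap_feats @ text_feat
    return np.argsort(-sims)[:k]

def fine_localize(text_feat: np.ndarray, submap_feat: np.ndarray) -> np.ndarray:
    """Fine stage placeholder: fuse the two modalities and regress a 2D
    positional offset (the paper uses cascaded Cross Attention Mamba here)."""
    fused = text_feat * submap_feat           # toy elementwise fusion
    w = rng.standard_normal((fused.size, 2))  # toy regression head
    return fused @ w

text_feat = encode_text("near the red brick building by the crossing")
submaps = encode_submaps()
candidates = coarse_localize(text_feat, submaps, k=5)
offset = fine_localize(text_feat, submaps[candidates[0]])
print(candidates.shape, offset.shape)
```

The two-stage split mirrors the abstract: retrieval narrows the search space cheaply before the more expensive offset regression runs on a single candidate.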

Results

Task | Dataset | Metric | Value | Model
Visual Place Recognition | KITTI360Pose | Localization Recall@1 | 0.45 | MambaPlace

Related Papers

Visual Place Recognition for Large-Scale UAV Applications (2025-07-20)
Query-Based Adaptive Aggregation for Multi-Dataset Joint Training Toward Universal Visual Place Recognition (2025-07-04)
Adversarial Attacks and Detection in Visual Place Recognition for Safer Robot Navigation (2025-06-19)
Astra: Toward General-Purpose Mobile Robots via Hierarchical Multimodal Learning (2025-06-06)
HypeVPR: Exploring Hyperbolic Space for Perspective to Equirectangular Visual Place Recognition (2025-06-05)
TAT-VPR: Ternary Adaptive Transformer for Dynamic and Efficient Visual Place Recognition (2025-05-22)
Place Recognition: A Comprehensive Review, Current Challenges and Future Directions (2025-05-20)
MMS-VPR: Multimodal Street-Level Visual Place Recognition Dataset and Benchmark (2025-05-18)