Pair-VPR: Place-Aware Pre-training and Contrastive Pair Classification for Visual Place Recognition with Vision Transformers

Stephen Hausler, Peyman Moghadam

2024-10-09 · Tasks: Visual Place Recognition, Re-Ranking
Links: Paper · PDF · Code (official)

Abstract

In this work, we propose a novel joint training method for Visual Place Recognition (VPR) that simultaneously learns a global descriptor and a pair classifier for re-ranking. The pair classifier predicts whether a given pair of images comes from the same place. The network comprises only Vision Transformer components for both the encoder and the pair classifier, and both components are trained using their respective class tokens. Existing VPR methods typically initialize the network with pre-trained weights from a generic image dataset such as ImageNet. We instead propose an alternative pre-training strategy that uses Siamese Masked Image Modelling as the pre-training task, together with a place-aware image sampling procedure over a collection of large VPR datasets, so that the model learns visual features tuned specifically for VPR. By re-using the Masked Image Modelling encoder and decoder weights in the second stage of training, Pair-VPR achieves state-of-the-art VPR performance across five benchmark datasets with a ViT-B encoder, along with further improvements in localization recall with larger encoders. The Pair-VPR website is: https://csiro-robotics.github.io/Pair-VPR.
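The abstract describes a two-stage pipeline at inference time: the ViT encoder's class token yields a global descriptor used for nearest-neighbour retrieval, and the pair classifier then re-scores the top candidates. The sketch below illustrates only that control flow; `GlobalEncoder`, `PairClassifier`, and `retrieve_and_rerank` are hypothetical names, and the linear layers are toy stand-ins for the actual Vision Transformer blocks. See the official repository for the real model and for the Siamese Masked Image Modelling pre-training stage, which is not shown here.

```python
# Minimal sketch of a two-stage retrieve-then-rerank VPR pipeline, assuming
# placeholder modules in place of the paper's ViT encoder and pair classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalEncoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Toy backbone standing in for the ViT encoder; in the paper the
        # global descriptor comes from the encoder's class token.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))

    def forward(self, images):                     # images: (B, 3, H, W)
        return F.normalize(self.backbone(images), dim=-1)  # unit-norm descriptors

class PairClassifier(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Toy head standing in for the transformer pair classifier, which in
        # the paper is trained through its own class token.
        self.head = nn.Linear(2 * dim, 1)

    def forward(self, a, b):                       # a, b: (K, D) feature pairs
        return self.head(torch.cat([a, b], dim=-1)).squeeze(-1)  # same-place logits

@torch.no_grad()
def retrieve_and_rerank(query, database, encoder, pair_cls, top_k=10):
    q = encoder(query)                             # (1, D) query descriptor
    db = encoder(database)                         # (N, D) database descriptors
    sims = (q @ db.T)[0]                           # cosine similarity (unit-norm vectors)
    shortlist = sims.topk(min(top_k, len(db))).indices          # stage 1: global retrieval
    logits = pair_cls(q.expand(len(shortlist), -1), db[shortlist])  # stage 2: pair scoring
    return shortlist[logits.argsort(descending=True)]           # best-first database indices

encoder, pair_cls = GlobalEncoder(), PairClassifier()
query = torch.randn(1, 3, 224, 224)
database = torch.randn(64, 3, 224, 224)
print(retrieve_and_rerank(query, database, encoder, pair_cls)[:5])
```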

Results

| Task | Dataset | Metric | Value (%) | Model |
|---|---|---|---|---|
| Visual Place Recognition | Pittsburgh-30k-test | Recall@1 | 95.4 | Pair-VPR-p |
| Visual Place Recognition | Pittsburgh-30k-test | Recall@5 | 97.5 | Pair-VPR-p |
| Visual Place Recognition | Pittsburgh-30k-test | Recall@10 | 98 | Pair-VPR-p |
| Visual Place Recognition | Pittsburgh-30k-test | Recall@1 | 94.7 | Pair-VPR-s |
| Visual Place Recognition | Pittsburgh-30k-test | Recall@5 | 97.2 | Pair-VPR-s |
| Visual Place Recognition | Pittsburgh-30k-test | Recall@10 | 97.8 | Pair-VPR-s |
| Visual Place Recognition | Tokyo247 | Recall@1 | 100 | Pair-VPR-p |
| Visual Place Recognition | Tokyo247 | Recall@5 | 100 | Pair-VPR-p |
| Visual Place Recognition | Tokyo247 | Recall@10 | 100 | Pair-VPR-p |
| Visual Place Recognition | Tokyo247 | Recall@1 | 98.1 | Pair-VPR-s |
| Visual Place Recognition | Tokyo247 | Recall@5 | 98.4 | Pair-VPR-s |
| Visual Place Recognition | Tokyo247 | Recall@10 | 98.7 | Pair-VPR-s |
| Visual Place Recognition | Mapillary val | Recall@1 | 95.4 | Pair-VPR-p |
| Visual Place Recognition | Mapillary val | Recall@5 | 97.3 | Pair-VPR-p |
| Visual Place Recognition | Mapillary val | Recall@10 | 97.7 | Pair-VPR-p |
| Visual Place Recognition | Mapillary val | Recall@1 | 93.7 | Pair-VPR-s |
| Visual Place Recognition | Mapillary val | Recall@5 | 97.2 | Pair-VPR-s |
| Visual Place Recognition | Mapillary val | Recall@10 | 97.3 | Pair-VPR-s |
| Visual Place Recognition | Mapillary test | Recall@1 | 81.7 | Pair-VPR-p |
| Visual Place Recognition | Mapillary test | Recall@5 | 90.2 | Pair-VPR-p |
| Visual Place Recognition | Mapillary test | Recall@10 | 91.3 | Pair-VPR-p |
| Visual Place Recognition | Mapillary test | Recall@1 | 79 | Pair-VPR-s |
| Visual Place Recognition | Mapillary test | Recall@5 | 86.9 | Pair-VPR-s |
| Visual Place Recognition | Mapillary test | Recall@10 | 88.3 | Pair-VPR-s |
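For reference, Recall@K (the metric in the table above, reported in percent) counts a query as successful if at least one ground-truth match appears among its top-K retrieved database images. A minimal sketch with illustrative names and toy data:

```python
# Minimal sketch of the Recall@K metric; rankings and ground_truth are toy data.
def recall_at_k(rankings, ground_truth, k):
    """rankings: per-query database indices sorted best-first;
    ground_truth: per-query set of correct database indices."""
    hits = sum(bool(set(r[:k]) & gt) for r, gt in zip(rankings, ground_truth))
    return 100.0 * hits / len(rankings)

# Toy example: two queries ranked over a five-image database.
rankings = [[2, 0, 4, 1, 3], [1, 3, 0, 2, 4]]
ground_truth = [{2}, {0}]
print(recall_at_k(rankings, ground_truth, 1))   # 50.0 (only query 0 hits at K=1)
print(recall_at_k(rankings, ground_truth, 5))   # 100.0
```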

Related Papers

- Visual Place Recognition for Large-Scale UAV Applications (2025-07-20)
- Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
- MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
- CATVis: Context-Aware Thought Visualization (2025-07-15)
- Query-Based Adaptive Aggregation for Multi-Dataset Joint Training Toward Universal Visual Place Recognition (2025-07-04)
- SAMURAI: Shape-Aware Multimodal Retrieval for 3D Object Identification (2025-06-26)
- RAG-VisualRec: An Open Resource for Vision- and Text-Enhanced Retrieval-Augmented Generation in Recommendation (2025-06-25)
- IRanker: Towards Ranking Foundation Model (2025-06-25)