Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

EffoVPR: Effective Foundation Model Utilization for Visual Place Recognition

Issar Tzachor, Boaz Lerner, Matan Levy, Michael Green, Tal Berkovitz Shalev, Gavriel Habib, Dvir Samuel, Noam Korngut Zailer, Or Shimshi, Nir Darshan, Rami Ben-Ari

Published: 2024-05-28 · Tasks: Visual Place Recognition, Re-Ranking

Abstract

The task of Visual Place Recognition (VPR) is to predict the location of a query image from a database of geo-tagged images. Recent studies have highlighted the significant advantage of employing pre-trained foundation models like DINOv2 for VPR. However, these models are often deemed inadequate for VPR without further fine-tuning on VPR-specific data. In this paper, we present an effective approach to harness the potential of a foundation model for VPR. We show that features extracted from self-attention layers can act as a powerful re-ranker for VPR, even in a zero-shot setting. Our method not only outperforms previous zero-shot approaches but also achieves results competitive with several supervised methods. We then show that a single-stage approach utilizing internal ViT layers for pooling can produce global features that achieve state-of-the-art performance, with impressive feature compactness down to 128D. Moreover, integrating our local foundation features for re-ranking further widens this performance gap. Our method also demonstrates exceptional robustness and generalization, setting new state-of-the-art performance while handling challenging conditions such as occlusion, day-night transitions, and seasonal variations.
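For intuition, the two-stage pipeline the abstract describes (compact global descriptors for retrieval, local foundation features for re-ranking) can be sketched as below. This is a minimal illustration built on the public DINOv2 hub checkpoints, not the authors' EffoVPR implementation: the CLS token stands in for their internal-layer pooling, and patch tokens stand in for their self-attention-layer features.

```python
# Minimal two-stage VPR sketch in the spirit of the abstract: stage 1 ranks
# the database with a compact global descriptor; stage 2 re-ranks a shortlist
# with local features. NOT the authors' EffoVPR code -- the layer/pooling
# choices here are simplified stand-ins.
import torch
import torch.nn.functional as F

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14").eval()

@torch.no_grad()
def global_desc(img):
    # img: (3, H, W), ImageNet-normalized, H and W multiples of 14.
    out = model.forward_features(img.unsqueeze(0))
    # CLS token as a simple global descriptor (the paper pools internal
    # ViT layers instead).
    return F.normalize(out["x_norm_clstoken"], dim=-1).squeeze(0)

@torch.no_grad()
def local_desc(img):
    out = model.forward_features(img.unsqueeze(0))
    # Patch tokens as local features; EffoVPR uses self-attention-layer
    # features, which the hub API does not expose directly.
    return F.normalize(out["x_norm_patchtokens"], dim=-1).squeeze(0)  # (N, D)

def rerank_score(q_loc, d_loc):
    # Sum of mutual-nearest-neighbour patch similarities: one simple
    # zero-shot local-matching criterion.
    sim = q_loc @ d_loc.T                  # (Nq, Nd) patch similarities
    nn_q2d = sim.argmax(dim=1)             # best db patch per query patch
    nn_d2q = sim.argmax(dim=0)             # best query patch per db patch
    idx = torch.arange(sim.shape[0])
    mutual = nn_d2q[nn_q2d] == idx         # keep only mutual matches
    return sim[idx[mutual], nn_q2d[mutual]].sum().item()

def retrieve(query_img, db_imgs, k=10):
    q = global_desc(query_img)
    db = torch.stack([global_desc(im) for im in db_imgs])
    shortlist = torch.topk(db @ q, k=min(k, len(db_imgs))).indices.tolist()
    q_loc = local_desc(query_img)
    scored = [(rerank_score(q_loc, local_desc(db_imgs[i])), i) for i in shortlist]
    return [i for _, i in sorted(scored, reverse=True)]  # re-ranked db indices
```

In practice the database descriptors would be precomputed and indexed; recomputing them per query, as above, is only for brevity.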

Results

Task | Dataset | Metric | Value | Model
Visual Place Recognition | AmsterTime | Recall@1 | 65.5 | EffoVPR
Visual Place Recognition | Nordland | Recall@1 | 95 | EffoVPR
Visual Place Recognition | Nordland | Recall@5 | 98.6 | EffoVPR
Visual Place Recognition | San Francisco Landmark Dataset | Recall@1 | 93 | EffoVPR
Visual Place Recognition | SF-XL test v1 | Recall@1 | 95.5 | EffoVPR
Visual Place Recognition | SF-XL test v1 | Recall@10 | 98.1 | EffoVPR
Visual Place Recognition | St Lucia | Recall@1 | 100 | EffoVPR
Visual Place Recognition | St Lucia | Recall@5 | 100 | EffoVPR
Visual Place Recognition | Pittsburgh-30k-test | Recall@1 | 93.9 | EffoVPR
Visual Place Recognition | Pittsburgh-30k-test | Recall@5 | 97.4 | EffoVPR
Visual Place Recognition | Tokyo247 | Recall@1 | 98.7 | EffoVPR
Visual Place Recognition | Tokyo247 | Recall@5 | 98.7 | EffoVPR
Visual Place Recognition | Tokyo247 | Recall@10 | 98.7 | EffoVPR
Visual Place Recognition | SF-XL test v2 | Recall@1 | 94.5 | EffoVPR
Visual Place Recognition | SF-XL test v2 | Recall@5 | 98.2 | EffoVPR
Visual Place Recognition | SF-XL test v2 | Recall@10 | 97.8 | EffoVPR
Visual Place Recognition | Mapillary val | Recall@1 | 92.8 | EffoVPR
Visual Place Recognition | Mapillary val | Recall@5 | 97.2 | EffoVPR
Visual Place Recognition | Mapillary val | Recall@10 | 97.4 | EffoVPR
Visual Place Recognition | Mapillary test | Recall@1 | 79 | EffoVPR
Visual Place Recognition | Mapillary test | Recall@5 | 89 | EffoVPR
Visual Place Recognition | Mapillary test | Recall@10 | 91.6 | EffoVPR
Visual Place Recognition | Eynsham | Recall@1 | 91 | EffoVPR
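For reference, Recall@K on these benchmarks is the percentage of queries for which at least one of the top-K retrieved database images is a true positive (commonly, an image geo-tagged within 25 m of the query). A minimal sketch of the metric, with toy inputs invented for illustration:

```python
import numpy as np

def recall_at_k(ranked_db_ids, positives_per_query, k):
    """Percentage of queries with at least one ground-truth positive
    among the top-k retrieved database indices.

    ranked_db_ids: (num_queries, num_db) db indices sorted by similarity.
    positives_per_query: list of sets of positive db indices per query.
    """
    hits = sum(
        1 for ranks, pos in zip(ranked_db_ids, positives_per_query)
        if pos.intersection(ranks[:k].tolist())
    )
    return 100.0 * hits / len(ranked_db_ids)

# Toy example: two queries ranked against a five-image database.
ranked = np.array([[3, 1, 4, 0, 2],
                   [2, 0, 1, 4, 3]])
positives = [{4}, {0, 1}]
print(recall_at_k(ranked, positives, k=1))  # 0.0   (no top-1 hit)
print(recall_at_k(ranked, positives, k=3))  # 100.0 (both queries hit in top-3)
```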

Related Papers

Visual Place Recognition for Large-Scale UAV Applications (2025-07-20)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
MCoT-RE: Multi-Faceted Chain-of-Thought and Re-Ranking for Training-Free Zero-Shot Composed Image Retrieval (2025-07-17)
CATVis: Context-Aware Thought Visualization (2025-07-15)
Query-Based Adaptive Aggregation for Multi-Dataset Joint Training Toward Universal Visual Place Recognition (2025-07-04)
SAMURAI: Shape-Aware Multimodal Retrieval for 3D Object Identification (2025-06-26)
RAG-VisualRec: An Open Resource for Vision- and Text-Enhanced Retrieval-Augmented Generation in Recommendation (2025-06-25)
IRanker: Towards Ranking Foundation Model (2025-06-25)