Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


BEV-CV: Birds-Eye-View Transform for Cross-View Geo-Localisation

Tavis Shore, Simon Hadfield, Oscar Mendez

2023-12-23 · Visual Localization · geo-localization · Navigate · Image-Based Localization · Outdoor Localization · Camera Localization · Retrieval · Cross-View Geo-Localisation · Image Retrieval
Paper · PDF · Code (official)

Abstract

Cross-view image matching for geo-localisation is a challenging problem due to the significant visual difference between aerial and ground-level viewpoints. The method provides localisation capabilities from geo-referenced images, eliminating the need for external devices or costly equipment. This enhances the capacity of agents to autonomously determine their position, navigate, and operate effectively in GNSS-denied environments. Current research employs a variety of techniques to reduce the domain gap, such as applying polar transforms to aerial images or synthesising between perspectives. However, these approaches generally rely on having a 360° field of view, limiting real-world feasibility. We propose BEV-CV, an approach introducing two key novelties with a focus on improving the real-world viability of cross-view geo-localisation. First, we bring ground-level images into a semantic Birds-Eye-View before matching embeddings, allowing for direct comparison with aerial image representations. Second, we adapt datasets into an application-realistic format: limited field-of-view images aligned to vehicle direction. BEV-CV achieves state-of-the-art recall accuracies, improving Top-1 rates on 70° crops of CVUSA and CVACT by 23% and 24% respectively. It also decreases computational requirements, reducing floating-point operations below those of previous works and decreasing embedding dimensionality by 33%, together allowing for faster localisation.
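The retrieval step the abstract describes can be sketched as nearest-neighbour search over geo-referenced aerial embeddings: a query image is encoded (in BEV-CV, via a semantic Birds-Eye-View transform) and matched to the most similar aerial tile. The array names, embedding size, and `localise` helper below are illustrative assumptions, not BEV-CV's actual API.

```python
import numpy as np

# Hypothetical database of geo-referenced aerial tiles: one embedding row
# and one (lat, lon) pair per tile. Sizes here are arbitrary placeholders.
rng = np.random.default_rng(0)
aerial_embeddings = rng.normal(size=(1000, 256))
tile_coords = rng.uniform(-1.0, 1.0, size=(1000, 2))


def localise(query_embedding):
    """Return the coordinates of the aerial tile whose embedding has the
    highest cosine similarity with the query's (BEV) embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    db = aerial_embeddings / np.linalg.norm(aerial_embeddings, axis=1, keepdims=True)
    best = int(np.argmax(db @ q))  # index of most similar tile
    return tile_coords[best]
```

In a real system the query embedding would come from the ground-image encoder; here, querying with a stored aerial embedding simply retrieves that tile's own coordinates, since a vector's cosine similarity with itself is maximal.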

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Camera Localization | CVUSA 90 | Top-1 | 33.66 | DSM |
| Camera Localization | CVUSA 90 | Top-1 | 32.11 | BEV-CV |
| Camera Localization | CVUSA 90 | Top-1% | 92.99 | BEV-CV |
| Camera Localization | CVUSA 90 | Top-10 | 69.06 | BEV-CV |
| Camera Localization | CVUSA 90 | Top-5 | 58.36 | BEV-CV |
| Camera Localization | CVUSA 90 | Top-1 | 25.21 | L2LTR |
| Camera Localization | CVUSA 90 | Top-1 | 22.54 | GAL |
| Camera Localization | CVUSA 90 | Top-1 | 21.96 | TransGeo [Zhu2022TransGeoTI] |
| Camera Localization | CVUSA 90 | Top-1 | 15.21 | GeoDTR |
| Camera Localization | CVUSA 90 | Top-10 | 52.27 | GeoDTR |
| Camera Localization | CVUSA 90 | Top-5 | 39.32 | GeoDTR |
| Camera Localization | CVUSA 90 | Top-1 | 4.8 | CVFT |
| Camera Localization | CVUSA 90 | Top-1 | 2.76 | CVM |
| Camera Localization | CVUSA 90 | Top-1% | 88.72 | GeoDTR [zhang2023crossview] |
| Camera Localization | CVUSA 90 | Top-1% | 86.8 | TransGeo |
| Camera Localization | CVUSA 90 | Top-10 | 56.49 | TransGeo |
| Camera Localization | CVUSA 90 | Top-5 | 45.35 | TransGeo |
| Camera Localization | CVUSA 90 | R@5 | 51.9 | L2LTR [Yang2021CrossviewGW] |
| Camera Localization | CVUSA 90 | R@5 | 51.7 | DSM [Shi2020WhereAI] |
| Camera Localization | CVUSA 70 | Top-1 | 27.4 | BEV-CV |
| Camera Localization | CVUSA 70 | Top-1% | 90.94 | BEV-CV |
| Camera Localization | CVUSA 70 | Top-10 | 64.47 | BEV-CV |
| Camera Localization | CVUSA 70 | Top-5 | 52.94 | BEV-CV |
