Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Geometry-Aware Learning of Maps for Camera Localization

Samarth Brahmbhatt, Jinwei Gu, Kihwan Kim, James Hays, Jan Kautz

2017-12-09 · CVPR 2018 · Visual Localization · Camera Localization
Paper · PDF · Code (official)

Abstract

Maps are a key component in image-based camera localization and visual SLAM systems: they are used to establish geometric constraints between images, correct drift in relative pose estimation, and relocalize cameras after lost tracking. The exact definitions of maps, however, are often application-specific and hand-crafted for different scenarios (e.g. 3D landmarks, lines, planes, bags of visual words). We propose to represent maps as a deep neural net called MapNet, which enables learning a data-driven map representation. Unlike prior work on learning maps, MapNet exploits cheap and ubiquitous sensory inputs like visual odometry and GPS in addition to images and fuses them together for camera localization. Geometric constraints expressed by these inputs, which have traditionally been used in bundle adjustment or pose-graph optimization, are formulated as loss terms in MapNet training and also used during inference. In addition to directly improving localization accuracy, this allows us to update the MapNet (i.e., maps) in a self-supervised manner using additional unlabeled video sequences from the scene. We also propose a novel parameterization for camera rotation which is better suited for deep-learning based camera pose regression. Experimental results on both the indoor 7-Scenes dataset and the outdoor Oxford RobotCar dataset show significant performance improvement over prior work. The MapNet project webpage is https://goo.gl/mRB3Au.
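The abstract's "novel parameterization for camera rotation" refers to regressing the logarithm of a unit quaternion, a 3-D unconstrained vector, rather than the 4-D quaternion itself. A minimal sketch of the log/exp maps between the two representations, assuming unit quaternions in (w, x, y, z) order (function names `qlog`/`qexp` are illustrative, not the authors' code):

```python
import numpy as np

def qlog(q):
    """Map a unit quaternion (w, x, y, z) to its 3-D logarithm."""
    v = q[1:]
    n = np.linalg.norm(v)
    if n < 1e-8:                      # identity rotation -> zero vector
        return np.zeros(3)
    return (v / n) * np.arccos(np.clip(q[0], -1.0, 1.0))

def qexp(w):
    """Inverse map: 3-D log-quaternion back to a unit quaternion."""
    n = np.linalg.norm(w)
    if n < 1e-8:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(n)], (w / n) * np.sin(n)])

# Example: 90-degree rotation about the z-axis
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
w = qlog(q)                           # 3-D vector, no unit-norm constraint
assert np.allclose(qexp(w), q)        # round-trip recovers the quaternion
```

Because the log map removes the unit-norm constraint, a network can regress the 3-D vector directly with an unconstrained loss, which is the practical motivation for this parameterization in deep pose regression.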

Results

Task | Dataset | Metric | Value | Model
Visual Localization | Oxford Radar RobotCar (Full-6) | Mean Translation Error | 48.21 | MapNet
Visual Localization | Oxford RobotCar Full | Mean Translation Error | 29.5 | MapNet++
Camera Localization | Oxford RobotCar Full | Mean Translation Error | 29.5 | MapNet++

Related Papers

Kaleidoscopic Background Attack: Disrupting Pose Estimation with Multi-Fold Radial Symmetry Textures (2025-07-14)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)
MatChA: Cross-Algorithm Matching with Feature Augmentation (2025-06-27)
OracleFusion: Assisting the Decipherment of Oracle Bone Script with Structurally Constrained Semantic Typography (2025-06-26)
Semantic and Feature Guided Uncertainty Quantification of Visual Localization for Autonomous Vehicles (2025-06-18)
Hierarchical Image Matching for UAV Absolute Visual Localization via Semantic and Structural Constraints (2025-06-11)
Robust Visual Localization via Semantic-Guided Multi-Scale Transformer (2025-06-10)
Deep Learning Reforms Image Matching: A Survey and Outlook (2025-06-05)