Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LoFTR: Detector-Free Local Feature Matching with Transformers

Jiaming Sun, Zehong Shen, Yuang Wang, Hujun Bao, Xiaowei Zhou

Published: 2021-04-01 (CVPR 2021)
Tasks: Visual Localization, Image Matching, Pose Estimation, Camera Pose Estimation

Abstract

We present a novel method for local image feature matching. Instead of performing image feature detection, description, and matching sequentially, we propose to first establish pixel-wise dense matches at a coarse level and later refine the good matches at a fine level. In contrast to dense methods that use a cost volume to search correspondences, we use self and cross attention layers in Transformer to obtain feature descriptors that are conditioned on both images. The global receptive field provided by Transformer enables our method to produce dense matches in low-texture areas, where feature detectors usually struggle to produce repeatable interest points. The experiments on indoor and outdoor datasets show that LoFTR outperforms state-of-the-art methods by a large margin. LoFTR also ranks first on two public benchmarks of visual localization among the published methods.
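The key idea in the abstract is that descriptors are conditioned on both images via interleaved self- and cross-attention, rather than computed independently per image. The sketch below is an illustrative, minimal numpy version of that conditioning step (single head, no learned projections, standard softmax attention rather than the linear attention the paper uses); `condition_features` is a hypothetical helper, not part of the LoFTR codebase.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention: each query row aggregates the value
    # rows, weighted by its similarity to the corresponding key rows.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # rows sum to 1
    return w @ v

def condition_features(feat_a, feat_b):
    # One round of self- then cross-attention with residual connections,
    # so the descriptors of image A become conditioned on image B and
    # vice versa (B's cross step sees A's already-updated features).
    feat_a = feat_a + attention(feat_a, feat_a, feat_a)  # self-attention
    feat_b = feat_b + attention(feat_b, feat_b, feat_b)
    feat_a = feat_a + attention(feat_a, feat_b, feat_b)  # cross-attention
    feat_b = feat_b + attention(feat_b, feat_a, feat_a)
    return feat_a, feat_b
```

Because the cross-attention scores span all positions of the other image, every descriptor has a global receptive field over both views, which is what lets the method match in low-texture regions where local detectors fail.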

Results

Task | Dataset | Metric | Value | Model
---- | ------- | ------ | ----- | -----
Visual Localization | Aachen Day-Night v1.1 Benchmark | Acc@0.25m, 2° | 78.5 | LoFTR
Visual Localization | Aachen Day-Night v1.1 Benchmark | Acc@0.5m, 5° | 90.6 | LoFTR
Visual Localization | Aachen Day-Night v1.1 Benchmark | Acc@5m, 10° | 99 | LoFTR
Pose Estimation | InLoc DUC1 | Acc@0.25m, 10° | 47.5 | LoFTR
Pose Estimation | InLoc DUC1 | Acc@0.5m, 10° | 72.2 | LoFTR
Pose Estimation | InLoc DUC1 | Acc@1.0m, 10° | 84.8 | LoFTR
Pose Estimation | InLoc DUC2 | Acc@0.25m, 10° | 54.2 | LoFTR
Pose Estimation | InLoc DUC2 | Acc@0.5m, 10° | 74.8 | LoFTR
Pose Estimation | InLoc DUC2 | Acc@1.0m, 10° | 82.5 | LoFTR
Image Matching | ZEB | Mean AUC@5° | 33.1 | LoFTR
3D | InLoc DUC1 | Acc@0.25m, 10° | 47.5 | LoFTR
3D | InLoc DUC1 | Acc@0.5m, 10° | 72.2 | LoFTR
3D | InLoc DUC1 | Acc@1.0m, 10° | 84.8 | LoFTR
3D | InLoc DUC2 | Acc@0.25m, 10° | 54.2 | LoFTR
3D | InLoc DUC2 | Acc@0.5m, 10° | 74.8 | LoFTR
3D | InLoc DUC2 | Acc@1.0m, 10° | 82.5 | LoFTR
1 Image, 2*2 Stitchi | InLoc DUC1 | Acc@0.25m, 10° | 47.5 | LoFTR
1 Image, 2*2 Stitchi | InLoc DUC1 | Acc@0.5m, 10° | 72.2 | LoFTR
1 Image, 2*2 Stitchi | InLoc DUC1 | Acc@1.0m, 10° | 84.8 | LoFTR
1 Image, 2*2 Stitchi | InLoc DUC2 | Acc@0.25m, 10° | 54.2 | LoFTR
1 Image, 2*2 Stitchi | InLoc DUC2 | Acc@0.5m, 10° | 74.8 | LoFTR
1 Image, 2*2 Stitchi | InLoc DUC2 | Acc@1.0m, 10° | 82.5 | LoFTR
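The Acc@Xm, Y° figures in the table report the percentage of query images whose estimated camera pose falls within both a translation threshold (metres) and a rotation threshold (degrees) of the ground truth. A minimal sketch of that metric, assuming per-query pose errors have already been computed (`localization_accuracy` is a hypothetical helper, not from any benchmark's official tooling):

```python
import numpy as np

def localization_accuracy(t_err_m, r_err_deg, t_thresh_m, r_thresh_deg):
    """Percentage of queries localized within BOTH thresholds.

    t_err_m   : per-query translation errors, in metres
    r_err_deg : per-query rotation errors, in degrees
    """
    t = np.asarray(t_err_m, dtype=float)
    r = np.asarray(r_err_deg, dtype=float)
    ok = (t <= t_thresh_m) & (r <= r_thresh_deg)  # both must hold
    return 100.0 * ok.mean()

# Toy example: only the first query satisfies the (0.25 m, 2°) threshold.
acc = localization_accuracy([0.10, 0.30, 2.0], [1.0, 4.0, 12.0], 0.25, 2.0)
```

Benchmarks typically report the metric at several threshold pairs at once (as in the rows above), so looser thresholds always yield accuracy at least as high as tighter ones.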

Related Papers

$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Revisiting Reliability in the Reasoning-based Pose Estimation Benchmark (2025-07-17)
DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model (2025-07-17)
From Neck to Head: Bio-Impedance Sensing for Head Pose Estimation (2025-07-17)
AthleticsPose: Authentic Sports Motion Dataset on Athletic Field and Evaluation of Monocular 3D Pose Estimation Ability (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
SGLoc: Semantic Localization System for Camera Pose Estimation from 3D Gaussian Splatting Representation (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)