Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MVSFormer++: Revealing the Devil in Transformer's Details for Multi-View Stereo

Chenjie Cao, Xinlin Ren, Yanwei Fu

2024-01-22 · Point Clouds · 3D Reconstruction · Depth Estimation
Paper · PDF · Code (official)

Abstract

Recent advancements in learning-based Multi-View Stereo (MVS) methods have prominently featured transformer-based models with attention mechanisms. However, existing approaches have not thoroughly investigated the profound influence of transformers on different MVS modules, resulting in limited depth estimation capabilities. In this paper, we introduce MVSFormer++, a method that prudently maximizes the inherent characteristics of attention to enhance various components of the MVS pipeline. Formally, our approach involves infusing cross-view information into the pre-trained DINOv2 model to facilitate MVS learning. Furthermore, we employ different attention mechanisms for the feature encoder and cost volume regularization, focusing on feature and spatial aggregations respectively. Additionally, we uncover that some design details would substantially impact the performance of transformer modules in MVS, including normalized 3D positional encoding, adaptive attention scaling, and the position of layer normalization. Comprehensive experiments on DTU, Tanks-and-Temples, BlendedMVS, and ETH3D validate the effectiveness of the proposed method. Notably, MVSFormer++ achieves state-of-the-art performance on the challenging DTU and Tanks-and-Temples benchmarks.
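The abstract attributes part of the gain to "adaptive attention scaling", which adjusts the softmax temperature of attention when the token count differs from training. As a minimal NumPy sketch (not the authors' implementation; the log-length rescaling rule and all names here are assumptions), this could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, n_train=None):
    """Scaled dot-product attention.

    If n_train is given, the logits are additionally rescaled by
    log(n) / log(n_train) -- one hypothetical reading of "adaptive
    attention scaling", keeping attention entropy roughly stable
    when the sequence length n deviates from the training length.
    """
    d = q.shape[-1]
    scale = 1.0 / np.sqrt(d)
    if n_train is not None:
        n = k.shape[0]
        scale *= np.log(n) / np.log(n_train)  # assumed adaptive term
    logits = (q @ k.T) * scale
    return softmax(logits, axis=-1) @ v
```

When the test-time length equals the training length the adaptive factor is 1 and the result matches plain scaled dot-product attention; longer sequences get sharper logits to counteract the entropy growth of the softmax.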

Results

Task              | Dataset           | Metric                 | Value  | Model
3D Reconstruction | DTU               | Acc                    | 0.309  | MVSFormer++
3D Reconstruction | DTU               | Comp                   | 0.2521 | MVSFormer++
3D Reconstruction | DTU               | Overall                | 0.2805 | MVSFormer++
Point Clouds      | Tanks and Temples | Mean F1 (Advanced)     | 41.7   | MVSFormer++
Point Clouds      | Tanks and Temples | Mean F1 (Intermediate) | 67.03  | MVSFormer++

Related Papers

AutoPartGen: Autogressive 3D Part Generation and Discovery (2025-07-17)
$S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
SpatialTrackerV2: 3D Point Tracking Made Easy (2025-07-16)
BRUM: Robust 3D Vehicle Reconstruction from 360 Sparse Images (2025-07-16)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)