Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Depth estimation from 4D light field videos

Takahiro Kinoshita, Satoshi Ono

Published: 2020-12-05 · Tasks: Disparity Estimation, Depth Estimation
Links: Paper · PDF · Code (official)

Abstract

Depth (disparity) estimation from 4D Light Field (LF) images has been a research topic for the last couple of years. Most studies have focused on depth estimation from static 4D LF images without considering temporal information, i.e., LF videos. This paper proposes an end-to-end neural network architecture for depth estimation from 4D LF videos. This study also constructs a medium-scale synthetic 4D LF video dataset that can be used for training deep learning-based methods. Experimental results using synthetic and real-world 4D LF videos show that temporal information contributes to improved depth estimation accuracy in noisy regions. Dataset and code are available at: https://mediaeng-lfv.github.io/LFV_Disparity_Estimation
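As background (not taken from the paper's code), a 4D light field parameterizes each ray by two angular and two spatial coordinates, so an LF video is naturally a 5D tensor. A minimal sketch with hypothetical toy dimensions (the paper's actual resolutions and angular grid may differ):

```python
import numpy as np

# Hypothetical toy dimensions: T frames, a U x V grid of sub-aperture
# views, each H x W pixels. These are illustrative, not the paper's values.
T, U, V, H, W = 5, 9, 9, 64, 64
lf_video = np.zeros((T, U, V, H, W), dtype=np.float32)

# Central sub-aperture view of frame 0 -- the view for which
# disparity is typically estimated.
center_view = lf_video[0, U // 2, V // 2]

# Horizontal epipolar-plane image (EPI) for frame 0: fix the vertical
# angular index and one image row, vary the horizontal angular index.
# Scene depth appears as line slope in such EPIs.
epi = lf_video[0, U // 2, :, H // 2, :]
```

Sub-aperture views and EPIs are the two standard input representations for LF disparity networks; a temporal model additionally stacks them along the frame axis.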

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Disparity Estimation | Sintel 4D LFV - ambushfight5 | BadPix(0.01) | 62.0493 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - ambushfight5 | BadPix(0.03) | 22.8762 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - ambushfight5 | BadPix(0.07) | 8.3404 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - ambushfight5 | MSE*100 | 21.67 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - thebigfight2 | BadPix(0.01) | 17.7493 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - thebigfight2 | BadPix(0.03) | 3.6084 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - thebigfight2 | BadPix(0.05) | 1.0688 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - thebigfight2 | MSE*100 | 3.67 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - shaman2 | BadPix(0.01) | 74.7733 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - shaman2 | BadPix(0.03) | 50.6706 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - shaman2 | BadPix(0.07) | 32.7585 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - shaman2 | MSE*100 | 2.4421 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - bamboo3 | BadPix(0.01) | 53.2985 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - bamboo3 | BadPix(0.03) | 21.8162 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - bamboo3 | BadPix(0.07) | 8.9475 | Two-stream CNN+CLSTM |
| Disparity Estimation | Sintel 4D LFV - bamboo3 | MSE*100 | 21.59 | Two-stream CNN+CLSTM |
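The metrics in the table follow the conventions of light-field disparity benchmarks: BadPix(τ) is the percentage of pixels whose absolute disparity error exceeds the threshold τ, and MSE*100 is the mean squared disparity error scaled by 100. A minimal sketch of both (standard definitions, not the paper's evaluation script):

```python
import numpy as np

def badpix(disp_est, disp_gt, tau):
    """Percentage of pixels with absolute disparity error greater than tau."""
    err = np.abs(disp_est - disp_gt)
    return 100.0 * np.mean(err > tau)

def mse_x100(disp_est, disp_gt):
    """Mean squared disparity error, scaled by 100."""
    return 100.0 * np.mean((disp_est - disp_gt) ** 2)
```

Lower is better for both metrics; a BadPix(0.07) of 8.34 means that 8.34% of pixels are off by more than 0.07 disparity units.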

Related Papers

$S^2M^2$: Scalable Stereo Matching Model for Reliable Depth Estimation (2025-07-17)
$π^3$: Scalable Permutation-Equivariant Visual Geometry Learning (2025-07-17)
Efficient Calisthenics Skills Classification through Foreground Instance Selection and Depth Estimation (2025-07-16)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
MonoMVSNet: Monocular Priors Guided Multi-View Stereo Network (2025-07-15)
Towards Depth Foundation Model: Recent Trends in Vision-Based Depth Estimation (2025-07-15)
Cameras as Relative Positional Encoding (2025-07-14)
ByDeWay: Boost Your multimodal LLM with DEpth prompting in a Training-Free Way (2025-07-11)