Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MDFlow: Unsupervised Optical Flow Learning by Reliable Mutual Knowledge Distillation

Lingtong Kong, Jie Yang

2022-11-11 · Optical Flow Estimation · Data Augmentation · Knowledge Distillation · Blocking
Paper · PDF · Code (official)

Abstract

Recent works have shown that optical flow can be learned by deep networks from unlabelled image pairs based on the brightness constancy assumption and a smoothness prior. Current approaches additionally impose an augmentation regularization term for continual self-supervision, which has proved effective on difficult matching regions. However, this term also amplifies the mismatches that are inevitable in the unsupervised setting, blocking the learning process from reaching the optimal solution. To break this dilemma, we propose a novel mutual distillation framework that transfers reliable knowledge back and forth between a teacher and a student network for alternate improvement. Concretely, taking the estimates of an off-the-shelf unsupervised approach as pseudo labels, our key insight is to define a confidence selection mechanism that extracts relatively good matches, and then to apply diverse data augmentation so that adequate and reliable knowledge is distilled from teacher to student. Thanks to the decoupled nature of our method, we can choose a stronger student architecture for sufficient learning. Finally, the better student predictions are adopted to transfer knowledge back to the efficient teacher without additional cost at deployment. Rather than formulating this as a supervised task, we find that introducing an extra unsupervised term for multi-target learning achieves the best final results. Extensive experiments show that our approach, termed MDFlow, achieves state-of-the-art real-time accuracy and generalization ability on challenging benchmarks. Code is available at https://github.com/ltkong218/MDFlow.
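The abstract describes selecting reliable teacher matches before distilling them into the student. The paper does not spell out the selection rule here, but forward-backward consistency is a common choice for judging flow reliability, so the following minimal numpy sketch uses it; the mask threshold parameters `alpha` and `beta`, the function names, and the Charbonnier-style loss are all illustrative assumptions, not MDFlow's exact formulation.

```python
import numpy as np

def confidence_mask(flow_fw, flow_bw, alpha=0.01, beta=0.5):
    """Trust a pixel when its forward flow and the backward flow at the
    target location roughly cancel (forward-backward consistency).
    NOTE: an illustrative proxy for MDFlow's confidence selection,
    not the paper's exact mechanism; uses nearest-neighbour lookup."""
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # integer target coordinates after applying the forward flow
    tx = np.clip(np.round(xs + flow_fw[..., 0]).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + flow_fw[..., 1]).astype(int), 0, h - 1)
    flow_bw_warped = flow_bw[ty, tx]
    diff = np.linalg.norm(flow_fw + flow_bw_warped, axis=-1)
    mag = np.linalg.norm(flow_fw, axis=-1) + np.linalg.norm(flow_bw_warped, axis=-1)
    return diff < alpha * mag + beta

def distill_loss(student_flow, teacher_flow, mask, eps=1e-6):
    """Charbonnier-style distillation penalty, averaged only over the
    pixels the confidence mask marked as reliable."""
    err = np.linalg.norm(student_flow - teacher_flow, axis=-1)
    return float((np.sqrt(err ** 2 + eps) * mask).sum() / max(mask.sum(), 1))
```

In the mutual-distillation loop described above, the masked teacher predictions would serve as pseudo labels on augmented inputs for the student, and the improved student would later supply targets back to the teacher.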

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Optical Flow Estimation | Sintel Clean (unsupervised) | Average End-Point Error | 4.16 | MDFlow |
| Optical Flow Estimation | Sintel Clean (unsupervised) | Average End-Point Error | 4.73 | MDFlow-Fast |
| Optical Flow Estimation | Sintel Final (unsupervised) | Average End-Point Error | 5.46 | MDFlow |
| Optical Flow Estimation | Sintel Final (unsupervised) | Average End-Point Error | 5.99 | MDFlow-Fast |
| Optical Flow Estimation | KITTI 2015 (unsupervised) | Fl-all | 8.91 | MDFlow |
| Optical Flow Estimation | KITTI 2015 (unsupervised) | Fl-all | 11.43 | MDFlow-Fast |
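The table reports Average End-Point Error (AEPE) on Sintel and Fl-all on KITTI 2015. A minimal sketch of both metrics under their standard definitions (AEPE: mean per-pixel Euclidean distance between predicted and ground-truth flow; Fl-all: percentage of pixels whose error exceeds both 3 px and 5% of the ground-truth magnitude):

```python
import numpy as np

def average_epe(pred, gt):
    """Average End-Point Error: mean Euclidean distance between
    predicted and ground-truth flow vectors over all pixels."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def fl_all(pred, gt):
    """KITTI Fl-all: percentage of outlier pixels, where a pixel is an
    outlier if its end-point error is > 3 px AND > 5% of the
    ground-truth flow magnitude."""
    epe = np.linalg.norm(pred - gt, axis=-1)
    mag = np.linalg.norm(gt, axis=-1)
    outlier = (epe > 3.0) & (epe > 0.05 * mag)
    return float(100.0 * outlier.mean())
```

For example, a prediction offset from the ground truth by (3, 4) px at every pixel has an AEPE of exactly 5.0.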

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Data Augmentation in Time Series Forecasting through Inverted Framework (2025-07-15)