Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Feature-domain Adaptive Contrastive Distillation for Efficient Single Image Super-Resolution

HyeonCheol Moon, Jinwoo Jeong, Sungjei Kim

2022-11-29 · Super-Resolution · Image Super-Resolution · Knowledge Distillation
Paper · PDF

Abstract

CNN-based single image super-resolution (SISR) networks have recently grown to require numerous parameters and high computational cost to achieve better performance, which limits their applicability to resource-constrained devices such as mobile phones. Knowledge Distillation (KD), which transfers a teacher's useful knowledge to a student, is being studied as one way to make such networks efficient. More recently, KD for SISR has used Feature Distillation (FD), which minimizes the Euclidean distance between teacher and student feature maps, but this does not sufficiently consider how to deliver the teacher's knowledge effectively and meaningfully so as to improve student performance under a given network capacity constraint. In this paper, we propose a feature-domain adaptive contrastive distillation (FACD) method for efficiently training lightweight student SISR networks. We show the limitations of existing FD methods based on Euclidean distance loss, and propose a feature-domain contrastive loss that lets the student network learn richer information from the teacher's representation in the feature domain. In addition, we propose an adaptive distillation scheme that selectively applies distillation depending on the conditions of the training patches. Experimental results show that student EDSR and RCAN networks trained with the proposed FACD scheme improve not only PSNR across all benchmark datasets and scales, but also subjective image quality, compared to conventional FD approaches.
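The abstract's two main ideas, a contrastive loss between teacher and student feature maps and an adaptive switch that applies distillation only to selected training patches, can be sketched as follows. This is a minimal PyTorch illustration written from the abstract alone, not the authors' implementation: the spatial pooling, the InfoNCE-style formulation with in-batch negatives, the temperature tau, and the per-patch mask are all assumptions for illustration.

```python
# Hypothetical sketch of a feature-domain contrastive distillation loss,
# inspired by the abstract above. NOT the authors' released code.
import torch
import torch.nn.functional as F


def feature_contrastive_distill_loss(student_feat, teacher_feat, mask=None, tau=0.1):
    """InfoNCE-style loss between student and teacher feature maps.

    student_feat, teacher_feat: (B, C, H, W) feature maps.
    mask: optional (B,) float tensor in {0, 1}; 1 keeps the distillation term
          for that patch, 0 drops it (a stand-in for "adaptive distillation").
    """
    b = student_feat.size(0)
    # Pool spatial dimensions to one embedding per patch and L2-normalize.
    s = F.normalize(student_feat.mean(dim=(2, 3)), dim=1)  # (B, C)
    t = F.normalize(teacher_feat.mean(dim=(2, 3)), dim=1)  # (B, C)

    # Similarity of each student embedding to every teacher embedding in the batch;
    # the teacher embedding of the same patch is the positive, the rest are negatives.
    logits = s @ t.t() / tau                                # (B, B)
    targets = torch.arange(b, device=logits.device)

    per_sample = F.cross_entropy(logits, targets, reduction="none")  # (B,)
    if mask is not None:
        per_sample = per_sample * mask
        return per_sample.sum() / mask.sum().clamp(min=1.0)
    return per_sample.mean()


if __name__ == "__main__":
    s_feat = torch.randn(8, 64, 48, 48)   # student feature map (toy values)
    t_feat = torch.randn(8, 64, 48, 48)   # teacher feature map (toy values)
    keep = (torch.rand(8) > 0.3).float()  # hypothetical per-patch selection
    print(feature_contrastive_distill_loss(s_feat, t_feat, keep).item())
```

In practice such a term would be added to the usual reconstruction loss of the student network with a weighting factor; the criterion used to build the per-patch mask is the part the paper refers to as adaptive distillation and is not reproduced here.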

Results

| Task                   | Dataset                 | Metric | Value  | Model |
|------------------------|-------------------------|--------|--------|-------|
| Super-Resolution       | Set5 - 2x upscaling     | PSNR   | 38.242 | FACD  |
| Super-Resolution       | Set5 - 3x upscaling     | PSNR   | 34.729 | FACD  |
| Super-Resolution       | Urban100 - 2x upscaling | PSNR   | 32.878 | FACD  |
| Super-Resolution       | Urban100 - 3x upscaling | PSNR   | 28.818 | FACD  |
| Super-Resolution       | Urban100 - 4x upscaling | PSNR   | 26.606 | FACD  |
| Image Super-Resolution | Set5 - 2x upscaling     | PSNR   | 38.242 | FACD  |
| Image Super-Resolution | Set5 - 3x upscaling     | PSNR   | 34.729 | FACD  |
| Image Super-Resolution | Urban100 - 2x upscaling | PSNR   | 32.878 | FACD  |
| Image Super-Resolution | Urban100 - 3x upscaling | PSNR   | 28.818 | FACD  |
| Image Super-Resolution | Urban100 - 4x upscaling | PSNR   | 26.606 | FACD  |
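For reference, the PSNR values in the table follow the standard definition below. This is a generic sketch assuming images scaled to [0, 1]; the paper's exact evaluation protocol (RGB vs. luminance channel, border cropping) is not specified on this page.

```python
# Generic PSNR definition, for reference only; not the paper's evaluation script.
import torch


def psnr(sr, hr, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a super-resolved image and its ground truth."""
    mse = torch.mean((sr - hr) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)


if __name__ == "__main__":
    hr = torch.rand(1, 3, 256, 256)                      # toy ground-truth image
    sr = (hr + 0.01 * torch.randn_like(hr)).clamp(0, 1)  # toy reconstruction
    print(f"PSNR: {psnr(sr, hr):.2f} dB")
```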

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
SpectraLift: Physics-Guided Spectral-Inversion Network for Self-Supervised Hyperspectral Image Super-Resolution (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
IM-LUT: Interpolation Mixing Look-Up Tables for Image Super-Resolution (2025-07-14)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
PanoDiff-SR: Synthesizing Dental Panoramic Radiographs using Diffusion and Super-resolution (2025-07-12)