Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration

Gang Wu, Junjun Jiang, Kui Jiang, Xianming Liu

2023-09-12 · Super-Resolution · Rain Removal · Image Dehazing · Image Super-Resolution · Contrastive Learning · Image Restoration

Paper · PDF · Code · Code (official)

Abstract

Contrastive learning has emerged as a prevailing paradigm for high-level vision tasks and, by introducing proper negative samples, has also been exploited for low-level vision tasks to obtain a compact optimization space that accounts for their ill-posed nature. However, existing methods rely on manually predefined, task-oriented negatives, which often exhibit pronounced task-specific biases. To address this challenge, our paper introduces an innovative method termed 'learning from history', which dynamically generates negative samples from the target model itself. Our approach, named Model Contrastive Learning for Image Restoration (MCLIR), revives lagged (historical) models as negative models, making it compatible with diverse image restoration tasks. To enable this, we propose the Self-Prior guided Negative loss (SPN). Existing models are significantly enhanced when retrained with the proposed model contrastive paradigm, and the results show consistent improvements in image restoration across various tasks and architectures. For example, models retrained with SPN outperform the original FFANet and DehazeFormer by 3.41 dB and 0.57 dB on the RESIDE indoor dataset for image dehazing. Similarly, they achieve notable improvements of 0.47 dB on SPA-Data over IDT for image deraining and 0.12 dB on Manga109 for 4x super-resolution over lightweight SwinIR, respectively. Code and retrained models are available at https://github.com/Aitical/MCLIR.
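The core idea from the abstract — using a frozen, lagged copy of the model itself to produce negative samples, then training with a loss that pulls the restoration toward the ground truth while pushing it away from the historical model's output — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class name `SelfPriorNegativeLoss`, the ratio-based loss form, and the toy one-layer "restoration network" are all assumptions; see the official repository for the actual SPN definition.

```python
import copy
import torch
import torch.nn as nn

class SelfPriorNegativeLoss(nn.Module):
    """Sketch of a self-prior guided negative loss: the negative sample is
    the output of a frozen historical snapshot of the target model.
    (The exact loss form here is an illustrative assumption.)"""

    def __init__(self, weight: float = 0.1):
        super().__init__()
        self.l1 = nn.L1Loss()
        self.weight = weight

    def forward(self, restored, target, negative):
        pos = self.l1(restored, target)             # pull toward ground truth
        neg = self.l1(restored, negative.detach())  # push away from the lagged model
        # Small-epsilon ratio: the penalty grows when the restoration is
        # close to the (worse) historical output.
        return pos + self.weight * pos / (neg + 1e-7)

# Toy usage: a one-layer "restoration network" and its lagged negative copy.
model = nn.Conv2d(3, 3, 3, padding=1)
negative_model = copy.deepcopy(model).eval()        # frozen snapshot ("history")
for p in negative_model.parameters():
    p.requires_grad_(False)
with torch.no_grad():
    for p in negative_model.parameters():
        p.add_(0.01 * torch.randn_like(p))          # pretend it is an older checkpoint

lq = torch.rand(1, 3, 16, 16)                       # degraded (low-quality) input
gt = torch.rand(1, 3, 16, 16)                       # ground-truth clean image
loss = SelfPriorNegativeLoss()(model(lq), gt, negative_model(lq))
loss.backward()                                     # only the target model gets gradients
```

In a real training loop the negative model would periodically be refreshed from earlier checkpoints (or an exponential moving average) of the target model, which is what makes the scheme task-agnostic: no hand-crafted negatives are needed.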

Results

Task | Dataset | Metric | Value | Model
Super-Resolution | Manga109 (4x upscaling) | PSNR | 31.75 | +SPN
Super-Resolution | Manga109 (4x upscaling) | SSIM | 0.9229 | +SPN
Image Super-Resolution | Manga109 (4x upscaling) | PSNR | 31.75 | +SPN
Image Super-Resolution | Manga109 (4x upscaling) | SSIM | 0.9229 | +SPN

Related Papers

SpectraLift: Physics-Guided Spectral-Inversion Network for Self-Supervised Hyperspectral Image Super-Resolution (2025-07-17)
SemCSE: Semantic Contrastive Sentence Embeddings Using LLM-Generated Summaries For Scientific Abstracts (2025-07-17)
HapticCap: A Multimodal Dataset and Task for Understanding User Experience of Vibration Haptic Signals (2025-07-17)
Overview of the TalentCLEF 2025: Skill and Job Title Intelligence for Human Capital Management (2025-07-17)
SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Unsupervised Part Discovery via Descriptor-Based Masked Image Restoration with Optimized Constraints (2025-07-16)
LLM-Driven Dual-Level Multi-Interest Modeling for Recommendation (2025-07-15)