Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Complexity Experts are Task-Discriminative Learners for Any Image Restoration

Eduard Zamfir, Zongwei Wu, Nancy Mehta, Yuedong Tan, Danda Pani Paudel, Yulun Zhang, Radu Timofte

2024-11-27 · CVPR 2025 · Image Restoration
Paper · PDF

Abstract

Recent advancements in all-in-one image restoration models have revolutionized the ability to address diverse degradations through a unified framework. However, parameters tied to specific tasks often remain inactive for other tasks, making mixture-of-experts (MoE) architectures a natural extension. Despite this, MoEs often show inconsistent behavior, with some experts unexpectedly generalizing across tasks while others struggle within their intended scope. This prevents exploiting the computational benefits of MoEs, since irrelevant experts cannot reliably be bypassed during inference. We attribute this undesired behavior to the uniform and rigid architecture of traditional MoEs. To address this, we introduce "complexity experts": flexible expert blocks with varying computational complexity and receptive fields. A key challenge is assigning tasks to each expert, as degradation complexity is unknown in advance. We therefore route tasks with a simple bias toward lower complexity. To our surprise, this preference effectively drives task-specific allocation, assigning each task to an expert of appropriate complexity. Extensive experiments validate our approach, demonstrating the ability to bypass irrelevant experts during inference while maintaining superior performance. The proposed MoCE-IR model outperforms state-of-the-art methods, affirming its efficiency and practical applicability. The source code will be made publicly available at https://eduardzamfir.github.io/moceir/
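The routing idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the expert costs, bias strength, and gate logits below are hypothetical, and the example only shows how subtracting a cost-proportional penalty from the gate logits biases selection toward cheaper experts.

```python
import numpy as np

# Hypothetical per-expert computational costs (e.g., relative FLOPs),
# ordered from low to high complexity as in the "complexity experts" idea.
expert_costs = np.array([1.0, 2.0, 4.0, 8.0])

def route(gate_logits, expert_costs, bias_strength=0.5):
    """Complexity-biased routing sketch: penalize each expert's gate
    logit in proportion to its cost, then pick the best expert."""
    biased = gate_logits - bias_strength * expert_costs
    # Numerically stable softmax over the biased logits.
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

# A token whose raw gate slightly prefers the most complex expert...
logits = np.array([0.9, 1.0, 1.1, 1.2])
chosen, probs = route(logits, expert_costs)
# ...is routed to the cheapest expert once the complexity bias applies.
print(chosen)  # -> 0
```

The bias only overrides the gate when the preference for a costly expert is weak; a strongly task-relevant expert still wins, which is the behavior the paper reports driving task-specific allocation.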

Results

Task                      | Dataset        | Metric       | Value | Model
--------------------------|----------------|--------------|-------|--------
Image Restoration         | 5-Degradations | Average PSNR | 30.58 | MoCE-IR
Image Restoration         | 5-Degradations | SSIM         | 0.919 | MoCE-IR
Image Restoration         | 3-Degradations | Average PSNR | 32.73 | MoCE-IR
Image Restoration         | 3-Degradations | SSIM         | 0.917 | MoCE-IR
Unified Image Restoration | 5-Degradations | Average PSNR | 30.58 | MoCE-IR
Unified Image Restoration | 5-Degradations | SSIM         | 0.919 | MoCE-IR
Unified Image Restoration | 3-Degradations | Average PSNR | 32.73 | MoCE-IR
Unified Image Restoration | 3-Degradations | SSIM         | 0.917 | MoCE-IR

Related Papers

- MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
- Non-Adaptive Adversarial Face Generation (2025-07-16)
- Unsupervised Part Discovery via Descriptor-Based Masked Image Restoration with Optimized Constraints (2025-07-16)
- Attributes Shape the Embedding Space of Face Recognition Models (2025-07-15)
- COLIBRI Fuzzy Model: Color Linguistic-Based Representation and Interpretation (2025-07-15)
- Ref-Long: Benchmarking the Long-context Referencing Capability of Long-context Language Models (2025-07-13)
- Model Parallelism With Subnetwork Data Parallelism (2025-07-11)
- Bradley-Terry and Multi-Objective Reward Modeling Are Complementary (2025-07-10)