
Progressive Focused Transformer for Single Image Super-Resolution

Wei Long, Xingyu Zhou, Leheng Zhang, Shuhang Gu

2025-03-26 · CVPR 2025 · Super-Resolution · Image Super-Resolution
Paper · PDF · Code (official)

Abstract

Transformer-based methods have achieved remarkable results in image super-resolution because they can capture non-local dependencies in low-quality input images. However, this feature-intensive modeling approach is computationally expensive: when computing attention weights, it calculates similarities between the query and numerous features that are irrelevant to it. These unnecessary similarity calculations not only degrade reconstruction performance but also introduce significant computational overhead. Accurately identifying the features that matter to the current query, while avoiding similarity calculations with irrelevant features, remains an open problem. To address this issue, we propose a novel and effective Progressive Focused Transformer (PFT) that links the otherwise isolated attention maps across the network through Progressive Focused Attention (PFA), concentrating attention on the most important tokens. PFA not only enables the network to capture more of the critical similar features, but also significantly reduces the computational cost of the overall network by filtering out irrelevant features before similarities are calculated. Extensive experiments demonstrate the effectiveness of the proposed method, which achieves state-of-the-art performance on a range of single image super-resolution benchmarks.
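As a rough illustration of the idea described in the abstract (a minimal sketch, not the paper's exact formulation), the PyTorch code below keeps, for each query, only the top-k keys ranked by a relevance score carried over from the previous layer's attention map, and computes similarities only over that subset. The name `focused_attention`, the `top_k` parameter, and the multiplicative way scores are linked across layers are all illustrative assumptions.

```python
import torch

def focused_attention(q, k, v, prev_scores=None, top_k=64):
    """Sketch of attention that is progressively focused across layers.

    q, k, v:     (B, N, C) query / key / value tokens.
    prev_scores: (B, N, N) relevance accumulated from earlier layers'
                 attention maps, or None in the first layer.
    top_k:       number of keys retained per query; similarities are
                 computed only over this subset.
    """
    B, N, C = q.shape
    scale = C ** -0.5

    if prev_scores is None:
        # First layer: dense attention, since there is no history to focus with.
        attn = ((q @ k.transpose(-2, -1)) * scale).softmax(dim=-1)
        return attn @ v, attn

    # Rank keys by accumulated relevance and keep the top-k per query.
    idx = prev_scores.topk(top_k, dim=-1).indices                  # (B, N, k)
    gather_idx = idx.unsqueeze(-1).expand(B, N, top_k, C)
    k_sel = torch.gather(k.unsqueeze(1).expand(B, N, N, C), 2, gather_idx)
    v_sel = torch.gather(v.unsqueeze(1).expand(B, N, N, C), 2, gather_idx)

    # Similarities (and the softmax) are restricted to the selected keys.
    attn = (q.unsqueeze(2) @ k_sel.transpose(-2, -1)).squeeze(2) * scale
    attn = attn.softmax(dim=-1)                                    # (B, N, k)
    out = (attn.unsqueeze(2) @ v_sel).squeeze(2)                   # (B, N, C)

    # Scatter the sparse map back and link it to the history, so the
    # next layer inherits (and tightens) this layer's focus.
    scores = torch.zeros(B, N, N, device=q.device, dtype=attn.dtype)
    scores.scatter_(-1, idx, attn)
    return out, prev_scores * scores
```

Chained across blocks as `out, scores = focused_attention(q, k, v, scores)`, each layer narrows the candidate set left by the previous one, which is the sense in which the attention maps are no longer isolated.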

Results

Task                    Dataset                  Metric     Value   Model
Super-Resolution        Set14 - 4x upscaling     PSNR (dB)  29.29   PFT
Super-Resolution        Set14 - 4x upscaling     SSIM       0.7978  PFT
Super-Resolution        Manga109 - 4x upscaling  PSNR (dB)  32.63   PFT
Super-Resolution        Manga109 - 4x upscaling  SSIM       0.9306  PFT
Super-Resolution        Urban100 - 4x upscaling  PSNR (dB)  28.20   PFT
Super-Resolution        Urban100 - 4x upscaling  SSIM       0.8412  PFT
Image Super-Resolution  Set14 - 4x upscaling     PSNR (dB)  29.29   PFT
Image Super-Resolution  Set14 - 4x upscaling     SSIM       0.7978  PFT
Image Super-Resolution  Manga109 - 4x upscaling  PSNR (dB)  32.63   PFT
Image Super-Resolution  Manga109 - 4x upscaling  SSIM       0.9306  PFT
Image Super-Resolution  Urban100 - 4x upscaling  PSNR (dB)  28.20   PFT
Image Super-Resolution  Urban100 - 4x upscaling  SSIM       0.8412  PFT
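For context on how numbers like these are conventionally produced: PSNR and SSIM on Set14, Urban100, and Manga109 are usually computed on the Y (luma) channel after cropping a border equal to the upscaling factor. The sketch below shows that protocol for PSNR; the helper names and the BT.601 conversion constants are common conventions, not taken from this paper, and individual papers may differ in the exact crop or color handling.

```python
import numpy as np

def rgb_to_y(img):
    # BT.601 luma from 8-bit RGB (channel values in [0, 255]).
    return 16.0 + (65.481 * img[..., 0]
                   + 128.553 * img[..., 1]
                   + 24.966 * img[..., 2]) / 255.0

def psnr_y(sr, hr, scale=4):
    """PSNR (dB) on the Y channel after cropping a `scale`-pixel border,
    the usual protocol behind 4x benchmark numbers like those above."""
    sr_y = rgb_to_y(sr.astype(np.float64))[scale:-scale, scale:-scale]
    hr_y = rgb_to_y(hr.astype(np.float64))[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```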

Related Papers

SpectraLift: Physics-Guided Spectral-Inversion Network for Self-Supervised Hyperspectral Image Super-Resolution (2025-07-17)
IM-LUT: Interpolation Mixing Look-Up Tables for Image Super-Resolution (2025-07-14)
PanoDiff-SR: Synthesizing Dental Panoramic Radiographs using Diffusion and Super-resolution (2025-07-12)
HNOSeg-XS: Extremely Small Hartley Neural Operator for Efficient and Resolution-Robust 3D Image Segmentation (2025-07-10)
4KAgent: Agentic Any Image to 4K Super-Resolution (2025-07-09)
EAMamba: Efficient All-Around Vision State Space Model for Image Restoration (2025-06-27)
Leveraging Vision-Language Models to Select Trustworthy Super-Resolution Samples Generated by Diffusion Models (2025-06-25)
Unsupervised Image Super-Resolution Reconstruction Based on Real-World Degradation Patterns (2025-06-20)