Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Progressive Semantic-Aware Style Transformation for Blind Face Restoration

Chaofeng Chen, Xiaoming Li, Lingbo Yang, Xianhui Lin, Lei Zhang, Kwan-Yee K. Wong

2020-09-18 · CVPR 2021 · Semantic Parsing · Face Parsing · Style Transfer · Blind Face Restoration
Paper · PDF · Code (official)

Abstract

Face restoration is important in face image processing and has been widely studied in recent years. However, previous works often fail to generate plausible high-quality (HQ) results for real-world low-quality (LQ) face images. In this paper, we propose a new progressive semantic-aware style transformation framework, named PSFR-GAN, for face restoration. Specifically, instead of using an encoder-decoder framework as previous methods do, we formulate the restoration of LQ face images as a multi-scale progressive restoration procedure through semantic-aware style transformation. Given an LQ face image and its corresponding parsing map, we first generate a multi-scale pyramid of the inputs, and then progressively modulate features at different scales from coarse to fine in a semantic-aware style-transfer manner. Compared with previous networks, the proposed PSFR-GAN makes full use of the semantic (parsing maps) and pixel (LQ images) space information from different scales of input pairs. In addition, we introduce a semantic-aware style loss which calculates the feature style loss for each semantic region individually to improve the details of face textures. Finally, we pretrain a face parsing network which can generate decent parsing maps from real-world LQ face images. Experimental results show that our model trained with synthetic data can not only produce more realistic high-resolution results for synthetic LQ inputs but also generalize better to natural LQ face images compared with state-of-the-art methods. Code is available at https://github.com/chaofengc/PSFRGAN.
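The semantic-aware style loss described above can be illustrated with a minimal sketch: a style (Gram-matrix) loss is computed separately inside each semantic region given by the parsing map, then averaged over the regions present. This is a simplified NumPy illustration of the idea, not the paper's implementation; the function names and the mean-squared Gram distance are assumptions for clarity.

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, N) features for one region; the Gram matrix captures
    # channel-wise correlations, commonly used as a "style" statistic.
    c, n = feat.shape
    return feat @ feat.T / (c * n)

def semantic_style_loss(feat_restored, feat_target, parsing, num_classes):
    """Hypothetical sketch of a semantic-aware style loss.

    feat_restored, feat_target: (C, H, W) feature maps for the restored
                                and ground-truth images
    parsing:                    (H, W) integer parsing map with labels
                                in [0, num_classes)
    """
    c = feat_restored.shape[0]
    fr = feat_restored.reshape(c, -1)
    ft = feat_target.reshape(c, -1)
    labels = parsing.reshape(-1)
    loss, regions = 0.0, 0
    for k in range(num_classes):
        mask = labels == k
        if not mask.any():
            continue  # skip semantic classes absent from this image
        # Gram distance restricted to the pixels of semantic region k
        g_r = gram_matrix(fr[:, mask])
        g_t = gram_matrix(ft[:, mask])
        loss += np.mean((g_r - g_t) ** 2)
        regions += 1
    return loss / max(regions, 1)
```

Computing the Gram statistics per region rather than over the whole image keeps texture statistics of, say, hair from being averaged with those of skin, which is the motivation the abstract gives for improving facial texture detail.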

Results

Task                   | Dataset     | Metric | Value  | Model
Blind Face Restoration | CelebA-Test | Deg.   | 39.69  | PSFRGAN
Blind Face Restoration | CelebA-Test | FID    | 47.59  | PSFRGAN
Blind Face Restoration | CelebA-Test | LPIPS  | 42.4   | PSFRGAN
Blind Face Restoration | CelebA-Test | NIQE   | 5.123  | PSFRGAN
Blind Face Restoration | CelebA-Test | PSNR   | 24.71  | PSFRGAN
Blind Face Restoration | CelebA-Test | SSIM   | 0.6557 | PSFRGAN

Related Papers

Transferring Styles for Reduced Texture Bias and Improved Robustness in Semantic Segmentation Networks (2025-07-14)
AnyI2V: Animating Any Conditional Image with Motion Control (2025-07-03)
Hita: Holistic Tokenizer for Autoregressive Image Generation (2025-07-03)
Where, What, Why: Towards Explainable Driver Attention Prediction (2025-06-29)
SA-LUT: Spatial Adaptive 4D Look-Up Table for Photorealistic Style Transfer (2025-06-16)
Fine-Grained Control over Music Generation with Activation Steering (2025-06-11)
Training-Free Identity Preservation in Stylized Image Generation Using Diffusion Models (2025-06-07)
Towards Better Disentanglement in Non-Autoregressive Zero-Shot Expressive Voice Conversion (2025-06-04)