Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Revision Network

Computer Vision · Introduced 2000 · 4 papers
Source Paper

Description

Revision Network is a style transfer module that revises a rough stylized image by generating a residual details image $r_{cs}$; the final stylized image is obtained by combining $r_{cs}$ with the rough stylized image $\bar{x}_{cs}$. This procedure ensures that the distribution of global style patterns in $\bar{x}_{cs}$ is properly preserved, while learning to revise local style patterns via the residual details image is an easier task for the Revision Network.
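The composition step above amounts to adding the predicted residual to the rough stylized image. A minimal sketch (the helper name and the clipping to [0, 1] are illustrative assumptions, not from the paper):

```python
import numpy as np

def compose_stylized(x_bar_cs: np.ndarray, r_cs: np.ndarray) -> np.ndarray:
    """Combine the rough stylized image x_bar_cs with the residual
    details image r_cs predicted by the Revision Network.
    Both arrays share shape (C, H, W); values are clipped to [0, 1]."""
    return np.clip(x_bar_cs + r_cs, 0.0, 1.0)

# Toy example: the residual only perturbs a local detail,
# leaving the global style of the rough image untouched.
rough = np.full((3, 4, 4), 0.5)
residual = np.zeros((3, 4, 4))
residual[:, 1, 1] = 0.2  # a single local edit
final = compose_stylized(rough, residual)
```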

As shown in the Figure, the Revision Network is a simple yet effective encoder-decoder architecture with only one down-sampling and one up-sampling layer. In addition, a patch discriminator helps the Revision Network capture fine patch textures in an adversarial learning setting. The patch discriminator $D$ follows SinGAN: it has 5 convolution layers and 32 hidden channels. A relatively shallow $D$ is chosen to (1) avoid overfitting, since only one style image is available, and (2) limit the receptive field so that $D$ can only capture local patterns.
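The two components can be sketched in PyTorch. The layer counts follow the description above (one down-/up-sampling stage in the encoder-decoder; a 5-layer, 32-channel patch discriminator in the style of SinGAN), but the kernel sizes, hidden width of the revision encoder, and activation choices are assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn

class RevisionNetwork(nn.Module):
    """Shallow encoder-decoder: one down-sampling and one up-sampling layer.
    Predicts the residual details image r_cs and adds it to the rough input."""
    def __init__(self, channels: int = 3, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, stride=2, padding=1),  # down-sample
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1),  # up-sample
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),  # residual r_cs
        )

    def forward(self, x_bar_cs: torch.Tensor) -> torch.Tensor:
        r_cs = self.decoder(self.encoder(x_bar_cs))
        return x_bar_cs + r_cs  # final stylized image

class PatchDiscriminator(nn.Module):
    """SinGAN-style patch discriminator: 5 conv layers, 32 hidden channels.
    Kept shallow so its receptive field only covers local patterns."""
    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        layers = [nn.Conv2d(channels, hidden, 3, padding=1), nn.LeakyReLU(0.2)]
        for _ in range(3):
            layers += [nn.Conv2d(hidden, hidden, 3, padding=1), nn.LeakyReLU(0.2)]
        layers += [nn.Conv2d(hidden, 1, 3, padding=1)]  # per-patch real/fake map
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```

Because the discriminator outputs a spatial map rather than a single scalar, each location scores only the local patch inside its receptive field, which is what keeps it focused on local style patterns.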

Papers Using This Method

- FSC: Few-point Shape Completion (2024-03-12)
- Attribute Localization and Revision Network for Zero-Shot Learning (2023-10-11)
- Arbitrary Style Transfer with Structure Enhancement by Combining the Global and Local Loss (2022-07-23)
- Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer (2021-04-12)