Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Conditional Instance Normalization

General · Introduced 2016 · 3 papers
Source Paper

Description

Conditional Instance Normalization is a normalization technique in which all convolutional weights of a style transfer network are shared across many styles. The goal of the procedure is to transform a layer's activations $x$ into a normalized activation $z$ specific to painting style $s$. Building off instance normalization, the $\gamma$ and $\beta$ parameters are augmented to be $N \times C$ matrices, where $N$ is the number of styles being modeled and $C$ is the number of output feature maps. Conditioning on a style is achieved as follows:

$$z = \gamma_{s}\left(\frac{x - \mu}{\sigma}\right) + \beta_{s}$$

where $\mu$ and $\sigma$ are $x$'s mean and standard deviation taken across spatial axes, and $\gamma_{s}$ and $\beta_{s}$ are obtained by selecting the row corresponding to $s$ in the $\gamma$ and $\beta$ matrices. One added benefit of this approach is that one can stylize a single image into $N$ painting styles with a single feed-forward pass of the network with a batch size of $N$.
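The operation above can be sketched in a few lines of NumPy. This is a minimal illustration, not the reference implementation from the source paper; the function name, shapes, and the `eps` stabilizer are assumptions for the sketch. It also demonstrates the multi-style trick: tiling one image $N$ times and passing style indices $0 \ldots N-1$ stylizes it into all $N$ styles in one pass.

```python
import numpy as np

def conditional_instance_norm(x, gamma, beta, style_idx, eps=1e-5):
    """Conditional Instance Normalization (NumPy sketch, hypothetical API).

    x         : (B, C, H, W) batch of activations
    gamma     : (N, C) per-style scale matrix
    beta      : (N, C) per-style shift matrix
    style_idx : (B,) style index s for each batch element
    eps       : small constant for numerical stability (an assumption,
                not part of the equation in the text)
    """
    # Mean and standard deviation over the spatial axes (H, W),
    # computed per instance and per channel, as in instance norm.
    mu = x.mean(axis=(2, 3), keepdims=True)
    sigma = x.std(axis=(2, 3), keepdims=True)

    # Select the row of gamma/beta corresponding to each element's style
    # and broadcast it over the spatial dimensions.
    g = gamma[style_idx][:, :, None, None]   # (B, C, 1, 1)
    b = beta[style_idx][:, :, None, None]    # (B, C, 1, 1)

    return g * (x - mu) / (sigma + eps) + b

# Stylize a single image into N painting styles with one forward pass:
# tile the image N times and use a batch of N distinct style indices.
rng = np.random.default_rng(0)
N, C, H, W = 4, 8, 16, 16
gamma = rng.normal(size=(N, C))
beta = rng.normal(size=(N, C))
image = rng.normal(size=(1, C, H, W))
batch = np.tile(image, (N, 1, 1, 1))         # (N, C, H, W)
z = conditional_instance_norm(batch, gamma, beta, np.arange(N))
```

After normalization, each channel's spatial mean equals $\beta_{s,c}$ and its spatial standard deviation equals $|\gamma_{s,c}|$, which is exactly the conditioning the $N \times C$ parameter matrices provide.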

Papers Using This Method

- Multi-defect microscopy image restoration under limited data conditions (2019-10-31)
- Exploring the structure of a real-time, arbitrary neural artistic stylization network (2017-05-18)
- A Learned Representation For Artistic Style (2016-10-24)