


SERLU

General · Introduced 2018 · 1 paper
Source Paper

Description

SERLU, or Scaled Exponentially-Regularized Linear Unit, is an activation function. It introduces a bump-shaped response in the region of negative input: the bump has approximately zero response to large negative inputs while statistically pushing the output of SERLU towards zero mean.

$$\text{SERLU}\left(x\right) = \begin{cases} \lambda_{\text{serlu}}\, x & \text{if } x \geq 0 \\ \lambda_{\text{serlu}}\, \alpha_{\text{serlu}}\, x\, e^{x} & \text{if } x < 0 \end{cases}$$

where the two parameters $\lambda_{\text{serlu}} > 0$ and $\alpha_{\text{serlu}} > 0$ remain to be specified.
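As an illustration, here is a minimal NumPy sketch of the piecewise definition above. The default values of 1.0 for `lambda_serlu` and `alpha_serlu` are placeholders, not the values recommended in the source paper, which should be consulted for the actual parameter settings.

```python
import numpy as np

def serlu(x, lambda_serlu=1.0, alpha_serlu=1.0):
    """Scaled Exponentially-Regularized Linear Unit (SERLU).

    Piecewise definition:
        lambda_serlu * x                         for x >= 0
        lambda_serlu * alpha_serlu * x * exp(x)  for x <  0
    """
    x = np.asarray(x, dtype=float)
    return np.where(
        x >= 0,
        lambda_serlu * x,                              # linear branch
        lambda_serlu * alpha_serlu * x * np.exp(x),    # bump-shaped negative branch
    )

# The negative branch peaks in magnitude at x = -1 and decays towards zero
# for large negative inputs, giving the bump-shaped response described above.
print(serlu(np.array([-5.0, -1.0, 0.0, 2.0])))
```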

Papers Using This Method

Effectiveness of Scaled Exponentially-Regularized Linear Units (SERLUs) · 2018-07-26