Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


RReLU

Randomized Leaky Rectified Linear Units

General · Introduced 2015 · 3 papers
Source Paper

Description

Randomized Leaky Rectified Linear Units, or RReLU, is an activation function that randomly samples the slope of the negative part of the input. It was first proposed and used in the Kaggle NDSB Competition. During training, $a_{ji}$ is a random number sampled from a uniform distribution $U(l, u)$. Formally:

$$y_{ji} = \begin{cases} x_{ji} & \text{if } x_{ji} \geq 0 \\ a_{ji} x_{ji} & \text{if } x_{ji} < 0 \end{cases}$$

where

$$a_{ji} \sim U(l, u), \quad l < u \text{ and } l, u \in [0, 1)$$

In the test phase, we take the average of all the $a_{ji}$ used in training, similar to dropout, and thus set $a_{ji}$ to $\frac{l+u}{2}$ to get a deterministic result. As suggested by the NDSB competition winner, $a_{ji}$ is sampled from $U(3, 8)$; in that convention the negative input is divided by $a_{ji}$ rather than multiplied, so the effective slope $1/a_{ji}$ lies in $(1/8, 1/3)$.

At test time, we use:

$$y_{ji} = \frac{x_{ji}}{\frac{l+u}{2}}$$
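The train/test behavior above can be sketched in a few lines of NumPy. This is a minimal illustration (the `rrelu` helper is hypothetical, not the paper's code), using the multiplicative convention $y_{ji} = a_{ji} x_{ji}$ with slopes in $[0, 1)$, and bounds $l = 1/8$, $u = 1/3$ as an example:

```python
import numpy as np

def rrelu(x, lower=1/8, upper=1/3, training=True, rng=None):
    """Randomized Leaky ReLU (illustrative sketch).

    Training: each negative element is scaled by its own slope
    a ~ U(lower, upper), sampled independently per element.
    Test: use the fixed average slope (lower + upper) / 2,
    which makes the output deterministic.
    """
    if training:
        if rng is None:
            rng = np.random.default_rng()
        a = rng.uniform(lower, upper, size=np.shape(x))
    else:
        a = (lower + upper) / 2.0
    # Positive inputs pass through unchanged; negatives are scaled by a.
    return np.where(x >= 0, x, a * x)
```

PyTorch ships a built-in version of this activation as `torch.nn.RReLU`, with default bounds `lower=1/8` and `upper=1/3`.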

Papers Using This Method

- Non-Linearities Improve OrigiNet based on Active Imaging for Micro Expression Recognition (2020-05-16)
- Towards Single-phase Single-stage Detection of Pulmonary Nodules in Chest CT Imaging (2018-07-16)
- Empirical Evaluation of Rectified Activations in Convolutional Network (2015-05-05)