Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


SMU: smooth activation function for deep networks using smoothing maximum technique

Koushik Biswas, Sandeep Kumar, Shilpak Banerjee, Ashish Kumar Pandey

2021-11-08 · Deep Learning
Paper · PDF · Code

Abstract

Deep learning researchers have a keen interest in proposing new activation functions that can boost network performance. A good choice of activation function can significantly improve network performance. Handcrafted activations are the most common choice in neural network models, and ReLU is the most popular in the deep learning community due to its simplicity, though it has some serious drawbacks. In this paper, we propose a novel activation function based on a smooth approximation of known activation functions such as Leaky ReLU, which we call the Smooth Maximum Unit (SMU). Replacing ReLU with SMU, we obtained a 6.22% improvement on the CIFAR100 dataset with the ShuffleNet V2 model.
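The smoothing-maximum idea behind SMU can be sketched briefly. Leaky ReLU is max(x, αx); using the identity max(a, b) = ((a + b) + |a − b|) / 2 and the smooth approximation |z| ≈ z·erf(μz), one gets a differentiable surrogate. The sketch below assumes this commonly cited formulation, with α = 0.25 and μ = 1.0 as illustrative defaults (the paper's exact parameterization and trainable settings may differ):

```python
import math

def smu(x: float, alpha: float = 0.25, mu: float = 1.0) -> float:
    """Smooth Maximum Unit: a smooth approximation of Leaky ReLU.

    Smooths max(x, alpha*x) via |z| ~= z * erf(mu * z), giving
    ((1 + alpha) * x + (1 - alpha) * x * erf(mu * (1 - alpha) * x)) / 2.
    alpha and mu defaults here are illustrative, not the paper's settings.
    """
    return ((1 + alpha) * x
            + (1 - alpha) * x * math.erf(mu * (1 - alpha) * x)) / 2
```

As the smoothing parameter μ grows, erf saturates to ±1 and the function recovers Leaky ReLU exactly: for μ = 1000, smu(2.0) ≈ 2.0 and smu(-2.0) ≈ -0.5 (i.e., αx with α = 0.25). Small μ yields a softer, fully differentiable transition around zero.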

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- A Survey of Deep Learning for Geometry Problem Solving (2025-07-16)
- Uncertainty Quantification for Motor Imagery BCI -- Machine Learning vs. Deep Learning (2025-07-10)
- Chat-Ghosting: A Comparative Study of Methods for Auto-Completion in Dialog Systems (2025-07-08)
- Deep Learning Optimization of Two-State Pinching Antennas Systems (2025-07-08)
- AXLearn: Modular Large Model Training on Heterogeneous Infrastructure (2025-07-07)
- Determination Of Structural Cracks Using Deep Learning Frameworks (2025-07-03)
- Generalized Adaptive Transfer Network: Enhancing Transfer Learning in Reinforcement Learning Across Domains (2025-07-02)