SMU: smooth activation function for deep networks using smoothing maximum technique

K Biswas, S Kumar, S Banerjee, AK Pandey - arXiv preprint arXiv …, 2021 - arxiv.org
Deep learning researchers have a keen interest in proposing two novel activation
functions which can boost network performance. A good choice of activation function can …

Smooth maximum unit: Smooth activation function for deep networks using smoothing maximum technique

K Biswas, S Kumar, S Banerjee… - Proceedings of the …, 2022 - openaccess.thecvf.com
Deep learning researchers have a keen interest in proposing novel activation functions
that can boost neural network performance. A good choice of activation function can have a …
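
Both SMU entries above describe the same smoothing-maximum construction. A minimal sketch, assuming the commonly cited SMU form that smooths max(x, αx) by approximating |x| with x·erf(μx); the α and μ defaults here are placeholders, not the paper's trained settings:

```python
import math

def smu(x, alpha=0.25, mu=1.0):
    # Assumed SMU form: a smooth version of max(x, alpha*x), obtained by
    # replacing |x| with x*erf(mu*x) in the identity (a + b + |a - b|) / 2
    return ((1 + alpha) * x + (1 - alpha) * x * math.erf(mu * (1 - alpha) * x)) / 2

# As mu grows large, smu approaches Leaky ReLU with negative-side slope alpha.
print(smu(-2.0), smu(0.0), smu(2.0))
```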

Erfact and pserf: Non-monotonic smooth trainable activation functions

K Biswas, S Kumar, S Banerjee… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
An activation function is a crucial component of a neural network that introduces non-
linearity in the network. The state-of-the-art performance of a neural network also depends …
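
A hedged sketch of the two trainable activations named in the title, assuming the forms ErfAct(x) = x·erf(α·e^(βx)) and Pserf(x) = x·erf(α·softplus(βx)); the parameter defaults below are illustrative assumptions, since α and β are learned during training in the paper:

```python
import math

def erfact(x, alpha=0.75, beta=0.75):
    # Assumed ErfAct form: x * erf(alpha * exp(beta * x))
    return x * math.erf(alpha * math.exp(beta * x))

def pserf(x, alpha=1.25, beta=1.0):
    # Assumed Pserf form: x * erf(alpha * softplus(beta * x))
    return x * math.erf(alpha * math.log1p(math.exp(beta * x)))

# Both are smooth and non-monotonic: they dip below zero for negative inputs
# and approach the identity for large positive inputs.
print(erfact(-2.0), erfact(2.0), pserf(-2.0), pserf(2.0))
```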

Natural-logarithm-rectified activation function in convolutional neural networks

Y Liu, J Zhang, C Gao, J Qu, L Ji - 2019 IEEE 5th International …, 2019 - ieeexplore.ieee.org
Activation functions play a key role in providing remarkable performance in deep neural
networks, and the rectified linear unit (ReLU) is one of the most widely used activation …
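
A short sketch of the natural-logarithm-rectified idea, assuming the commonly quoted NLReLU form of a logarithm applied on top of ReLU; β = 1.0 is an illustrative default:

```python
import math

def nlrelu(x, beta=1.0):
    # Assumed NLReLU form: ln(beta * max(0, x) + 1), a log-compressed ReLU
    return math.log(beta * max(0.0, x) + 1.0)

# The logarithm compresses large positive activations; negatives map to 0 as in ReLU.
print(nlrelu(-3.0), nlrelu(0.5), nlrelu(10.0))
```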

RELU-function and derived function review

Y Bai - SHS Web of Conferences, 2022 - shs-conferences.org
The activation function plays an important role in training and improving performance in
deep neural networks (DNN). The rectified linear unit (ReLU) function provides the necessary …
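
For reference, a minimal sketch of ReLU and its derivative, the pair this review is built around:

```python
def relu(x):
    # Identity for positive inputs, zero otherwise
    return x if x > 0 else 0.0

def relu_grad(x):
    # Derivative of ReLU (taken as 0 at x = 0 by convention)
    return 1.0 if x > 0 else 0.0

print(relu(-2.0), relu(3.0), relu_grad(-2.0), relu_grad(3.0))
```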

An empirical study on generalizations of the ReLU activation function

C Banerjee, T Mukherjee, E Pasiliao Jr - Proceedings of the 2019 ACM …, 2019 - dl.acm.org
Deep Neural Networks have become the tool of choice for Machine Learning practitioners
today. They have been successfully applied for solving a large class of learning problems …
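
A small sketch of the kind of ReLU generalizations such a study compares (Leaky ReLU, ELU, softplus); the specific variants and constants here are illustrative, not the paper's exact experimental set:

```python
import math

def leaky_relu(x, alpha=0.01):
    # Small fixed slope instead of zero for negative inputs
    return x if x > 0 else alpha * x

def elu(x, alpha=1.0):
    # Smooth exponential saturation toward -alpha for negative inputs
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def softplus(x):
    # Smooth approximation of ReLU: log(1 + e^x)
    return math.log1p(math.exp(x))

for f in (leaky_relu, elu, softplus):
    print(f.__name__, f(-2.0), f(2.0))
```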

Flatten-T Swish: a thresholded ReLU-Swish-like activation function for deep learning

HH Chieng, N Wahid, P Ong, SRK Perla - arXiv preprint arXiv:1812.06247, 2018 - arxiv.org
Activation functions are essential for deep learning methods to learn and perform complex
tasks such as image classification. Rectified Linear Unit (ReLU) has been widely used and …
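
A hedged sketch of Flatten-T Swish, assuming the commonly reported form x·sigmoid(x) + T for non-negative inputs and a flat value T below zero; the threshold T = -0.20 is an assumed default:

```python
import math

def flatten_t_swish(x, T=-0.20):
    # Assumed FTS form: Swish (x * sigmoid(x)) shifted by T for x >= 0,
    # and the flat value T for x < 0
    if x >= 0:
        return x / (1.0 + math.exp(-x)) + T
    return T

print(flatten_t_swish(-1.0), flatten_t_swish(0.0), flatten_t_swish(2.0))
```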

Nipuna: A novel optimizer activation function for deep neural networks

G Madhu, S Kautish, KA Alnowibet, HM Zawbaa… - Axioms, 2023 - mdpi.com
In recent years, various deep neural networks with different learning paradigms have been
widely employed in various applications, including medical diagnosis, image analysis, self …

Deep learning with s-shaped rectified linear activation units

X Jin, C Xu, J Feng, Y Wei, J Xiong, S Yan - Proceedings of the AAAI …, 2016 - ojs.aaai.org
Rectified linear activation units are important components for state-of-the-art deep
convolutional networks. In this paper, we propose a novel S-shaped rectified linear activation …
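
A minimal sketch of the S-shaped rectified linear unit described here: three linear pieces joined at two thresholds, with all four parameters learnable in the paper; the values below are illustrative initializations only:

```python
def srelu(x, t_l=-1.0, a_l=0.1, t_r=1.0, a_r=0.5):
    # Assumed SReLU form: identity between the thresholds t_l and t_r,
    # linear pieces with slopes a_l and a_r outside them (all four learnable)
    if x >= t_r:
        return t_r + a_r * (x - t_r)
    if x > t_l:
        return x
    return t_l + a_l * (x - t_l)

# Damped growth in both tails around an identity core gives the S shape.
print(srelu(-3.0), srelu(0.5), srelu(3.0))
```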

Review and comparison of commonly used activation functions for deep neural networks

T Szandała - Bio-inspired neurocomputing, 2021 - Springer
Activation functions are the primary decision-making units of neural networks. Moreover, they
evaluate the output of a network's neural node; thus, they are essential for the performance of …
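
For context, a tiny comparison of the classic activations such a review typically covers (sigmoid, tanh, ReLU), showing the saturation behaviour that separates them:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    return max(0.0, x)

for x in (-4.0, -1.0, 0.0, 1.0, 4.0):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.3f}  tanh={math.tanh(x):+.3f}  relu={relu(x):.1f}")
```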