Soft-Root-Sign Activation Function
- URL: http://arxiv.org/abs/2003.00547v1
- Date: Sun, 1 Mar 2020 18:38:11 GMT
- Title: Soft-Root-Sign Activation Function
- Authors: Yuan Zhou, Dandan Li, Shuwei Huo, and Sun-Yuan Kung
- Abstract summary: "Soft-Root-Sign" (SRS) is smooth, non-monotonic, and bounded.
In contrast to ReLU, SRS can adaptively adjust the output by a pair of independent trainable parameters.
Our SRS matches or exceeds models with ReLU and other state-of-the-art nonlinearities.
- Score: 21.716884634290516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The choice of activation function in deep networks has a significant effect
on the training dynamics and task performance. At present, the most effective
and widely-used activation function is ReLU. However, because of its non-zero
mean, missing negative values, and unbounded output, ReLU is at a potential
disadvantage during optimization. To this end, we introduce a novel activation
function designed to overcome these three challenges. The proposed
nonlinearity, namely "Soft-Root-Sign" (SRS), is smooth, non-monotonic, and
bounded. Notably, the bounded property of SRS distinguishes it from most
state-of-the-art activation functions. In contrast to ReLU, SRS can adaptively
adjust the output through a pair of independent trainable parameters to capture
negative information and provide a zero-mean output, leading not only to better
generalization performance but also to faster learning. It also prevents the
output distribution from being scattered over the non-negative real number
space, making it more compatible with batch normalization (BN) and less
sensitive to initialization. In experiments, we
evaluated SRS on deep networks applied to a variety of tasks, including image
classification, machine translation and generative modelling. Our SRS matches
or exceeds models with ReLU and other state-of-the-art nonlinearities, showing
that the proposed activation function generalizes well and can achieve high
performance across tasks. An ablation study further verified its compatibility
with BN and its self-adaptability under different initializations.
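To make the description above concrete, here is a minimal PyTorch sketch of SRS, assuming the form f(x) = x / (x/alpha + exp(-x/beta)) reported in the paper, with alpha and beta as trainable parameters; the module wrapper and the initial parameter values are illustrative choices, not the authors' reference implementation.
```python
# A minimal PyTorch sketch of SRS (not the authors' reference implementation).
# It assumes the form f(x) = x / (x/alpha + exp(-x/beta)) with a trainable
# (alpha, beta) pair; the initial values below are illustrative placeholders.
import torch
import torch.nn as nn


class SoftRootSign(nn.Module):
    def __init__(self, alpha: float = 2.0, beta: float = 3.0):
        super().__init__()
        # One shared (alpha, beta) pair per layer; per-channel parameters
        # would be another reasonable parameterization.
        self.alpha = nn.Parameter(torch.tensor(alpha))
        self.beta = nn.Parameter(torch.tensor(beta))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Smooth, non-monotonic, and bounded; negative inputs are attenuated
        # rather than zeroed out, so negative information is preserved.
        return x / (x / self.alpha + torch.exp(-x / self.beta))


if __name__ == "__main__":
    srs = SoftRootSign()
    x = torch.linspace(-5.0, 5.0, steps=11)
    print(srs(x))  # finite, bounded outputs for inputs of either sign
```
With these illustrative initial values the denominator stays strictly positive, so the unit is well defined for all inputs; alpha and beta are then learned jointly with the network weights, which, per the abstract, is what provides the adaptive, zero-mean, bounded response.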
Related papers
- ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs [91.31204876440765]
We introduce a general method that defines neuron activation through neuron output magnitudes and a tailored magnitude threshold.
To find the most efficient activation function for sparse computation, we propose a systematic framework.
We conduct thorough experiments on LLMs utilizing different activation functions, including ReLU, SwiGLU, ReGLU, and ReLU$^2$.
arXiv Detail & Related papers (2024-02-06T08:45:51Z)
- Parametric Leaky Tanh: A New Hybrid Activation Function for Deep Learning [0.0]
Activation functions (AFs) are crucial components of deep neural networks (DNNs)
We propose a novel hybrid activation function designed to combine the strengths of both the Tanh and Leaky ReLU activation functions.
PLanh is differentiable at all points and addresses the 'dying ReLU' problem by ensuring a non-zero gradient for negative inputs.
arXiv Detail & Related papers (2023-08-11T08:59:27Z)
- TSSR: A Truncated and Signed Square Root Activation Function for Neural Networks [5.9622541907827875]
We introduce a new activation function called the Truncated and Signed Square Root (TSSR) function.
This function is distinctive because it is odd, nonlinear, monotone and differentiable.
It has the potential to improve the numerical stability of neural networks.
arXiv Detail & Related papers (2023-08-09T09:40:34Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Saturated Non-Monotonic Activation Functions [21.16866749728754]
We present three new activation functions built with our proposed method: SGELU, SSiLU, and SMish. Each combines ReLU's positive portion with the negative portion of GELU, SiLU, and Mish, respectively.
The results of image classification experiments on CIFAR-100 indicate that our proposed activation functions are highly effective and outperform state-of-the-art baselines across multiple deep learning architectures.
arXiv Detail & Related papers (2023-05-12T15:01:06Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Transformers with Learnable Activation Functions [63.98696070245065]
We use the Rational Activation Function (RAF) to learn optimal activation functions from the input data during training.
RAF opens a new research direction for analyzing and interpreting pre-trained models through their learned activation functions.
arXiv Detail & Related papers (2022-08-30T09:47:31Z)
- Growing Cosine Unit: A Novel Oscillatory Activation Function That Can Speedup Training and Reduce Parameters in Convolutional Neural Networks [0.1529342790344802]
Convolutional neural networks have been successful in solving many socially important and economically significant problems.
A key discovery that made training deep networks feasible was the adoption of the Rectified Linear Unit (ReLU) activation function.
The new activation function C(z) = z cos z outperforms Sigmoid, Swish, Mish, and ReLU on a variety of architectures (a minimal sketch of this unit appears after this list).
arXiv Detail & Related papers (2021-08-30T01:07:05Z)
- Style Normalization and Restitution for Domain Generalization and Adaptation [88.86865069583149]
An effective domain generalizable model is expected to learn feature representations that are both generalizable and discriminative.
In this paper, we design a novel Style Normalization and Restitution module (SNR) to ensure both high generalization and discrimination capability of the networks.
arXiv Detail & Related papers (2021-01-03T09:01:39Z)
- Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z)
- Investigating the interaction between gradient-only line searches and different activation functions [0.0]
Gradient-only line searches (GOLS) adaptively determine step sizes along search directions for discontinuous loss functions in neural network training.
We find that GOLS are robust for a range of activation functions, but sensitive to the Rectified Linear Unit (ReLU) activation function in standard feedforward architectures.
arXiv Detail & Related papers (2020-02-23T12:28:27Z)
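As referenced in the Growing Cosine Unit entry above, the sketch below implements the oscillatory unit C(z) = z cos z; only the formula is taken from that abstract, and the module wrapper is an illustrative assumption.
```python
# Minimal sketch of the Growing Cosine Unit C(z) = z * cos(z); only the
# formula comes from the related-paper abstract, the wrapper is illustrative.
import torch
import torch.nn as nn


class GrowingCosineUnit(nn.Module):
    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Oscillatory and unbounded, in contrast to the bounded SRS; the
        # output crosses zero at z = 0 and at odd multiples of pi/2.
        return z * torch.cos(z)


if __name__ == "__main__":
    gcu = GrowingCosineUnit()
    z = torch.linspace(-6.0, 6.0, steps=13)
    print(gcu(z))
```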
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.