An Analysis of State-of-the-art Activation Functions For Supervised Deep
Neural Network
- URL: http://arxiv.org/abs/2104.02523v1
- Date: Mon, 5 Apr 2021 16:50:31 GMT
- Title: An Analysis of State-of-the-art Activation Functions For Supervised Deep
Neural Network
- Authors: Anh Nguyen, Khoa Pham, Dat Ngo, Thanh Ngo, Lam Pham
- Abstract summary: This paper provides an analysis of state-of-the-art activation functions with respect to supervised classification with deep neural networks.
Experiments over two deep learning network architectures integrating these activation functions are conducted.
- Score: 11.363694566368153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper provides an analysis of state-of-the-art activation
functions with respect to supervised classification with deep neural networks.
These activation functions comprise the Rectified Linear Unit (ReLU),
Exponential Linear Unit (ELU), Scaled Exponential Linear Unit (SELU), Gaussian
Error Linear Unit (GELU), and the Inverse Square Root Linear Unit (ISRLU). To
evaluate them, experiments over two deep learning network architectures
integrating these activation functions are conducted. The first model, based
on a Multilayer Perceptron (MLP), is evaluated on the MNIST dataset to
benchmark these activation functions. Meanwhile, the second model, a
VGGish-based architecture, is applied to Acoustic Scene Classification (ASC)
Task 1A in the DCASE 2018 challenge, thereby evaluating whether these
activation functions work well across different datasets as well as different
network architectures.
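For reference, the five activation functions compared in the paper have standard closed-form definitions. The NumPy sketch below is illustrative only: the alpha defaults for ELU and ISRLU, the SELU constants, and the tanh approximation of GELU are the commonly used ones from the original activation papers, not values taken from this paper.

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x)
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # ELU: x for x > 0, alpha * (exp(x) - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def selu(x, lam=1.0507, alpha=1.6733):
    # SELU: scaled ELU with constants chosen for self-normalization
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def gelu(x):
    # GELU (tanh approximation of x * Phi(x))
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def isrlu(x, alpha=1.0):
    # ISRLU: x for x >= 0, x / sqrt(1 + alpha * x^2) otherwise
    return np.where(x >= 0, x, x / np.sqrt(1.0 + alpha * x * x))

if __name__ == "__main__":
    z = np.linspace(-3, 3, 7)
    for f in (relu, elu, selu, gelu, isrlu):
        print(f.__name__, np.round(f(z), 3))
```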
Related papers
- Interpetable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z)
- ENN: A Neural Network with DCT Adaptive Activation Functions [2.2713084727838115]
We present the Expressive Neural Network (ENN), a novel model in which the non-linear activation functions are modeled using the Discrete Cosine Transform (DCT).
This parametrization keeps the number of trainable parameters low, is appropriate for gradient-based schemes, and adapts to different learning tasks.
ENN outperforms state-of-the-art benchmarks, providing an accuracy gap of over 40% in some scenarios.
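As an illustration of the general idea (not the authors' exact formulation), an activation can be parametrized as a truncated cosine expansion with trainable coefficients; the basis size and the input scaling below are assumptions.

```python
import torch
import torch.nn as nn

class DCTActivation(nn.Module):
    """Activation expressed as a truncated cosine (DCT-like) expansion
    with trainable coefficients. Illustrative sketch only."""

    def __init__(self, num_coeffs: int = 8, input_range: float = 4.0):
        super().__init__()
        self.coeffs = nn.Parameter(torch.randn(num_coeffs) * 0.1)
        self.num_coeffs = num_coeffs
        self.input_range = input_range  # inputs mapped from [-range, range] to [0, 1]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Map inputs to [0, 1] so the cosine basis covers the working range.
        t = (x.clamp(-self.input_range, self.input_range) / (2 * self.input_range)) + 0.5
        k = torch.arange(self.num_coeffs, device=x.device, dtype=x.dtype)
        # DCT-II style basis evaluated at t: cos(pi * k * t)
        basis = torch.cos(torch.pi * k * t.unsqueeze(-1))
        return (basis * self.coeffs).sum(dim=-1)

# Example: drop the learnable activation into a small MLP.
mlp = nn.Sequential(nn.Linear(16, 32), DCTActivation(), nn.Linear(32, 10))
out = mlp(torch.randn(4, 16))
```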
arXiv Detail & Related papers (2023-07-02T21:46:30Z)
- ASU-CNN: An Efficient Deep Architecture for Image Classification and Feature Visualizations [0.0]
Activation functions play a decisive role in determining the capacity of Deep Neural Networks.
In this paper, a Convolutional Neural Network model named ASU-CNN is proposed.
The network achieved promising results on both training and testing data for the classification of CIFAR-10.
arXiv Detail & Related papers (2023-05-28T16:52:25Z)
- Evaluating CNN with Oscillatory Activation Function [0.0]
The capability of CNNs to learn high-dimensional complex features from images comes from the non-linearity introduced by the activation function.
This paper explores the performance of the AlexNet CNN architecture on the MNIST and CIFAR10 datasets using an oscillating activation function (GCU) and other commonly used activation functions such as ReLU, PReLU, and Mish.
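For context, the Growing Cosine Unit is usually defined as f(z) = z * cos(z); the short sketch below illustrates it next to Mish and is an assumption about the standard form, not a reproduction of that paper's code.

```python
import numpy as np

def gcu(z):
    # Growing Cosine Unit: an oscillatory activation, f(z) = z * cos(z)
    return z * np.cos(z)

def mish(z):
    # Mish: z * tanh(softplus(z))
    return z * np.tanh(np.log1p(np.exp(z)))

z = np.linspace(-4, 4, 9)
print("GCU: ", np.round(gcu(z), 3))
print("Mish:", np.round(mish(z), 3))
```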
arXiv Detail & Related papers (2022-11-13T11:17:13Z)
- Transformers with Learnable Activation Functions [63.98696070245065]
We use the Rational Activation Function (RAF) to learn optimal activation functions during training according to the input data.
RAF opens a new research direction for analyzing and interpreting pre-trained models according to the learned activation functions.
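A rational activation is a ratio of two polynomials with trainable coefficients. The sketch below follows the common "safe" formulation in which the denominator cannot vanish; the polynomial degrees and initialization are assumptions, not that paper's exact configuration.

```python
import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable rational activation R(x) = P(x) / Q(x), a sketch of the idea
    behind Rational Activation Functions (RAF). Degrees and init are assumed."""

    def __init__(self, num_degree: int = 3, den_degree: int = 2):
        super().__init__()
        # Numerator coefficients a_0..a_m and denominator coefficients b_1..b_n.
        self.a = nn.Parameter(torch.randn(num_degree + 1) * 0.1)
        self.b = nn.Parameter(torch.randn(den_degree) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xs = x.unsqueeze(-1)
        num_pows = xs ** torch.arange(self.a.numel(), device=x.device, dtype=x.dtype)
        den_pows = xs ** torch.arange(1, self.b.numel() + 1, device=x.device, dtype=x.dtype)
        p = (self.a * num_pows).sum(dim=-1)
        # "Safe" denominator: 1 + |sum_j b_j x^j| keeps Q(x) strictly positive.
        q = 1.0 + (self.b * den_pows).sum(dim=-1).abs()
        return p / q

# Example: a Transformer-style feed-forward block with a learnable activation.
ffn = nn.Sequential(nn.Linear(64, 256), RationalActivation(), nn.Linear(256, 64))
print(ffn(torch.randn(2, 10, 64)).shape)
```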
arXiv Detail & Related papers (2022-08-30T09:47:31Z)
- Look-Ahead Acquisition Functions for Bernoulli Level Set Estimation [9.764638397706717]
We derive analytic expressions for look-ahead posteriors of sublevel set membership.
We show how these lead to analytic expressions for a class of look-ahead LSE acquisition functions.
arXiv Detail & Related papers (2022-03-18T05:25:35Z)
- Graph-adaptive Rectified Linear Unit for Graph Neural Networks [64.92221119723048]
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose Graph-adaptive Rectified Linear Unit (GReLU) which is a new parametric activation function incorporating the neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
arXiv Detail & Related papers (2022-02-13T10:54:59Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Learning specialized activation functions with the Piecewise Linear Unit [7.820667552233989]
We propose a new activation function called the Piecewise Linear Unit (PWLU), which incorporates a carefully designed formulation and learning method.
It can learn specialized activation functions and achieves SOTA performance on large-scale datasets like ImageNet and COCO.
PWLU is also easy to implement and efficient at inference, which can be widely applied in real-world applications.
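The general idea of a learnable piecewise linear activation can be sketched as linear interpolation between trainable values at fixed breakpoints; the number of segments, input range, and boundary handling below are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class PiecewiseLinearUnit(nn.Module):
    """Learnable piecewise-linear activation: linear interpolation between
    trainable values at evenly spaced breakpoints. Illustrative sketch only."""

    def __init__(self, num_segments: int = 16, bound: float = 4.0):
        super().__init__()
        self.bound = bound
        self.num_segments = num_segments
        # Trainable value at each breakpoint, initialized to match ReLU.
        knots = torch.linspace(-bound, bound, num_segments + 1)
        self.values = nn.Parameter(torch.relu(knots))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Position of x on the breakpoint grid, clamped to the covered range.
        pos = (x.clamp(-self.bound, self.bound) + self.bound) / (2 * self.bound)
        pos = pos * self.num_segments
        idx = pos.floor().long().clamp(max=self.num_segments - 1)
        frac = pos - idx.to(x.dtype)
        left = self.values[idx]
        right = self.values[idx + 1]
        return left + frac * (right - left)

# Example usage inside a small network.
net = nn.Sequential(nn.Linear(8, 32), PiecewiseLinearUnit(), nn.Linear(32, 2))
print(net(torch.randn(5, 8)).shape)
```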
arXiv Detail & Related papers (2021-04-08T11:29:11Z)
- Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations from the following: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
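A minimal sketch of stochastically selecting activation layers for an ensemble might look as follows; the candidate pool (a subset of the activations listed above that ship with PyTorch) and the per-layer random choice are assumptions about the general approach, not the authors' exact protocol.

```python
import random
import torch.nn as nn

# Hypothetical pool of candidate activations (a subset of those listed above).
ACTIVATION_POOL = [nn.ReLU, nn.LeakyReLU, nn.PReLU, nn.ELU, nn.SiLU, nn.Mish]

def make_random_activation_cnn(seed: int) -> nn.Sequential:
    """Build a small CNN whose activation at each layer is drawn at random;
    training several such networks yields an ensemble with diverse activations."""
    rng = random.Random(seed)
    layers = []
    channels = [3, 16, 32, 64]
    for c_in, c_out in zip(channels[:-1], channels[1:]):
        layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                   rng.choice(ACTIVATION_POOL)(),
                   nn.MaxPool2d(2)]
    layers += [nn.Flatten(), nn.LazyLinear(2)]
    return nn.Sequential(*layers)

# An ensemble of networks, each with its own random activation choices.
ensemble = [make_random_activation_cnn(seed=s) for s in range(5)]
```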
arXiv Detail & Related papers (2020-11-24T01:53:39Z)
- An Investigation of Potential Function Designs for Neural CRF [75.79555356970344]
In this paper, we investigate a series of increasingly expressive potential functions for neural CRF models.
Our experiments show that the decomposed quadrilinear potential function based on the vector representations of two neighboring labels and two neighboring words consistently achieves the best performance.
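As an illustration of what a decomposed quadrilinear potential can look like, the score below is a low-rank sum of element-wise products of projections of the two neighboring word vectors and two neighboring label embeddings; the rank, dimensions, and shared label embedding are assumptions, not that paper's configuration.

```python
import torch
import torch.nn as nn

class DecomposedQuadrilinearPotential(nn.Module):
    """Sketch of a decomposed quadrilinear potential over (word_i, word_{i+1},
    label_i, label_{i+1}): a rank-R factorization of the 4-way interaction."""

    def __init__(self, word_dim: int = 128, num_labels: int = 10, rank: int = 32):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, rank)
        self.proj_w1 = nn.Linear(word_dim, rank, bias=False)
        self.proj_w2 = nn.Linear(word_dim, rank, bias=False)

    def forward(self, w1, w2, y1, y2):
        # Element-wise product across the rank dimension, then sum over ranks.
        return (self.proj_w1(w1) * self.proj_w2(w2)
                * self.label_emb(y1) * self.label_emb(y2)).sum(dim=-1)

potential = DecomposedQuadrilinearPotential()
score = potential(torch.randn(4, 128), torch.randn(4, 128),
                  torch.randint(0, 10, (4,)), torch.randint(0, 10, (4,)))
print(score.shape)  # torch.Size([4])
```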
arXiv Detail & Related papers (2020-11-11T07:32:18Z)