Comparisons among different stochastic selection of activation layers
for convolutional neural networks for healthcare
- URL: http://arxiv.org/abs/2011.11834v1
- Date: Tue, 24 Nov 2020 01:53:39 GMT
- Title: Comparisons among different stochastic selection of activation layers
for convolutional neural networks for healthcare
- Authors: Loris Nanni, Alessandra Lumini, Stefano Ghidoni and Gianluca Maguolo
- Abstract summary: We classify biomedical images using ensembles of neural networks.
We select our activations from the following: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
- Score: 77.99636165307996
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classification of biological images is an important task with crucial
application in many fields, such as cell phenotype recognition, detection of
cell organelles and histopathological classification, and it might help in
early medical diagnosis, allowing automatic disease classification without the
need for a human expert. In this paper, we classify biomedical images using
ensembles of neural networks. We create this ensemble from a ResNet50
architecture by modifying its activation layers, substituting ReLUs with
other functions. We select our activations from the following: ReLU,
leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped
ReLU, Swish, Mish, Mexican Linear Unit, Gaussian Linear Unit, Parametric
Deformable Linear Unit, Soft Root Sign (SRS), and others.
As a baseline, we used an ensemble of neural networks that only use ReLU
activations. We tested our networks on several small and medium-sized
biomedical image datasets. Our results show that our best ensemble obtains
better performance than the naive approaches. In order to encourage
the reproducibility of this work, the MATLAB code of all the experiments will
be shared at https://github.com/LorisNanni.
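To make the construction concrete: the authors' released code is in MATLAB (repository above), so the following is only a minimal PyTorch sketch of the idea, assuming an activation pool restricted to functions shipped with torch.nn and fusion by averaging softmax scores; the paper's exact pool, fusion rule and training protocol may differ, and the names ACTIVATION_POOL, replace_relus, build_ensemble and ensemble_predict are illustrative, not taken from the paper.

```python
import random

import torch
import torch.nn as nn
from torchvision.models import resnet50

# Pool of candidate activations available as built-in torch.nn modules.
# The abstract also lists Adaptive Piecewise Linear Unit, S-Shaped ReLU,
# Mexican Linear Unit, Gaussian Linear Unit, Parametric Deformable Linear Unit
# and Soft Root Sign, which would require custom implementations.
ACTIVATION_POOL = [
    lambda: nn.ReLU(inplace=True),
    lambda: nn.LeakyReLU(0.01),
    lambda: nn.PReLU(),
    lambda: nn.ELU(),
    lambda: nn.SiLU(),  # Swish with beta = 1
    lambda: nn.Mish(),
]


def replace_relus(module: nn.Module, rng: random.Random) -> nn.Module:
    """Recursively swap every nn.ReLU in the network for a randomly drawn activation.

    Note: torchvision's ResNet blocks reuse a single nn.ReLU module, so this swap
    selects one activation per block (plus the stem), not per call site.
    """
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, rng.choice(ACTIVATION_POOL)())
        else:
            replace_relus(child, rng)
    return module


def build_ensemble(n_members: int = 5, num_classes: int = 2, seed: int = 0):
    """Create n_members ResNet50 models, each with stochastically selected activations."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(n_members):
        net = resnet50()                                      # randomly initialised backbone
        net.fc = nn.Linear(net.fc.in_features, num_classes)  # adapt the head to the dataset
        ensemble.append(replace_relus(net, rng))
    return ensemble


@torch.no_grad()
def ensemble_predict(ensemble, x: torch.Tensor) -> torch.Tensor:
    """Fuse the members by averaging their softmax scores (a simple sum rule)."""
    scores = [torch.softmax(model(x), dim=1) for model in ensemble]
    return torch.stack(scores).mean(dim=0)
```

Each member would then be trained independently on the target dataset before fusion; an analogous ensemble whose members all keep their original ReLU layers gives the baseline described in the abstract.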
Related papers
- CAF-YOLO: A Robust Framework for Multi-Scale Lesion Detection in Biomedical Imagery [0.0682074616451595]
CAF-YOLO is a nimble yet robust method for medical object detection that leverages the strengths of convolutional neural networks (CNNs) and transformers.
The ACFM module enhances the modeling of both global and local features, enabling the capture of long-term feature dependencies.
The MSNN improves multi-scale information aggregation by extracting features across diverse scales.
arXiv Detail & Related papers (2024-08-04T01:44:44Z)
- ReLUs Are Sufficient for Learning Implicit Neural Representations [17.786058035763254]
We revisit the use of ReLU activation functions for learning implicit neural representations.
Inspired by second-order B-spline wavelets, we incorporate a set of simple constraints into the ReLU neurons in each layer of a deep neural network (DNN).
We demonstrate that, contrary to popular belief, one can learn state-of-the-art INRs based on a DNN composed of only ReLU neurons.
arXiv Detail & Related papers (2024-06-04T17:51:08Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- A Comprehensive Survey and Performance Analysis of Activation Functions in Deep Learning [23.83339228535986]
Various types of neural networks have been introduced to deal with different types of problems.
The main goal of any neural network is to transform the non-linearly separable input data into more linearly separable abstract features.
The most popular and common non-linearity layers are activation functions (AFs), such as Logistic Sigmoid, Tanh, ReLU, ELU, Swish and Mish; standard definitions of these functions are recalled after this related-papers list.
arXiv Detail & Related papers (2021-09-29T16:41:19Z)
- Medulloblastoma Tumor Classification using Deep Transfer Learning with Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification approach and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z)
- Deep ensembles based on Stochastic Activation Selection for Polyp Segmentation [82.61182037130406]
This work deals with medical image segmentation and in particular with accurate polyp detection and segmentation during colonoscopy examinations.
The basic architecture in image segmentation consists of an encoder and a decoder.
We compare variants of the DeepLab architecture obtained by varying the decoder backbone.
arXiv Detail & Related papers (2021-04-02T02:07:37Z)
- Neural Cellular Automata Manifold [84.08170531451006]
We show that the neural network architecture of the Neural Cellular Automata can be encapsulated in a larger NN.
This allows us to propose a new model that encodes a manifold of NCA, each of them capable of generating a distinct image.
In biological terms, our approach would play the role of the transcription factors, modulating the mapping of genes into specific proteins that drive cellular differentiation.
arXiv Detail & Related papers (2020-06-22T11:41:57Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
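For reference, the activation functions named in the survey entry above (several of which also appear in the main paper's pool) have the following standard definitions as commonly given in the literature; alpha and beta denote the usual shape parameters:

```latex
\begin{aligned}
\text{Logistic Sigmoid:}\quad & \sigma(x) = \frac{1}{1 + e^{-x}} \\
\text{Tanh:}\quad             & \tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}} \\
\text{ReLU:}\quad             & f(x) = \max(0, x) \\
\text{Leaky ReLU:}\quad       & f(x) = \max(\alpha x,\, x), \quad 0 < \alpha \ll 1 \\
\text{ELU:}\quad              & f(x) = \begin{cases} x, & x > 0 \\ \alpha\,(e^{x} - 1), & x \le 0 \end{cases} \\
\text{Swish:}\quad            & f(x) = x\,\sigma(\beta x) \\
\text{Mish:}\quad             & f(x) = x\,\tanh\!\bigl(\ln(1 + e^{x})\bigr)
\end{aligned}
```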