PAON: A New Neuron Model using Padé Approximants
- URL: http://arxiv.org/abs/2403.11791v1
- Date: Mon, 18 Mar 2024 13:49:30 GMT
- Title: PAON: A New Neuron Model using Padé Approximants
- Authors: Onur Keleş, A. Murat Tekalp
- Abstract summary: Convolutional neural networks (CNNs) are built upon the classical McCulloch-Pitts neuron model.
We introduce a new neuron model called Padé neurons (Paons), inspired by Padé approximants.
Our experiments on the single-image super-resolution task show that PadeNets can obtain better results than competing architectures.
- Score: 6.337675203577426
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Convolutional neural networks (CNNs) are built upon the classical McCulloch-Pitts neuron model, which is essentially a linear model whose nonlinearity comes from a separate activation function. Several researchers have proposed enhanced neuron models, including quadratic neurons, generalized operational neurons, generative neurons, and super neurons, with stronger nonlinearity than that provided by the pointwise activation function. There has also been a proposal to use Padé approximation as a generalized activation function. In this paper, we introduce a new neuron model called Padé neurons (Paons), inspired by Padé approximants, which give the best approximation of a transcendental function as a ratio of polynomials of given orders. We show that Paons are a superset of all other proposed neuron models; hence, the basic neuron in any known CNN model can be replaced by Paons. To demonstrate the concept, we extend the well-known ResNet to PadeNet, built from Paons. Our experiments on the single-image super-resolution task show that PadeNets can obtain better results than competing architectures.
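For intuition, an order-$(m,n)$ Padé approximant has the rational form $R_{m,n}(z) = \big(\sum_{i=0}^{m} a_i z^i\big) \big/ \big(1 + \sum_{j=1}^{n} b_j z^j\big)$. Below is a minimal, hypothetical PyTorch sketch in this spirit: a convolution's linear response is passed through a learned ratio of polynomials, with the denominator stabilized as $1 + |Q_n(z)|$ (the form used by Padé activation units). The class name, default orders, and parameterization are our illustrative assumptions, not the paper's exact Paon formulation.

```python
# Hypothetical sketch of a Pade-style convolutional neuron; the exact Paon
# parameterization in the paper may differ from this illustration.
import torch
import torch.nn as nn

class PadeNeuron2d(nn.Module):
    """Conv layer whose response z is mapped through a learned rational
    function f(z) = P_m(z) / (1 + |Q_n(z)|), echoing Pade approximants."""

    def __init__(self, in_ch, out_ch, m=3, n=2, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)
        # Per-output-channel polynomial coefficients for P (degree m) and Q (degree n).
        self.p = nn.Parameter(0.1 * torch.randn(out_ch, m + 1))
        self.q = nn.Parameter(0.1 * torch.randn(out_ch, n))

    def forward(self, x):
        z = self.conv(x)                       # linear (McCulloch-Pitts) response
        powers = [torch.ones_like(z), z]       # z^0, z^1
        for _ in range(max(self.p.shape[1], self.q.shape[1] + 1) - 2):
            powers.append(powers[-1] * z)      # higher powers of z as needed
        num = sum(self.p[:, k].view(1, -1, 1, 1) * powers[k]
                  for k in range(self.p.shape[1]))
        den = 1.0 + torch.abs(sum(self.q[:, k].view(1, -1, 1, 1) * powers[k + 1]
                                  for k in range(self.q.shape[1])))  # keep den > 0
        return num / den

x = torch.randn(1, 3, 8, 8)
print(PadeNeuron2d(3, 16)(x).shape)  # torch.Size([1, 16, 8, 8])
```

With $m = 1$, $n = 0$ this reduces to an ordinary convolution plus bias, which is one way to see the claim that Paons subsume the classical neuron.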
Related papers
- No One-Size-Fits-All Neurons: Task-based Neurons for Artificial Neural Networks [25.30801109401654]
Since the human brain employs task-based neurons, can artificial network design move from task-based architecture design to task-based neuron design?
We propose a two-step framework for prototyping task-based neurons.
Experiments show that the proposed task-based neuron design is not only feasible but also delivers competitive performance over other state-of-the-art models.
arXiv Detail & Related papers (2024-05-03T09:12:46Z)
- QuasiNet: a neural network with trainable product layers [0.0]
We propose a new neural network model inspired by existing models with so-called product neurons, together with a learning rule derived from classical error backpropagation.
Our results indicate that our model is clearly more successful than the classical model and has the potential to be used in many tasks and applications.
arXiv Detail & Related papers (2023-11-21T18:56:15Z)
- Efficient Vectorized Backpropagation Algorithms for Training Feedforward Networks Composed of Quadratic Neurons [1.6574413179773761]
This paper presents a solution to the XOR problem with a single quadratic neuron (a worked sketch follows this entry).
It shows that any dataset composed of $\mathcal{C}$ bounded clusters can be separated with only a single layer of $\mathcal{C}$ quadratic neurons.
arXiv Detail & Related papers (2023-10-04T15:39:57Z)
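As a quick illustration of that claim, here is a hypothetical construction (ours, not the paper's): the quadratic neuron $q(x_1, x_2) = -(x_1 + x_2 - 1)^2 + 0.5$ is positive exactly on the mixed inputs, so a single sign threshold realizes XOR.

```python
# Hypothetical example: one quadratic neuron computes XOR; this construction
# is for illustration and is not taken from the paper.
def quadratic_neuron(x1: float, x2: float) -> int:
    q = -(x1 + x2 - 1.0) ** 2 + 0.5   # quadratic form of the two inputs
    return 1 if q > 0 else 0          # McCulloch-Pitts-style threshold

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert quadratic_neuron(a, b) == (a ^ b)  # agrees with XOR on all inputs
print("XOR solved by a single quadratic neuron")
```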
- Neural network with optimal neuron activation functions based on additive Gaussian process regression [0.0]
More flexible neuron activation functions would allow using fewer neurons and layers and improve expressive power.
We show that additive Gaussian process regression (GPR) can be used to construct optimal neuron activation functions that are individual to each neuron.
An approach is also introduced that avoids non-linear fitting of neural network parameters.
arXiv Detail & Related papers (2023-01-13T14:19:17Z)
- Parametrized constant-depth quantum neuron [56.51261027148046]
We propose a framework that builds quantum neurons based on kernel machines.
We present here a neuron that applies a tensor-product feature mapping to an exponentially larger space.
It turns out that parametrization allows the proposed neuron to optimally fit underlying patterns that the existing neuron cannot fit.
arXiv Detail & Related papers (2022-02-25T04:57:41Z)
- Event-based Video Reconstruction via Potential-assisted Spiking Neural Network [48.88510552931186]
Bio-inspired neural networks can potentially lead to greater computational efficiency on event-driven hardware.
We propose a novel event-based video reconstruction framework based on a fully spiking neural network (EVSNN).
We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
arXiv Detail & Related papers (2022-01-25T02:05:20Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions match reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.