QuasiNet: a neural network with trainable product layers
- URL: http://arxiv.org/abs/2401.06137v2
- Date: Mon, 26 Feb 2024 13:10:22 GMT
- Title: QuasiNet: a neural network with trainable product layers
- Authors: Kristína Malinovská, Slavomír Holenda and Ľudovít Malinovský
- Abstract summary: We propose a new neural network model inspired by existing neural network models with so-called product neurons, together with a learning rule derived from classical error backpropagation.
Our results indicate that our model is clearly more successful than the classical MLP and has the potential to be used in many tasks and applications.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classical neural networks achieve only limited convergence in hard problems
such as XOR or parity when the number of hidden neurons is small. With the
motivation to improve the success rate of neural networks in these problems, we
propose a new neural network model inspired by existing neural network models
with so-called product neurons and a learning rule derived from classical error
backpropagation, which elegantly solves the problem of mutually exclusive
situations. Unlike existing product neurons, whose weights are preset and not
adaptable, our product layers of neurons also learn. We tested the
model and compared its success rate to a classical multilayer perceptron in the
aforementioned problems as well as in other hard problems such as the two
spirals. Our results indicate that our model is clearly more successful than
the classical MLP and has the potential to be used in many tasks and
applications.
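To make the idea concrete, below is a minimal, hedged sketch of what a trainable product layer can look like in PyTorch: each output unit multiplies per-input terms with a learnable weight, and the small network is trained on XOR. The specific term form 1 - w * (1 - x), the layer sizes, and the use of autograd (rather than the paper's hand-derived learning rule) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ProductLayer(nn.Module):
    """Sketch of a trainable product layer (illustrative, not the paper's exact rule).

    Each output unit multiplies per-input terms 1 - w_ji * (1 - x_i):
    for inputs in [0, 1] this acts like a soft conjunction whose
    per-input influence is set by the learned weight w_ji.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features), values assumed to lie in [0, 1]
        # terms: (batch, out_features, in_features)
        terms = 1.0 - self.weight.unsqueeze(0) * (1.0 - x.unsqueeze(1))
        return terms.prod(dim=-1)  # (batch, out_features)

# Toy XOR task, the kind of hard problem the paper targets.
x = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

model = nn.Sequential(nn.Linear(2, 2), nn.Sigmoid(), ProductLayer(2, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)

for step in range(2000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()

print(model(x).detach())  # should approach [0, 1, 1, 0] when training converges
```

A two-hidden-unit network like this, with a standard summation output neuron instead of the product layer, is exactly the small-hidden-layer XOR setting in which the abstract notes that classical networks achieve only limited convergence.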
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order (a small numerical check of this symmetry is sketched after this list).
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Consistency of Neural Networks with Regularization [0.0]
This paper proposes a general framework of neural networks with regularization and proves its consistency.
Two types of activation functions are considered: the hyperbolic tangent (Tanh) and the rectified linear unit (ReLU).
arXiv Detail & Related papers (2022-06-22T23:33:39Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks [0.9790524827475205]
We show how a novel type of adaptive spiking recurrent neural network (SRNN) is able to achieve state-of-the-art performance.
We calculate a >100x energy improvement for our SRNNs over classical RNNs on the harder tasks.
arXiv Detail & Related papers (2020-05-24T01:04:53Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
- Investigation and Analysis of Hyper and Hypo neuron pruning to selectively update neurons during Unsupervised Adaptation [8.845660219190298]
Pruning approaches look for low-salient neurons that contribute less to a model's decision.
This work investigates whether pruning approaches can detect neurons that are either high-salient (mostly active, or hyper) or low-salient (barely active, or hypo).
It shows that it may be possible to selectively adapt these hyper and hypo neurons first, followed by full-network fine-tuning.
arXiv Detail & Related papers (2020-01-06T19:46:57Z)
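The Permutation Equivariant Neural Functionals entry above refers to a weight-space symmetry of feedforward networks; the short NumPy check below (referenced in that entry) illustrates it on an arbitrary toy MLP: permuting the hidden neurons, together with the rows of the first weight matrix, the first bias, and the columns of the second weight matrix, leaves the computed function unchanged. The network and its sizes are illustrative assumptions, not taken from that paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer MLP: y = W2 @ tanh(W1 @ x + b1) + b2
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Permute the 4 hidden neurons with a random permutation matrix P.
perm = rng.permutation(4)
P = np.eye(4)[perm]

# Apply the same permutation to the adjacent parameters:
# rows of W1 and entries of b1, and columns of W2.
W1p, b1p, W2p = P @ W1, P @ b1, W2 @ P.T

x = rng.standard_normal(3)
print(np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2)))  # True
```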