A Novel ANN Structure for Image Recognition
- URL: http://arxiv.org/abs/2010.04586v1
- Date: Fri, 9 Oct 2020 14:07:29 GMT
- Title: A Novel ANN Structure for Image Recognition
- Authors: Shilpa Mayannavar, Uday Wali, and V M Aparanji
- Abstract summary: The paper presents Multi-layer Auto Resonance Networks (ARN), a new neural model, for image recognition.
Neurons in ARN, called Nodes, latch on to an incoming pattern and resonate when the input is within its 'coverage'.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper presents Multi-layer Auto Resonance Networks (ARN), a new neural
model, for image recognition. Neurons in ARN, called Nodes, latch on to an
incoming pattern and resonate when the input is within its 'coverage.'
Resonance allows a neuron to be noise tolerant and tunable. The coverage of a
node gives it the ability to approximate the incoming pattern, while its
latching characteristic allows it to respond to episodic events without
disturbing the existing trained network. These networks can address problems
in varied fields but have not been sufficiently explored. This paper discusses
the implementation of an image classification and identification system using
a two-layer ARN. A recognition accuracy of 94% has been achieved on the MNIST
dataset with only two layers of neurons and just 50 samples per numeral,
making the network useful for computing at the edge of cloud infrastructure.
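The abstract describes the node behaviour only in words. As a rough illustration of the latch-and-resonate idea, the sketch below implements a hypothetical ARN-style node and layer in Python; the Gaussian resonance function, the radius-style coverage, and the allocate-a-node-on-novelty rule are simplifying assumptions made for illustration, not the equations from the paper.

```python
import numpy as np

class ARNNode:
    """Illustrative ARN-style node (hypothetical simplification):
    latches onto a pattern, then resonates (responds strongly)
    only for inputs within its coverage."""

    def __init__(self, pattern, label, coverage=4.0):
        self.center = np.asarray(pattern, dtype=float)  # latched pattern
        self.label = label
        self.coverage = coverage  # tunable tolerance radius

    def resonate(self, x):
        # Smooth response inside the coverage region gives noise
        # tolerance: a slightly corrupted pattern still resonates.
        d = np.linalg.norm(np.asarray(x, dtype=float) - self.center)
        return np.exp(-(d / self.coverage) ** 2) if d <= self.coverage else 0.0

class ARNLayer:
    """A layer of nodes; a novel (episodic) input latches a new node,
    leaving already-trained nodes untouched."""

    def __init__(self, coverage=4.0):
        self.nodes = []
        self.coverage = coverage

    def train(self, patterns, labels):
        for p, y in zip(patterns, labels):
            # Allocate a new node only if nothing resonates with p.
            if not any(n.resonate(p) > 0 for n in self.nodes):
                self.nodes.append(ARNNode(p, y, self.coverage))

    def classify(self, x):
        if not self.nodes:
            return None
        score, label = max(((n.resonate(x), n.label) for n in self.nodes),
                           key=lambda t: t[0])
        return label if score > 0 else None
```

Stacking two such layers, with the second layer resonating with the output pattern of the first, would mirror the two-layer arrangement the abstract reports for MNIST, though the paper's actual node dynamics are not given here.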
Related papers
- When Spiking neural networks meet temporal attention image decoding and adaptive spiking neuron [7.478056407323783]
Spiking Neural Networks (SNNs) are capable of encoding and processing temporal information in a biologically plausible way.
We propose a novel method for image decoding based on temporal attention (TAID) and an adaptive Leaky-Integrate-and-Fire neuron model.
arXiv Detail & Related papers (2024-06-05T08:21:55Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Benign Overfitting for Two-layer ReLU Convolutional Neural Networks [60.19739010031304]
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z) - NeRN -- Learning Neural Representations for Neural Networks [3.7384109981836153]
We show that, when adapted correctly, neural representations can be used to represent the weights of a pre-trained convolutional neural network.
Inspired by coordinate inputs of previous neural representation methods, we assign a coordinate to each convolutional kernel in our network.
We present two applications using NeRN, demonstrating the capabilities of the learned representations.
arXiv Detail & Related papers (2022-12-27T17:14:44Z) - Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations via processing on their discretized instance.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
arXiv Detail & Related papers (2022-10-17T06:29:07Z) - Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based subnetworks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes two orders of magnitude faster while using up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Information contraction in noisy binary neural networks and its implications [11.742803725197506]
We consider noisy binary neural networks, where each neuron has a non-zero probability of producing an incorrect output.
Our key finding is a lower bound for the required number of neurons in noisy neural networks, which is the first of its kind.
This paper offers new understanding of noisy information processing systems through the lens of information theory.
arXiv Detail & Related papers (2021-01-28T00:01:45Z) - Visual Pattern Recognition with on On-chip Learning: towards a Fully Neuromorphic Approach [10.181725314550823]
We present a spiking neural network (SNN) for visual pattern recognition with on-chip learning on neuromorphic hardware.
We show how this network can learn simple visual patterns composed of horizontal and vertical bars sensed by a Dynamic Vision Sensor.
During recognition, the network classifies the pattern's identity while at the same time estimating its location and scale.
arXiv Detail & Related papers (2020-08-08T08:07:36Z) - On Tractable Representations of Binary Neural Networks [23.50970665150779]
We consider the compilation of a binary neural network's decision function into tractable representations such as Ordered Binary Decision Diagrams (OBDDs) and Sentential Decision Diagrams (SDDs).
In experiments, we show that it is feasible to obtain compact representations of neural networks as SDDs.
arXiv Detail & Related papers (2020-04-05T03:21:26Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy (a minimal single-neuron sketch follows this list).
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
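The XOR claim in the last entry above is easy to demonstrate: a single linear unit followed by a non-monotonic activation can separate XOR, which no monotonic activation on one unit can do. The bump function below is an illustrative stand-in, not necessarily the exact ADA formula from that paper.

```python
import numpy as np

def bump(z, width=0.25):
    # Non-monotonic "bump" activation: high near z == 0, low elsewhere.
    # An illustrative stand-in for ADA, not the paper's exact formula.
    return np.exp(-(z / width) ** 2)

# One linear unit w.x + b followed by the bump solves XOR:
# z = x1 + x2 - 1 is zero exactly when one input is on.
w, b = np.array([1.0, 1.0]), -1.0
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    y = bump(w @ np.asarray(x, dtype=float) + b)
    print(x, "->", int(y > 0.5))  # prints 0, 1, 1, 0
```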
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.