Deep Learning in Random Neural Fields: Numerical Experiments via Neural
Tangent Kernel
- URL: http://arxiv.org/abs/2202.05254v1
- Date: Thu, 10 Feb 2022 18:57:10 GMT
- Title: Deep Learning in Random Neural Fields: Numerical Experiments via Neural
Tangent Kernel
- Authors: Kaito Watanabe, Kotaro Sakamoto, Ryo Karakida, Sho Sonoda, Shun-ichi
Amari
- Abstract summary: A biological neural network in the cortex forms a neural field.
Neurons in the field have their own receptive fields, and connection weights between two neurons are random but highly correlated when they are in close proximity in receptive fields.
We show that such a multilayer neural field is more robust than conventional models when input patterns are deformed by noise disturbances.
- Score: 10.578941575914516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A biological neural network in the cortex forms a neural field. Neurons in
the field have their own receptive fields, and connection weights between two
neurons are random but highly correlated when they are in close proximity in
receptive fields. In this paper, we study such neural fields in a
multilayer architecture to investigate supervised learning of the fields.
We empirically compare the performances of our field model with those of
randomly connected deep networks. The behavior of a randomly connected network
is investigated on the basis of the key idea of the neural tangent kernel
regime, a recent development in the machine learning theory of
over-parameterized networks; for most randomly connected neural networks,
global minima are shown to exist in a small neighborhood of the initial parameters. We
numerically show that this claim also holds for our neural fields. In more
detail, our model has two structures: i) each neuron in a field has a
continuously distributed receptive field, and ii) the initial connection
weights are random but not independent, having correlations when the positions
of neurons are close in each layer. We show that such a multilayer neural field
is more robust than conventional models when input patterns are deformed by
noise disturbances. Moreover, its generalization ability can be slightly
superior to that of conventional models.
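To make the two structural ingredients concrete, the following is a minimal sketch (illustrative assumptions, not the authors' implementation) of a one-dimensional neural field discretized on a grid: each neuron carries a position (its receptive-field centre), the initial weights are Gaussian but correlated when the connected neurons' positions are close, and an empirical neural tangent kernel is computed by automatic differentiation. The squared-exponential correlation kernel, the grid sizes, and the two-layer readout are choices made here for illustration, written in JAX.
```python
# Minimal sketch (illustrative assumptions, not the paper's code): a 1-D neural
# field on a grid with position-correlated Gaussian initial weights, plus an
# empirical neural tangent kernel between two input patterns.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def se_kernel(pos, length_scale, jitter=1e-4):
    """Squared-exponential correlation matrix over neuron positions."""
    d = pos[:, None] - pos[None, :]
    return jnp.exp(-0.5 * (d / length_scale) ** 2) + jitter * jnp.eye(pos.shape[0])

def correlated_field_weights(key, pos_out, pos_in, length_scale=0.2):
    """Random weights W = L_out Z L_in^T: marginally Gaussian, but entries are
    correlated whenever the corresponding neuron positions are close."""
    L_out = jnp.linalg.cholesky(se_kernel(pos_out, length_scale))
    L_in = jnp.linalg.cholesky(se_kernel(pos_in, length_scale))
    z = jax.random.normal(key, (pos_out.shape[0], pos_in.shape[0]))
    return (L_out @ z @ L_in.T) / jnp.sqrt(pos_in.shape[0])

def field_net(params, x):
    """Two-layer field model: input pattern -> hidden field -> scalar readout."""
    w1, w2 = params
    return jnp.tanh(w1 @ x) @ w2

def empirical_ntk(params, x1, x2):
    """Empirical NTK  Theta(x1, x2) = <df(x1)/dtheta, df(x2)/dtheta>."""
    grad_vec = lambda x: ravel_pytree(jax.grad(field_net)(params, x))[0]
    return grad_vec(x1) @ grad_vec(x2)

key = jax.random.PRNGKey(0)
n_in, n_hidden = 64, 128
pos_in = jnp.linspace(0.0, 1.0, n_in)          # receptive-field centres, input layer
pos_hidden = jnp.linspace(0.0, 1.0, n_hidden)  # receptive-field centres, hidden layer
k1, k2, k3 = jax.random.split(key, 3)
w1 = correlated_field_weights(k1, pos_hidden, pos_in)
w2 = jax.random.normal(k2, (n_hidden,)) / jnp.sqrt(n_hidden)
x_a, x_b = jax.random.normal(k3, (2, n_in))
print("Theta(x_a, x_b) =", empirical_ntk((w1, w2), x_a, x_b))
```
Replacing `correlated_field_weights` with an i.i.d. Gaussian draw recovers a conventional randomly connected layer, which is the comparison the paper's experiments are built around.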
Related papers
- Novel Kernel Models and Exact Representor Theory for Neural Networks Beyond the Over-Parameterized Regime [52.00917519626559]
This paper presents two models of neural networks and their training, applicable to neural networks of arbitrary width, depth and topology.
We also present an exact, novel representor theory for layer-wise neural network training with unregularized gradient descent in terms of a local-extrinsic neural kernel (LeNK).
This representor theory gives insight into the role of higher-order statistics in neural network training and the effect of kernel evolution in neural-network kernel models.
arXiv Detail & Related papers (2024-05-24T06:30:36Z) - A Sparse Quantized Hopfield Network for Online-Continual Memory [0.0]
Nervous systems learn online, where a stream of noisy data points is presented in a non-independent, identically distributed (non-i.i.d.) way.
Deep networks, on the other hand, typically use non-local learning algorithms and are trained in an offline, non-noisy, i.i.d. setting.
We implement this kind of model in a novel neural network called the Sparse Quantized Hopfield Network (SQHN).
arXiv Detail & Related papers (2023-07-27T17:46:17Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Learning to Act through Evolution of Neural Diversity in Random Neural
Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z) - Why Quantization Improves Generalization: NTK of Binary Weight Neural
Networks [33.08636537654596]
We take the binary weights in a neural network as random variables under rounding, and study the distribution propagation over different layers in the neural network.
We propose a quasi neural network, a network with continuous parameters and a smooth activation function, to approximate this distribution propagation.
arXiv Detail & Related papers (2022-06-13T06:11:21Z) - POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with
Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in a 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z) - Super Neurons [18.710336981941147]
Self-Organized Operational Neural Networks (Self-ONNs) have been proposed as new-generation neural network models with nonlinear learning units.
Self-ONNs have a common drawback: localized (fixed) kernel operations.
This article presents superior (generative) neuron models that allow random or learnable kernel shifts.
arXiv Detail & Related papers (2021-08-03T16:17:45Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can make two well-separated classes linearly separable with high probability (see the illustrative sketch after this list).
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Deep Randomized Neural Networks [12.333836441649343]
Randomized Neural Networks explore the behavior of neural systems where the majority of connections are fixed.
This chapter surveys all the major aspects regarding the design and analysis of Randomized Neural Networks.
arXiv Detail & Related papers (2020-02-27T17:57:58Z) - Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
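As a concrete illustration of the construction summarized above for "The Separation Capacity of Random Neural Networks", the sketch below (made-up data and a simple probe, not the paper's experiments) maps two classes that are not linearly separable in input space through one random ReLU layer with standard Gaussian weights and uniformly distributed biases, then checks that a linear classifier separates them in the random feature space.
```python
# Illustrative sketch (made-up data, not the paper's setup): one random ReLU layer
# with standard Gaussian weights and uniform biases applied to two concentric
# rings, followed by a least-squares linear probe in the random feature space.
import jax
import jax.numpy as jnp

def random_relu_features(key, x, width=2000, bias_range=2.0):
    """Hidden layer with standard Gaussian weights and uniformly distributed biases."""
    kw, kb = jax.random.split(key)
    w = jax.random.normal(kw, (x.shape[1], width))
    b = jax.random.uniform(kb, (width,), minval=-bias_range, maxval=bias_range)
    return jnp.maximum(x @ w + b, 0.0)

def linear_probe_accuracy(features, labels, ridge=1e-3):
    """Least-squares linear probe (dual form). Accuracy 1.0 means this
    hyperplane linearly separates the mapped points."""
    gram = features @ features.T + ridge * jnp.eye(features.shape[0])
    scores = features @ (features.T @ jnp.linalg.solve(gram, labels))
    return float((jnp.sign(scores) == labels).mean())

key = jax.random.PRNGKey(1)
k_data, k_net = jax.random.split(key)
# Two concentric rings in 2-D: not linearly separable in the input space.
n = 200
theta = jax.random.uniform(k_data, (n,), maxval=2.0 * jnp.pi)
radius = jnp.where(jnp.arange(n) < n // 2, 1.0, 3.0)
x = jnp.stack([radius * jnp.cos(theta), radius * jnp.sin(theta)], axis=1)
y = jnp.where(jnp.arange(n) < n // 2, -1.0, 1.0)
acc_input = linear_probe_accuracy(x, y)
acc_features = linear_probe_accuracy(random_relu_features(k_net, x), y)
print(f"linear probe accuracy: input space {acc_input:.2f}, random ReLU features {acc_features:.2f}")
```
In this over-parameterized regime (feature width much larger than the number of samples) the linear probe typically reaches perfect accuracy on the mapped data, which is the separability phenomenon that paper quantifies via mutual complexity.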