Biologically plausible single-layer networks for nonnegative independent component analysis
- URL: http://arxiv.org/abs/2010.12632v2
- Date: Fri, 4 Mar 2022 20:14:57 GMT
- Title: Biologically plausible single-layer networks for nonnegative independent component analysis
- Authors: David Lipshutz, Cengiz Pehlevan, Dmitri B. Chklovskii
- Abstract summary: We seek a biologically plausible single-layer neural network implementation of a blind source separation algorithm.
For biological plausibility, we require the network to satisfy the following three basic properties of neuronal circuits.
- Score: 21.646490546361935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An important problem in neuroscience is to understand how brains extract
relevant signals from mixtures of unknown sources, i.e., perform blind source
separation. To model how the brain performs this task, we seek a biologically
plausible single-layer neural network implementation of a blind source
separation algorithm. For biological plausibility, we require the network to
satisfy the following three basic properties of neuronal circuits: (i) the
network operates in the online setting; (ii) synaptic learning rules are local;
(iii) neuronal outputs are nonnegative. The closest prior work is that of Pehlevan et al.
[Neural Computation, 29, 2925--2954 (2017)], which considers Nonnegative
Independent Component Analysis (NICA), a special case of blind source
separation that assumes the mixture is a linear combination of uncorrelated,
nonnegative sources. They derive an algorithm with a biologically plausible
two-layer network implementation. In this work, we improve upon their result by deriving two algorithms for NICA, each with a biologically plausible single-layer
network implementation. The first algorithm maps onto a network with indirect
lateral connections mediated by interneurons. The second algorithm maps onto a
network with direct lateral connections and multi-compartmental output neurons.
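To make the NICA setting concrete, here is a minimal numerical sketch. It is not the paper's derivation: the Hebbian/anti-Hebbian update rules, learning rate, and iteration counts below are generic illustrative choices. Only the generative model (uncorrelated, nonnegative sources mixed linearly) and the three plausibility constraints (online operation, local learning rules, nonnegative outputs) follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# NICA generative model: nonnegative, uncorrelated sources s_t; observations x_t = A s_t.
n_src, n_obs, T = 3, 3, 10000
S = rng.exponential(1.0, (T, n_src)) * (rng.random((T, n_src)) < 0.3)  # sparse, nonnegative
A = rng.uniform(0.5, 1.5, (n_obs, n_src))                              # unknown mixing matrix
X = S @ A.T

def respond(x, W, M, n_iter=30):
    """Recurrent dynamics with rectified (nonnegative) outputs: y = ReLU(W x - M y)."""
    y = np.zeros(W.shape[0])
    for _ in range(n_iter):
        y = np.maximum(0.0, W @ x - M @ y)
    return y

# Single layer: feedforward weights W (Hebbian) and lateral weights M (anti-Hebbian).
W = rng.uniform(0.1, 0.5, (n_src, n_obs))
M = np.zeros((n_src, n_src))
eta = 1e-3

for x in X:  # online setting: one mixture at a time, no stored batches
    y = respond(x, W, M)
    # Local learning rules: each update uses only the pre- and postsynaptic activity.
    W += eta * (np.outer(y, x) - (y**2)[:, None] * W)  # Hebbian with Oja-style decay
    M += eta * (np.outer(y, y) - M)                    # anti-Hebbian decorrelation
    np.fill_diagonal(M, 0.0)                           # no self-connections

Y = np.array([respond(x, W, M) for x in X[-1000:]])
print(np.corrcoef(Y.T).round(2))  # roughly decorrelated outputs if learning succeeded
```

Loosely, M plays the role of the direct lateral connections in the paper's second algorithm; in the interneuron-mediated variant, lateral interactions would instead be factored through a separate interneuron population.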
Related papers
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Neural Bayesian Network Understudy [13.28673601999793]
We show that a neural network can be trained to output conditional probabilities, providing approximately the same functionality as a Bayesian Network.
We propose two training strategies that allow encoding the independence relations inferred from a given causal structure into the neural network.
arXiv Detail & Related papers (2022-11-15T15:56:51Z) - Collaboration between parallel connected neural networks -- A possible
- Collaboration between parallel connected neural networks -- A possible criterion for distinguishing artificial neural networks from natural organs [0.0]
We show that when artificial neural networks are connected in parallel and trained together, they display several characteristic properties.
These properties are unlikely to hold for natural biological sense organs.
We further show that, when used as the activation function, ReLU makes an artificial neural network more bionic than the sigmoid and Tanh functions do.
arXiv Detail & Related papers (2022-08-21T23:18:28Z) - Functional2Structural: Cross-Modality Brain Networks Representation
Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with
Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor, implemented in 180nm process technology, with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and a range of low-power, low-latency inference applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z) - A Normative and Biologically Plausible Algorithm for Independent
- A Normative and Biologically Plausible Algorithm for Independent Component Analysis [15.082715993594121]
In signal processing, linear blind source separation problems are often solved by Independent Component Analysis (ICA).
To serve as a model of a biological circuit, the ICA neural network (NN) must satisfy at least the following requirements.
We propose a novel objective function for ICA from which we derive a biologically plausible NN, including both the neural architecture and the synaptic learning rules.
arXiv Detail & Related papers (2021-11-17T01:43:42Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can separate two classes of data with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - A biologically plausible neural network for local supervision in
- A biologically plausible neural network for local supervision in cortical microcircuits [17.00937011213428]
We derive an algorithm for training a neural network that avoids explicit error computation and backpropagation.
Our algorithm maps onto a neural network that bears a remarkable resemblance to the connectivity structure and learning rules of the cortex.
arXiv Detail & Related papers (2020-11-30T17:35:22Z) - A biologically plausible neural network for multi-channel Canonical
Correlation Analysis [12.940770779756482]
Cortical pyramidal neurons receive inputs from multiple neural populations and integrate these inputs in separate dendritic compartments.
We seek a multi-channel CCA algorithm that can be implemented in a biologically plausible neural network.
For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local.
arXiv Detail & Related papers (2020-10-01T16:17:53Z) - Exploiting Heterogeneity in Operational Neural Networks by Synaptic
- Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity [87.32169414230822]
The recently proposed Operational Neural Networks (ONNs) generalize conventional Convolutional Neural Networks (CNNs).
This study focuses on searching for the best-possible operator set(s) for the hidden neurons of the network, based on the Synaptic Plasticity paradigm that underlies learning in biological neurons.
Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, can achieve superior learning performance compared to GIS-based ONNs.
arXiv Detail & Related papers (2020-08-21T19:03:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.