A Normative and Biologically Plausible Algorithm for Independent
Component Analysis
- URL: http://arxiv.org/abs/2111.08858v1
- Date: Wed, 17 Nov 2021 01:43:42 GMT
- Title: A Normative and Biologically Plausible Algorithm for Independent
Component Analysis
- Authors: Yanis Bahroun, Dmitri B Chklovskii, Anirvan M Sengupta
- Abstract summary: In signal processing, linear blind source separation problems are often solved by Independent Component Analysis (ICA).
To serve as a model of a biological circuit, the ICA neural network (NN) must satisfy at least the following requirements.
We propose a novel objective function for ICA from which we derive a biologically plausible NN, including both the neural architecture and the synaptic learning rules.
- Score: 15.082715993594121
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The brain effortlessly solves blind source separation (BSS) problems, but the
algorithm it uses remains elusive. In signal processing, linear BSS problems
are often solved by Independent Component Analysis (ICA). To serve as a model
of a biological circuit, the ICA neural network (NN) must satisfy at least the
following requirements: 1. The algorithm must operate in the online setting
where data samples are streamed one at a time, and the NN computes the sources
on the fly without storing any significant fraction of the data in memory. 2.
The synaptic weight update is local, i.e., it depends only on the biophysical
variables present in the vicinity of a synapse. Here, we propose a novel
objective function for ICA from which we derive a biologically plausible NN,
including both the neural architecture and the synaptic learning rules.
Interestingly, our algorithm relies on modulating synaptic plasticity by the
total activity of the output neurons. In the brain, this could be accomplished
by neuromodulators, extracellular calcium, local field potential, or nitric
oxide.
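The two requirements above (online operation and local synaptic updates modulated by a global activity signal) can be illustrated with a minimal sketch. This is NOT the paper's derived network: it uses a generic Amari-style natural-gradient ICA step, with a hypothetical scalar modulator standing in for the total-output-activity signal the abstract describes; the sources, mixing matrix, and learning rate are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): two independent super-Gaussian sources
# mixed by an unknown linear transform.
n_sources, n_samples = 2, 5000
S = rng.laplace(size=(n_samples, n_sources))   # independent sources
A = rng.normal(size=(n_sources, n_sources))    # unknown mixing matrix
X = S @ A.T                                    # observed mixtures

W = np.eye(n_sources)   # synaptic weights (unmixing estimate)
eta = 5e-3              # learning rate

for x in X:             # online: one sample at a time, nothing stored
    y = W @ x           # output neuron activities
    g = np.tanh(y)      # pointwise nonlinearity (score function)
    # Scalar "neuromodulatory" signal derived from total output activity;
    # this particular form is a hypothetical stand-in, not the paper's rule.
    m = 1.0 / (1.0 + np.sum(y ** 2))
    # Local anti-Hebbian-style update scaled by the global modulator
    # (generic natural-gradient ICA step, Amari-style).
    W += eta * m * (np.eye(n_sources) - np.outer(g, y)) @ W
```

Each weight update uses only the current sample, the local pre- and post-synaptic activities, and one scalar broadcast to all synapses, which is the kind of signal the abstract suggests could be carried by neuromodulators, extracellular calcium, local field potential, or nitric oxide.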
Related papers
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match cortical neuron input-output relationships with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchy populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Neuromorphic Algorithm-hardware Codesign for Temporal Pattern Learning [11.781094547718595]
We derive an efficient training algorithm for Leaky Integrate and Fire neurons, which is capable of training an SNN to learn complex spatiotemporal patterns.
We have developed a CMOS circuit implementation for a memristor-based network of neurons and synapses which retains critical neural dynamics with reduced complexity.
arXiv Detail & Related papers (2021-04-21T18:23:31Z)
- A simple normative network approximates local non-Hebbian learning in the cortex [12.940770779756482]
Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals.
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
Online algorithms can be implemented by neural networks whose synaptic learning rules resemble calcium plateau potential dependent plasticity observed in the cortex.
arXiv Detail & Related papers (2020-10-23T20:49:44Z)
- Biologically plausible single-layer networks for nonnegative independent component analysis [21.646490546361935]
We seek a biologically plausible single-layer neural network implementation of a blind source separation algorithm.
For biological plausibility, we require the network to satisfy the following three basic properties of neuronal circuits.
arXiv Detail & Related papers (2020-10-23T19:31:49Z)
- A biologically plausible neural network for multi-channel Canonical Correlation Analysis [12.940770779756482]
Cortical pyramidal neurons receive inputs from multiple neural populations and integrate these inputs in separate dendritic compartments.
We seek a multi-channel CCA algorithm that can be implemented in a biologically plausible neural network.
For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local.
arXiv Detail & Related papers (2020-10-01T16:17:53Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the McCulloch-Pitts (MP) model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.