Correlative Information Maximization Based Biologically Plausible Neural
Networks for Correlated Source Separation
- URL: http://arxiv.org/abs/2210.04222v2
- Date: Sat, 8 Apr 2023 08:58:38 GMT
- Title: Correlative Information Maximization Based Biologically Plausible Neural
Networks for Correlated Source Separation
- Authors: Bariscan Bozkurt, Ates Isfendiyaroglu, Cengiz Pehlevan, Alper T.
Erdogan
- Abstract summary: We propose a biologically plausible neural network that extracts correlated latent sources by exploiting information about their domains.
Online formulation of this optimization problem naturally leads to neural networks with local learning rules.
Choices of simplex or polytopic source domains result in networks with piecewise-linear activation functions.
- Score: 17.740376367999705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The brain effortlessly extracts latent causes of stimuli, but how it does
this at the network level remains unknown. Most prior attempts at this problem
proposed neural networks that implement independent component analysis which
works under the limitation that latent causes are mutually independent. Here,
we relax this limitation and propose a biologically plausible neural network
that extracts correlated latent sources by exploiting information about their
domains. To derive this network, we choose maximum correlative information
transfer from inputs to outputs as the separation objective under the
constraint that the outputs are restricted to their presumed sets. The online
formulation of this optimization problem naturally leads to neural networks
with local learning rules. Our framework incorporates infinitely many source
domain choices and flexibly models complex latent structures. Choices of
simplex or polytopic source domains result in networks with piecewise-linear
activation functions. We provide numerical examples demonstrating superior
correlated source separation capability on both synthetic and natural sources.
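The abstract notes that choosing a simplex source domain yields a network with a piecewise-linear activation function: the output nonlinearity amounts to a Euclidean projection onto the presumed set. As an illustrative sketch (the standard sort-based simplex projection, not the paper's exact network dynamics):

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {y : y >= 0, sum(y) = 1}, via the classic sort-and-threshold method.
    The result is a piecewise-linear function of v, illustrating why
    simplex source domains give piecewise-linear activations."""
    u = np.sort(v)[::-1]                      # sort entries in descending order
    css = np.cumsum(u)                        # running sums of sorted entries
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - 1))[0][-1]  # largest index meeting the KKT condition
    theta = (css[rho] - 1) / (rho + 1)        # shared shift that enforces sum = 1
    return np.maximum(v - theta, 0.0)         # shift and clip at zero

y = project_to_simplex(np.array([0.5, 1.2, -0.3]))
# y is nonnegative and sums to 1
```

A polytopic source domain would similarly be handled by projecting onto that polytope; the projection is again piecewise linear.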
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities.
We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities.
We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Neural Bayesian Network Understudy [13.28673601999793]
We show that a neural network can be trained to output conditional probabilities, providing approximately the same functionality as a Bayesian Network.
We propose two training strategies that allow encoding the independence relations inferred from a given causal structure into the neural network.
arXiv Detail & Related papers (2022-11-15T15:56:51Z)
- Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z)
- Biologically-Plausible Determinant Maximization Neural Networks for Blind Separation of Correlated Sources [19.938405188113027]
We propose novel biologically-plausible neural networks for the blind separation of potentially dependent/correlated sources.
We derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains.
We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated source separation problems.
arXiv Detail & Related papers (2022-09-27T09:12:10Z)
- Decomposing neural networks as mappings of correlation functions [57.52754806616669]
We study the mapping between probability distributions implemented by a deep feed-forward network.
We identify essential statistics in the data, as well as different information representations that can be used by neural networks.
arXiv Detail & Related papers (2022-02-10T09:30:31Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Biologically plausible single-layer networks for nonnegative independent component analysis [21.646490546361935]
We seek a biologically plausible single-layer neural network implementation of a blind source separation algorithm.
For biological plausibility, we require the network to satisfy three basic properties of neuronal circuits.
arXiv Detail & Related papers (2020-10-23T19:31:49Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
The primal-dual learning algorithm is developed to train the GNN in a model-free manner, where the knowledge of system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z)
- Blind Bounded Source Separation Using Neural Networks with Local Learning Rules [23.554584457413483]
We propose a new optimization problem, Bounded Similarity Matching (BSM).
A principled derivation of an adaptive BSM algorithm leads to a recurrent neural network with a clipping nonlinearity.
The network adapts by local learning rules, satisfying an important constraint for both biological plausibility and implementability in neuromorphic hardware.
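The clipping nonlinearity here is the elementwise projection onto a bounded (box) source domain. A minimal, hypothetical sketch of one projected recurrent update, with feedforward weights `W` and lateral weights `M` chosen only for illustration (not the paper's exact BSM dynamics):

```python
import numpy as np

def bounded_output_step(x, W, M, y, eta=0.1, lo=-1.0, hi=1.0):
    """One projected recurrent update: feedforward drive W @ x minus
    lateral inhibition M @ y, followed by clipping the outputs to the
    box [lo, hi]^n. Illustrative sketch only."""
    y_new = y + eta * (W @ x - M @ y)
    return np.clip(y_new, lo, hi)  # clipping = projection onto the bounded domain

x = np.array([1.0, -2.0])
W = 2.0 * np.eye(2)   # hypothetical feedforward weights
M = 0.5 * np.eye(2)   # hypothetical lateral weights
y = bounded_output_step(x, W, M, np.zeros(2), eta=1.0)
# outputs stay inside [-1, 1]
```

Iterating such a step to a fixed point, with Hebbian/anti-Hebbian updates for `W` and `M`, is the usual structure of similarity-matching networks; the clipping keeps every output within the presumed bounded set at each step.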
arXiv Detail & Related papers (2020-04-11T20:20:22Z)
- Neural Rule Ensembles: Encoding Sparse Feature Interactions into Neural Networks [3.7277730514654555]
We use decision trees to capture relevant features and their interactions and define a mapping to encode extracted relationships into a neural network.
At the same time, through feature selection, it enables learning of compact representations compared to state-of-the-art tree-based approaches.
arXiv Detail & Related papers (2020-02-11T11:22:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.