A Neural Network with Local Learning Rules for Minor Subspace Analysis
- URL: http://arxiv.org/abs/2102.05501v1
- Date: Wed, 10 Feb 2021 15:44:27 GMT
- Title: A Neural Network with Local Learning Rules for Minor Subspace Analysis
- Authors: Yanis Bahroun and Dmitri B. Chklovskii
- Abstract summary: We introduce a novel similarity matching objective for extracting the minor subspace, Minor Subspace Similarity Matching (MSSM).
We derive an adaptive MSSM algorithm that naturally maps onto a novel neural network with local learning rules, and we give numerical results showing that our method converges at a competitive rate.
- Score: 12.437226707039446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of neuromorphic hardware and modeling of biological neural
networks requires algorithms with local learning rules. Artificial neural
networks using local learning rules to perform principal subspace analysis
(PSA) and clustering have recently been derived from principled objective
functions. However, no biologically plausible networks exist for minor subspace
analysis (MSA), a fundamental signal processing task. MSA extracts the
lowest-variance subspace of the input signal covariance matrix. Here, we
introduce a novel similarity matching objective for extracting the minor
subspace, Minor Subspace Similarity Matching (MSSM). Moreover, we derive an
adaptive MSSM algorithm that naturally maps onto a novel neural network with
local learning rules and gives numerical results showing that our method
converges at a competitive rate.
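To make the task concrete, below is a minimal batch sketch of MSA in NumPy. It is not the paper's adaptive MSSM network (which operates online with local learning rules); the data model and dimensions are illustrative assumptions, and the minor subspace is read off directly from an eigendecomposition of the sample covariance.

```python
# Batch definition of minor subspace analysis (MSA): a hedged illustration,
# not the paper's online MSSM algorithm.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 10, 3                       # samples, input dim, subspace dim
X = rng.standard_normal((n, d)) * np.linspace(1.0, 4.0, d)  # anisotropic inputs

C = np.cov(X, rowvar=False)                 # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)        # eigenvalues in ascending order
V_minor = eigvecs[:, :k]                    # basis of the lowest-variance subspace

Y = X @ V_minor                             # projection onto the minor subspace
print("minor-subspace variance:", np.trace(np.cov(Y, rowvar=False)))
print("total variance:         ", np.trace(C))
```

An online algorithm such as MSSM would instead update its synaptic weights one sample at a time, with each update depending only on quantities locally available at the synapse.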
Related papers
- Leveraging chaos in the training of artificial neural networks [3.379574469735166]
We explore the dynamics of the neural network trajectory along training for unconventionally large learning rates. We show that, for a region of learning-rate values, GD optimization shifts away from a purely exploitation-like algorithm into a regime of exploration-exploitation balance.
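As a toy illustration of these learning-rate regimes (a quadratic objective of our own choosing, not the paper's experiments), consider gradient descent on f(x) = x^2/2, where the update is x <- (1 - lr) * x: it converges monotonically for lr < 1, oscillates while still converging for 1 < lr < 2, and diverges for lr > 2.

```python
# Toy sketch, not from the paper: gradient descent on f(x) = x**2 / 2.
x0, steps = 1.0, 8
for lr in (0.5, 1.9, 2.1):              # monotone, oscillatory, divergent
    x, traj = x0, []
    for _ in range(steps):
        x -= lr * x                     # gradient of f(x) = x**2 / 2 is x
        traj.append(round(x, 3))
    print(f"lr={lr}: {traj}")
```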
arXiv Detail & Related papers (2025-06-10T07:41:58Z)
- Topological Representations of Heterogeneous Learning Dynamics of Recurrent Spiking Neural Networks [16.60622265961373]
Spiking Neural Networks (SNNs) have become an essential paradigm in neuroscience and artificial intelligence.
Recent advances in the literature have studied the network representations of deep neural networks.
arXiv Detail & Related papers (2024-03-19T05:37:26Z)
- Analyzing Populations of Neural Networks via Dynamical Model Embedding [10.455447557943463]
A core challenge in the interpretation of deep neural networks is identifying commonalities between the underlying algorithms implemented by distinct networks trained for the same task.
Motivated by this problem, we introduce DYNAMO, an algorithm that constructs a low-dimensional manifold where each point corresponds to a neural network model, and two points are nearby if the corresponding neural networks enact similar high-level computational processes.
DYNAMO takes as input a collection of pre-trained neural networks and outputs a meta-model that emulates the dynamics of the hidden states as well as the outputs of any model in the collection.
arXiv Detail & Related papers (2023-02-27T19:00:05Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
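A minimal single-layer sketch of this feedback-control idea follows (our simplification under stated assumptions; DFC itself controls a deep network and approximates Gauss-Newton optimization). A proportional controller nudges the output toward the target, and the settled control signal doubles as a local error term for the weight update.

```python
# Hedged toy sketch of feedback-control-based credit assignment on one
# linear layer; not the DFC method itself.
import numpy as np

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((2, 3))   # single linear layer
x = rng.standard_normal(3)              # fixed input pattern
target = np.array([1.0, -1.0])

for _ in range(200):                    # slow plasticity loop
    u = np.zeros(2)
    for _ in range(20):                 # fast controller dynamics
        y = W @ x + u                   # output nudged by the control signal
        u += 0.1 * (target - y)         # proportional feedback
    W += 0.05 * np.outer(u, x)          # local rule: control signal x input
print("output after learning:", W @ x)  # approaches the target
```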
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
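For orientation, one common way to write such an objective (our notation, inferred from the abstract rather than quoted from the paper) couples each node's local loss through a total-variation penalty over the edges of the network:

```latex
\min_{\{w_i\}} \; \sum_{i \in \mathcal{V}} L_i(w_i)
  \;+\; \lambda \sum_{(i,j) \in \mathcal{E}} A_{ij}\, \lVert w_i - w_j \rVert_2
```

Here $L_i$ is node $i$'s local training loss, $A_{ij}$ weights edge $(i,j)$, and $\lambda$ controls how strongly connected nodes are pulled toward shared (clustered) models.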
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
- A simple normative network approximates local non-Hebbian learning in the cortex [12.940770779756482]
Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals.
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
Online algorithms can be implemented by neural networks whose synaptic learning rules resemble the calcium plateau potential-dependent plasticity observed in the cortex.
arXiv Detail & Related papers (2020-10-23T20:49:44Z)
- A biologically plausible neural network for multi-channel Canonical Correlation Analysis [12.940770779756482]
Cortical pyramidal neurons receive inputs from multiple neural populations and integrate these inputs in separate dendritic compartments.
We seek a multi-channel CCA algorithm that can be implemented in a biologically plausible neural network.
For biological plausibility, we require that the network operates in the online setting and its synaptic update rules are local.
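For reference, the classical batch solution to two-channel CCA is sketched below (the offline baseline, not the online biologically plausible network pursued here); the data model and dimensions are illustrative assumptions.

```python
# Classical batch CCA via whitening + SVD; a reference point, not the
# paper's online multi-channel network.
import numpy as np

rng = np.random.default_rng(2)
n, dx, dy = 1000, 5, 4
Z = rng.standard_normal((n, 2))         # shared latent signal across channels
X = Z @ rng.standard_normal((2, dx)) + 0.1 * rng.standard_normal((n, dx))
Y = Z @ rng.standard_normal((2, dy)) + 0.1 * rng.standard_normal((n, dy))
X -= X.mean(0); Y -= Y.mean(0)

Cxx, Cyy, Cxy = X.T @ X / n, Y.T @ Y / n, X.T @ Y / n
Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)

# Whiten each channel; the SVD of the whitened cross-covariance yields the
# canonical directions, and its singular values the canonical correlations.
K = np.linalg.inv(Lx) @ Cxy @ np.linalg.inv(Ly).T
U, s, Vt = np.linalg.svd(K)
print("canonical correlations:", np.round(s, 3))
```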
arXiv Detail & Related papers (2020-10-01T16:17:53Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and we learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Blind Bounded Source Separation Using Neural Networks with Local Learning Rules [23.554584457413483]
We propose a new optimization problem, Bounded Similarity Matching (BSM).
A principled derivation of an adaptive BSM algorithm leads to a recurrent neural network with a clipping nonlinearity.
The network adapts by local learning rules, satisfying an important constraint for both biological plausibility and implementability in neuromorphic hardware.
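As a toy illustration of the kind of dynamics this yields (our sketch of a recurrent network with a clipping nonlinearity, not the derived BSM algorithm), the outputs settle to a fixed point of clipped recurrent dynamics and are therefore bounded by construction.

```python
# Recurrent dynamics with a clipping nonlinearity; illustrative only.
import numpy as np

def clip(v, lo=-1.0, hi=1.0):
    """Clipping nonlinearity: bounds every component to [lo, hi]."""
    return np.minimum(np.maximum(v, lo), hi)

rng = np.random.default_rng(3)
W = 0.5 * rng.standard_normal((2, 4))   # feedforward weights
M = 0.1 * rng.standard_normal((2, 2))   # lateral (recurrent) weights
x = rng.standard_normal(4)

y = np.zeros(2)
for _ in range(50):                     # run the dynamics to a fixed point
    y = clip(W @ x - M @ y)
print("bounded output:", np.round(y, 3))  # every component lies in [-1, 1]
```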
arXiv Detail & Related papers (2020-04-11T20:20:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.