Distinguishing Learning Rules with Brain Machine Interfaces
- URL: http://arxiv.org/abs/2206.13448v1
- Date: Mon, 27 Jun 2022 16:58:30 GMT
- Title: Distinguishing Learning Rules with Brain Machine Interfaces
- Authors: Jacob P. Portes, Christian Schmid, James M. Murray
- Abstract summary: We consider biologically plausible supervised- and reinforcement-learning rules. We derive a metric to distinguish between learning rules by observing changes in the network activity during learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite extensive theoretical work on biologically plausible learning rules,
it has been difficult to obtain clear evidence about whether and how such rules
are implemented in the brain. We consider biologically plausible supervised-
and reinforcement-learning rules and ask whether changes in network activity
during learning can be used to determine which learning rule is being used.
Supervised learning requires a credit-assignment model estimating the mapping
from neural activity to behavior, and, in a biological organism, this model
will inevitably be an imperfect approximation of the ideal mapping, leading to
a bias in the direction of the weight updates relative to the true gradient.
Reinforcement learning, on the other hand, requires no credit-assignment model
and tends to make weight updates following the true gradient direction. We
derive a metric to distinguish between learning rules by observing changes in
the network activity during learning, given that the mapping from brain to
behavior is known by the experimenter. Because brain-machine interface (BMI)
experiments allow for perfect knowledge of this mapping, we focus on modeling a
cursor-control BMI task using recurrent neural networks, showing that learning
rules can be distinguished in simulated experiments using only observations
that a neuroscience experimenter would plausibly have access to.
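The core contrast can be made concrete in a toy model. The sketch below is not the paper's actual RNN model or derived metric; it is a minimal numpy illustration, assuming a linear network, a known decoder `D` standing in for the BMI mapping, an imperfect internal estimate `D_hat` standing in for the credit-assignment model, and node perturbation standing in for the reinforcement-learning rule. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions for a toy linear "network": inputs, recorded neurons, cursor dims.
n_in, n_rec, n_out = 10, 50, 2
D = rng.standard_normal((n_out, n_rec)) / np.sqrt(n_rec)         # known BMI decoder (assumption)
D_hat = D + 0.5 * rng.standard_normal(D.shape) / np.sqrt(n_rec)  # imperfect internal model (assumption)

W = rng.standard_normal((n_rec, n_in)) / np.sqrt(n_in)  # learned input weights
x = rng.standard_normal(n_in)                            # one input condition
y_star = rng.standard_normal(n_out)                      # target cursor position

def true_grad(W):
    """Gradient of 0.5 * ||D @ W @ x - y_star||^2 w.r.t. W, using the true decoder D."""
    err = D @ W @ x - y_star
    return np.outer(D.T @ err, x)

def sl_update(W, lr=0.01):
    """Biased supervised update: the error is backpropagated through the
    animal's estimate D_hat of the decoder rather than through the true D."""
    err = D @ W @ x - y_star  # behavior (and hence the error) comes from the true D
    return -lr * np.outer(D_hat.T @ err, x)

def rl_update(W, lr=0.01, sigma=0.1, n_trials=200):
    """Node-perturbation RL update: perturb activity, reinforce perturbations
    that reduce the loss; an unbiased estimate of the true gradient on average."""
    r0 = W @ x
    loss0 = 0.5 * np.sum((D @ r0 - y_star) ** 2)
    dW = np.zeros_like(W)
    for _ in range(n_trials):
        xi = sigma * rng.standard_normal(n_rec)
        loss = 0.5 * np.sum((D @ (r0 + xi) - y_star) ** 2)
        dW += -(loss - loss0) / sigma**2 * np.outer(xi, x)
    return lr * dW / n_trials

def alignment(dW, W):
    """Cosine similarity between an update and the true loss-decreasing direction."""
    g = -true_grad(W)
    return float(dW.ravel() @ g.ravel() / (np.linalg.norm(dW) * np.linalg.norm(g)))

print("SL alignment with true gradient:", alignment(sl_update(W), W))
print("RL alignment with true gradient:", alignment(rl_update(W), W))
```

In repeated runs of this sketch, the supervised update sits at a roughly fixed angle away from the true gradient, set by the mismatch between D_hat and D, while the reinforcement-learning update scatters around the true gradient direction. A consistent directional bias of this kind is the sort of signature the paper's metric is designed to detect from observed activity changes.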
Related papers
- Towards Biologically Plausible Computing: A Comprehensive Comparison [24.299920289520013]
Backpropagation is a cornerstone algorithm in training neural networks for supervised learning.
The biological plausibility of backpropagation is questioned due to its requirements for weight symmetry, global error computation, and dual-phase training.
In this study, we establish criteria for biological plausibility that a desirable learning algorithm should meet.
arXiv Detail & Related papers (2024-06-23T09:51:20Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z) - The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning in equilibrium recurrent neural networks, deep equilibrium models, and meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z) - Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z) - The brain as a probabilistic transducer: an evolutionarily plausible
network architecture for knowledge representation, computation, and behavior [14.505867475659274]
We offer a general theoretical framework for brain and behavior that is evolutionarily and computationally plausible.
The brain in our abstract model is a network of nodes and edges. Both nodes and edges in our network have weights and activation levels.
By specifying the innate (genetic) components of the network, we show how evolution could endow the network with initial adaptive rules and goals that are then enriched through learning.
arXiv Detail & Related papers (2021-12-26T14:37:47Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - Identifying Learning Rules From Neural Network Observables [26.96375335939315]
We show that different classes of learning rules can be separated solely on the basis of aggregate statistics of the weights, activations, or instantaneous layer-wise activity changes.
Our results suggest that activation patterns, available from electrophysiological recordings of post-synaptic activities, may provide a good basis on which to identify learning rules.
arXiv Detail & Related papers (2020-10-22T14:36:54Z) - Local plasticity rules can learn deep representations using
self-supervised contrastive predictions [3.6868085124383616]
Learning rules that respect biological constraints yet yield deep hierarchical representations are still unknown.
We propose a learning rule that takes inspiration from neuroscience and recent advances in self-supervised deep learning.
We find that networks trained with this self-supervised and local rule build deep hierarchical representations of images, speech and video.
arXiv Detail & Related papers (2020-10-16T09:32:35Z) - Learning to Learn with Feedback and Local Plasticity [9.51828574518325]
We employ meta-learning to discover networks that learn using feedback connections and local, biologically inspired learning rules.
Our experiments show that meta-trained networks effectively use feedback connections to perform online credit assignment in multi-layer architectures.
arXiv Detail & Related papers (2020-06-16T22:49:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.