Behavioral Experiments for Understanding Catastrophic Forgetting
- URL: http://arxiv.org/abs/2110.10570v2
- Date: Fri, 22 Oct 2021 11:22:11 GMT
- Title: Behavioral Experiments for Understanding Catastrophic Forgetting
- Authors: Samuel J. Bell and Neil D. Lawrence
- Abstract summary: We apply the techniques of experimental psychology to investigating catastrophic forgetting in neural networks.
We present a series of controlled experiments with two-layer ReLU networks, and exploratory results revealing a new understanding of the behavior of catastrophic forgetting.
- Score: 9.679643351149215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we explore whether the fundamental tool of experimental
psychology, the behavioral experiment, has the power to generate insight not
only into humans and animals, but artificial systems too. We apply the
techniques of experimental psychology to investigating catastrophic forgetting
in neural networks. We present a series of controlled experiments with
two-layer ReLU networks, and exploratory results revealing a new understanding
of the behavior of catastrophic forgetting. Alongside our empirical findings,
we demonstrate an alternative, behavior-first approach to investigating neural
network phenomena.
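To make the experimental setup concrete, below is a minimal sketch of a sequential-learning run on a two-layer ReLU network, in which performance on a first task is measured before and after training on a second task. The synthetic regression tasks, optimizer, and hyperparameters are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch (not the paper's protocol): train a two-layer ReLU network
# on task A, then on task B, and compare task A loss before and after to
# quantify catastrophic forgetting.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(n=512, d=16):
    """Synthetic regression task: random linear teacher plus noise (assumed)."""
    w = torch.randn(d, 1)
    x = torch.randn(n, d)
    y = x @ w + 0.1 * torch.randn(n, 1)
    return x, y

def train(model, x, y, steps=500, lr=1e-2):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

@torch.no_grad()
def evaluate(model, x, y):
    return nn.MSELoss()(model(x), y).item()

# Two-layer ReLU network, the model class studied in the paper.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

task_a, task_b = make_task(), make_task()

train(model, *task_a)
loss_a_before = evaluate(model, *task_a)

train(model, *task_b)  # sequential training on a second task
loss_a_after = evaluate(model, *task_a)

print(f"Task A loss before task B: {loss_a_before:.4f}")
print(f"Task A loss after  task B: {loss_a_after:.4f}  (increase indicates forgetting)")
```

A behavioral experiment in the paper's sense varies controlled factors of such a protocol (e.g., task similarity or training duration) and treats the measured forgetting as the behavioral outcome.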
Related papers
- A Fuzzy-based Approach to Predict Human Interaction by Functional Near-Infrared Spectroscopy [25.185426359719454]
The paper introduces a Fuzzy-based Attention mechanism (Fuzzy Attention Layer), a novel computational approach to enhancing the interpretability and efficacy of neural models in psychological research.
By leveraging fuzzy logic, the Fuzzy Attention Layer is capable of learning and identifying interpretable patterns of neural activity.
arXiv Detail & Related papers (2024-09-26T09:20:12Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Goal-Driven Approach to Systems Neuroscience [2.6451153531057985]
Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat this dense sensory information to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
arXiv Detail & Related papers (2023-11-05T16:37:53Z)
- Exploring Behavior Discovery Methods for Heterogeneous Swarms of Limited-Capability Robots [9.525230669966415]
We study the problem of determining the emergent behaviors that are possible given a functionally heterogeneous swarm of robots.
To the best of our knowledge, these are the first known emergent behaviors for heterogeneous swarms of computation-free agents.
arXiv Detail & Related papers (2023-10-25T19:20:32Z)
- Neural Networks from Biological to Artificial and Vice Versa [6.85316573653194]
A key contribution of this paper is the investigation of the impact of a dead neuron on the performance of artificial neural networks (ANNs).
The aim is to assess the potential application of the findings in the biological domain; the expected results may have significant implications for the development of effective treatment strategies for neurological disorders.
arXiv Detail & Related papers (2023-06-05T17:30:07Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in Deep Feedback Control (DFC) allows learning forward and feedback connections simultaneously, using a learning rule that is fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- How Do You Act? An Empirical Study to Understand Behavior of Deep Reinforcement Learning Agents [2.3268634502937937]
The demand for more transparency of decision-making processes of deep reinforcement learning agents is greater than ever.
In this study, we characterize the learned representations of an agent's policy network through its activation space.
We show that a healthy agent's behavior is characterized by a distinct correlation pattern between the network's layer activations and the performed actions.
arXiv Detail & Related papers (2020-04-07T10:08:55Z)
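As a toy illustration of the activation-action analysis described in the last entry above (the network, data, and correlation measure below are placeholders, not the study's agents or method), one can record hidden-layer activations alongside the chosen actions and inspect their correlation structure:

```python
# Toy illustration (not the study's setup): correlate hidden-layer activations
# of a small policy network with the actions it selects.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

n_obs, n_actions, n_samples = 8, 4, 1000

# Untrained stand-in policy network; the cited work analyzes trained DRL agents.
policy = nn.Sequential(nn.Linear(n_obs, 32), nn.ReLU(), nn.Linear(32, n_actions))

obs = torch.randn(n_samples, n_obs)
with torch.no_grad():
    hidden = torch.relu(policy[0](obs))                              # layer activations
    actions = torch.distributions.Categorical(logits=policy(obs)).sample()  # sampled actions

# Correlation between each hidden unit and each action indicator.
hidden_np = hidden.numpy()
action_onehot = np.eye(n_actions)[actions.numpy()]
corr = np.corrcoef(hidden_np.T, action_onehot.T)[:32, 32:]
print("activation-action correlation matrix shape:", corr.shape)    # (32, 4)
```

In the cited study, distinct structure in such correlation patterns is what distinguishes normally behaving agents.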
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.