Avoiding Catastrophic Forgetting in Visual Classification Using Human Concept Formation
- URL: http://arxiv.org/abs/2402.16933v1
- Date: Mon, 26 Feb 2024 17:20:16 GMT
- Title: Avoiding Catastrophic Forgetting in Visual Classification Using Human Concept Formation
- Authors: Nicki Barari, Xin Lian, Christopher J. MacLellan
- Abstract summary: We propose Cobweb4V, a novel visual classification approach that builds on Cobweb, a human-like learning system.
We conduct a comprehensive evaluation showcasing Cobweb4V's proficiency in learning visual concepts.
These characteristics align with learning strategies in human cognition, positioning Cobweb4V as a promising alternative to neural network approaches.
- Score: 0.8159711103888622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have excelled in machine learning, particularly in
vision tasks; however, they often suffer from catastrophic forgetting when
learning new tasks sequentially. In this work, we propose Cobweb4V, a novel
visual classification approach that builds on Cobweb, a human-like learning
system inspired by the way humans incrementally learn new concepts over
time. In this research, we conduct a comprehensive evaluation showing that
Cobweb4V learns visual concepts proficiently, requires less data than
traditional methods to achieve effective learning outcomes, maintains stable
performance over time, and achieves commendable asymptotic behavior, all
without catastrophic forgetting. These characteristics align with learning
strategies in human cognition, positioning Cobweb4V as a promising
alternative to neural network approaches.
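Cobweb-style learners sort each new instance into a hierarchical concept tree, choosing at every node whichever operation (add to a child, create a new child, merge, or split) maximizes category utility, a measure of how much a partition makes attribute values more predictable. As a rough illustration of that core measure, following Fisher's classic formulation for nominal attributes rather than Cobweb4V's visual variant, here is a minimal sketch:

```python
from collections import Counter

def category_utility(partition):
    """Category utility of a partition (a list of clusters, each a list of
    dicts mapping attribute -> nominal value), per Fisher's classic Cobweb.
    Clusters score well when they make attribute values more predictable
    than the base rates over all items do."""
    all_items = [item for cluster in partition for item in cluster]
    n = len(all_items)

    def expected_correct(items):
        # sum over attributes of sum_v P(attr = v)^2, estimated from `items`
        counts = {}
        for item in items:
            for attr, val in item.items():
                counts.setdefault(attr, Counter())[val] += 1
        return sum(sum((c / len(items)) ** 2 for c in ctr.values())
                   for ctr in counts.values())

    base = expected_correct(all_items)
    gain = sum(len(c) / n * (expected_correct(c) - base) for c in partition)
    return gain / len(partition)

# Two clean clusters score higher than one mixed cluster:
items = [{"shape": "square"}] * 3 + [{"shape": "circle"}] * 3
print(category_utility([items[:3], items[3:]]))  # 0.25
print(category_utility([items]))                 # 0.0
```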
Related papers
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to the limited reasoning abilities of purely neural models is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
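The NeSy recipe this summary describes is often illustrated with the MNIST-addition benchmark: a neural network outputs a distribution over symbols for each image, and a logical rule (here, addition) aggregates them into a prediction whose likelihood trains the network. A minimal sketch of that generic setup, with illustrative numbers rather than this paper's specific transfer-learning method:

```python
import numpy as np

def nesy_sum_nll(p_digit_a, p_digit_b, observed_sum):
    """Negative log-likelihood that two perceived digits add up to
    `observed_sum`. The neural part supplies softmax outputs over 0-9;
    the 'reasoner' here is just the addition relation, marginalised
    over all symbol pairs consistent with the observation."""
    p = sum(p_digit_a[a] * p_digit_b[b]
            for a in range(10) for b in range(10)
            if a + b == observed_sum)
    return -np.log(p + 1e-12)

# A perception net confident that the two images show 3 and 5:
p_a = np.full(10, 0.01); p_a[3] = 0.91
p_b = np.full(10, 0.01); p_b[5] = 0.91
print(nesy_sum_nll(p_a, p_b, observed_sum=8))   # low loss: 3 + 5 = 8
print(nesy_sum_nll(p_a, p_b, observed_sum=4))   # high loss
```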
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
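The claim that Hebbian plasticity can extract the principal subspace of neural activity has a classic non-spiking ancestor in Oja's rule, where a Hebbian update with a decay term converges to the leading principal component. A minimal sketch on toy data, as an illustration of the principle rather than the paper's spiking, lateral-connection method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with anisotropic covariance; by construction the leading
# principal direction is [2, 1] / sqrt(5).
R = np.array([[2.0, 1.0], [-1.0, 2.0]]) / np.sqrt(5.0)   # rotation
X = (rng.normal(size=(5000, 2)) * [3.0, 0.5]) @ R

w = rng.normal(size=2)
lr = 1e-3
for x in X:
    y = w @ x                   # Hebbian post-synaptic response
    w += lr * y * (x - y * w)   # Oja's rule: Hebbian growth plus decay

print(w / np.linalg.norm(w))    # approx. +/- [0.894, 0.447]
```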
- Degraded Polygons Raise Fundamental Questions of Neural Network Perception [5.423100066629618]
We revisit the task of recovering images under degradation, first introduced over 30 years ago in the Recognition-by-Components theory of human vision.
We implement the Automated Shape Recoverability Test for rapidly generating large-scale datasets of perimeter-degraded regular polygons.
We find that neural networks' behavior on this simple task conflicts with human behavior.
arXiv Detail & Related papers (2023-06-08T06:02:39Z)
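The test described above hinges on generating shapes whose perimeters are partially erased. A minimal sketch of one way to produce such stimuli with Pillow; the function name and parameters are hypothetical, and the paper's Automated Shape Recoverability Test pipeline may differ in detail:

```python
import numpy as np
from PIL import Image, ImageDraw

def degraded_polygon(n_sides=6, removal=0.4, size=128, seed=0):
    """Regular polygon whose perimeter has roughly a fraction `removal`
    of its short edge segments erased (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    img = Image.new("L", (size, size), 0)
    draw = ImageDraw.Draw(img)
    theta = 2 * np.pi * np.arange(n_sides + 1) / n_sides
    verts = np.stack([size / 2 + 0.4 * size * np.cos(theta),
                      size / 2 + 0.4 * size * np.sin(theta)], axis=1)
    for p, q in zip(verts[:-1], verts[1:]):          # each polygon edge
        for t in np.linspace(0.0, 1.0, 20, endpoint=False):
            if rng.random() > removal:               # keep this sub-segment
                a, b = p + t * (q - p), p + (t + 0.05) * (q - p)
                draw.line([tuple(a), tuple(b)], fill=255)
    return img

degraded_polygon().save("degraded_hexagon.png")
```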
- Activation Learning by Local Competitions [4.441866681085516]
We develop a biology-inspired learning rule that discovers features by local competitions among neurons.
It is demonstrated that the unsupervised features learned by this local learning rule can serve as a pre-training model.
arXiv Detail & Related papers (2022-09-26T10:43:29Z)
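Feature discovery through local competition is commonly illustrated with winner-take-all competitive learning, where only the most active neuron updates and drifts toward the current input. A minimal sketch in that classic style, as an assumption for illustration rather than the paper's exact rule:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_units, lr = 64, 16, 0.02
W = rng.normal(scale=0.1, size=(n_units, n_in))   # one row per neuron

for _ in range(5000):
    x = rng.normal(size=n_in)          # stand-in for an input patch
    x /= np.linalg.norm(x)
    winner = np.argmax(W @ x)          # local competition among neurons
    W[winner] += lr * (x - W[winner])  # only the winner moves toward x

# Each unit's weight vector ends up summarising the inputs it wins.
print("unit norms:", np.round(np.linalg.norm(W, axis=1), 2))
```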
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
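To inspect what information a perturbation carries, one first has to generate it; the one-step fast gradient sign method (FGSM) is the simplest standard choice (an illustrative pick, since the summary does not say which attacks the paper analyses). A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, label, eps=8 / 255):
    """One-step FGSM: perturb along the sign of the input gradient.
    Returns the perturbation itself so it can be visualised or fed
    to other classifiers for inspection."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return eps * x.grad.sign()

# Usage (illustrative): delta = fgsm_perturbation(net, images, labels)
```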
- Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z)
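The key idea, strong feedback that drives activity toward targets while each weight update stays local in space and time, can be loosely illustrated with a target-based delta rule: a feedback pathway supplies per-layer target activities, and every update uses only that layer's pre-synaptic input and its own activity gap. This is a generic sketch of that flavor of rule, not the paper's Deep Feedback Control method:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(8, 4))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(1, 8))   # hidden -> output
lr = 0.05

for _ in range(3000):
    x = rng.normal(size=4)
    target = np.array([x[0] - 2.0 * x[1]])    # toy regression target
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = target - y                            # output error
    h_tgt = h + W2.T @ e                      # feedback-supplied hidden target
    # Local updates: each layer uses only its own input and activity gap.
    W2 += lr * np.outer(e, h)
    W1 += lr * np.outer((h_tgt - h) * (1 - h ** 2), x)
```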
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
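CycleGAN learns such an unpaired mapping with two generators trained so that mapping forward and back reproduces the input. A minimal sketch of the cycle-consistency term for activity vectors; the generator names are illustrative and the adversarial discriminator terms are omitted for brevity:

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(G, F_inv, x_pre, x_post, lam=10.0):
    """CycleGAN-style cycle term for unpaired activity vectors:
    G maps pre-learning -> post-learning activity, F_inv maps back."""
    fwd = F.l1_loss(F_inv(G(x_pre)), x_pre)    # pre -> post -> pre
    bwd = F.l1_loss(G(F_inv(x_post)), x_post)  # post -> pre -> post
    return lam * (fwd + bwd)
```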
- Wide Neural Networks Forget Less Catastrophically [39.907197907411266]
We study the impact of "width" of the neural network architecture on catastrophic forgetting.
We study the learning dynamics of the network from various perspectives.
arXiv Detail & Related papers (2021-10-21T23:49:23Z)
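The basic measurement behind such a study is straightforward to sketch: train a network of a given width on task A, then on task B, and record how much accuracy on A drops. A self-contained toy protocol in PyTorch; the synthetic tasks and hyperparameters are assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_task(seed, n=2000, d=32):
    """Synthetic 4-class task; different seeds give different tasks."""
    g = torch.Generator().manual_seed(seed)
    proto = torch.randn(4, d, generator=g)
    y = torch.randint(0, 4, (n,), generator=g)
    return proto[y] + 0.5 * torch.randn(n, d, generator=g), y

def accuracy(net, task):
    x, y = task
    return (net(x).argmax(1) == y).float().mean().item()

def forgetting_for_width(width, steps=300):
    task_a, task_b = make_task(0), make_task(1)
    net = nn.Sequential(nn.Linear(32, width), nn.ReLU(), nn.Linear(width, 4))
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    for task in (task_a, task_b):            # sequential training, no replay
        for _ in range(steps):
            x, y = task
            opt.zero_grad()
            F.cross_entropy(net(x), y).backward()
            opt.step()
        if task is task_a:
            acc_a = accuracy(net, task_a)
    return acc_a - accuracy(net, task_a)     # accuracy drop = forgetting

for w in (16, 256):
    print(w, forgetting_for_width(w))
```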
- Training Spiking Neural Networks Using Lessons From Deep Learning [28.827506468167652]
The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like.
Some ideas are well accepted and commonly used amongst the neuromorphic engineering community, while others are presented or justified for the first time here.
A series of companion interactive tutorials complementary to this paper using our Python package, snnTorch, are also made available.
arXiv Detail & Related papers (2021-09-27T09:28:04Z)
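snnTorch builds spiking models from leaky integrate-and-fire neurons whose membrane state is threaded through time, in the style of its documented tutorials. A minimal sketch with illustrative hyperparameters:

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)          # leaky integrate-and-fire neuron
mem = lif.init_leaky()             # initial membrane potential
spikes = []
for _ in range(100):               # simulate 100 time steps
    cur = 0.5 * torch.rand(1)      # illustrative input current
    spk, mem = lif(cur, mem)       # integrate; spike on threshold crossing
    spikes.append(spk)
print("spike count:", int(torch.stack(spikes).sum()))
```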
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)
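The catapult mechanism can be reproduced in the smallest solvable case: a two-layer scalar model f(x) = u*v*x trained by gradient descent on a single example. Below the critical learning rate the loss decreases monotonically; somewhat above it, the loss first grows, then the dynamics land in a flatter region (smaller u^2 + v^2) and converge. A minimal sketch with illustrative initial values and learning rates:

```python
import numpy as np

def run(lr, steps=60):
    """Gradient descent on L = (u*v - 1)^2 / 2, i.e. the two-layer
    scalar model f(x) = u*v*x fit to the single example (x, y) = (1, 1)."""
    u, v = 2.0, 0.1
    losses = []
    for _ in range(steps):
        e = u * v - 1.0
        losses.append(0.5 * e * e)
        u, v = u - lr * e * v, v - lr * e * u
    return losses

for lr in (0.1, 0.6):    # below vs. above the critical learning rate
    L = run(lr)
    print(f"lr={lr}: first losses {np.round(L[:4], 3)}, final {L[-1]:.2e}")
# lr=0.6 spikes (0.32 -> 0.69 -> 1.12) before converging: the catapult.
```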
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.