Look At Me, No Replay! SurpriseNet: Anomaly Detection Inspired Class
Incremental Learning
- URL: http://arxiv.org/abs/2310.20052v1
- Date: Mon, 30 Oct 2023 22:16:26 GMT
- Title: Look At Me, No Replay! SurpriseNet: Anomaly Detection Inspired Class
Incremental Learning
- Authors: Anton Lee and Yaqian Zhang and Heitor Murilo Gomes and Albert Bifet
and Bernhard Pfahringer
- Abstract summary: Continual learning aims to create artificial neural networks capable of accumulating knowledge and skills through incremental training on a sequence of tasks.
The main challenge of continual learning is catastrophic interference, wherein new knowledge overrides or interferes with past knowledge, leading to forgetting.
A proposed solution, SurpriseNet, addresses catastrophic interference by employing a parameter isolation method and learning cross-task knowledge using an auto-encoder inspired by anomaly detection.
- Score: 14.529164755845688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continual learning aims to create artificial neural networks capable of
accumulating knowledge and skills through incremental training on a sequence of
tasks. The main challenge of continual learning is catastrophic interference,
wherein new knowledge overrides or interferes with past knowledge, leading to
forgetting. An associated issue is the problem of learning "cross-task
knowledge," where models fail to acquire and retain knowledge that helps
differentiate classes across task boundaries. A common solution to both
problems is "replay," where a limited buffer of past instances is utilized to
learn cross-task knowledge and mitigate catastrophic interference. However, a
notable drawback of these methods is their tendency to overfit the limited
replay buffer. In contrast, our proposed solution, SurpriseNet, addresses
catastrophic interference by employing a parameter isolation method and
learning cross-task knowledge using an auto-encoder inspired by anomaly
detection. SurpriseNet is applicable to both structured and unstructured data,
as it does not rely on image-specific inductive biases. We have conducted
empirical experiments demonstrating the strengths of SurpriseNet on various
traditional vision continual-learning benchmarks, as well as on structured data
datasets. Source code is made available at https://doi.org/10.5281/zenodo.8247906
and https://github.com/tachyonicClock/SurpriseNet-CIKM-23
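The mechanism the abstract describes lends itself to a short illustration. Below is a minimal sketch (assumed names and architecture, not SurpriseNet's actual code) of anomaly-detection-style task inference: one autoencoder per task, with a test instance routed to whichever task's autoencoder is least "surprised", i.e. reconstructs it with the lowest error.

```python
# Minimal sketch of anomaly-detection-inspired task inference.
# All names and architectural details are illustrative assumptions.
import torch
import torch.nn as nn

class TaskAutoEncoder(nn.Module):
    """Per-task autoencoder; reconstruction error is the 'surprise' signal."""
    def __init__(self, in_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

@torch.no_grad()
def infer_task(x: torch.Tensor, task_aes: list) -> int:
    """Return the index of the task whose autoencoder reconstructs x best."""
    errors = [((ae(x) - x) ** 2).mean().item() for ae in task_aes]
    return min(range(len(errors)), key=errors.__getitem__)

# Usage: one autoencoder is trained per task (parameter isolation); at test
# time an instance is routed to the classifier head of the inferred task.
aes = [TaskAutoEncoder(in_dim=784) for _ in range(3)]
print("inferred task:", infer_task(torch.randn(1, 784), aes))
```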
Related papers
- Replay Consolidation with Label Propagation for Continual Object Detection [7.454468349023651]
Continual Learning for Object Detection poses additional difficulties compared to CL for Classification.
In CLOD, images from previous tasks may contain unknown classes that could reappear labeled in future tasks.
We propose a novel technique to solve CLOD called Replay Consolidation with Label Propagation for Object Detection.
arXiv Detail & Related papers (2024-09-09T14:16:27Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from previously experienced tasks when learning new tasks.
However, storing raw past data is often impractical due to memory constraints or data-privacy concerns.
As a replacement, data-free replay methods synthesize replay samples by inverting the classification model (see the sketch after this entry).
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
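The data-free replay idea above can be sketched briefly: instead of storing past data, a pseudo-sample for an old class is synthesized by "inverting" the frozen classifier, optimizing random noise until the model confidently assigns it to that class. This is a generic model-inversion sketch under assumed names and hyper-parameters, not the paper's specific method.

```python
# Hypothetical sketch of data-free replay via model inversion.
import torch
import torch.nn.functional as F

def invert_class(classifier: torch.nn.Module, target_class: int,
                 in_dim: int = 784, steps: int = 200, lr: float = 0.1):
    classifier.eval()
    x = torch.randn(1, in_dim, requires_grad=True)  # start from pure noise
    opt = torch.optim.Adam([x], lr=lr)              # optimize the input only
    for _ in range(steps):
        opt.zero_grad()
        # Cross-entropy toward the target class pushes x to resemble a
        # training example of that class, with no stored real data.
        loss = F.cross_entropy(classifier(x), torch.tensor([target_class]))
        loss.backward()
        opt.step()
    return x.detach()  # use as a replay sample for the old class
```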
- Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning [64.14254712331116]
Non-exemplar class incremental learning aims to learn both the new and old tasks without accessing any training data from the past.
We propose a novel framework of fine-grained knowledge selection and restoration.
arXiv Detail & Related papers (2023-12-20T02:34:11Z)
- Negotiated Representations to Prevent Forgetting in Machine Learning Applications [0.0]
Catastrophic forgetting is a significant challenge in the field of machine learning.
We propose a novel method for preventing catastrophic forgetting in machine learning applications.
arXiv Detail & Related papers (2023-11-30T22:43:50Z)
- Online Continual Learning via the Knowledge Invariant and Spread-out Properties [4.109784267309124]
A key challenge in continual learning is catastrophic forgetting.
We propose a new method, named Online Continual Learning via the Knowledge Invariant and Spread-out Properties (OCLKISP).
We empirically evaluate our proposed method on four popular continual-learning benchmarks: Split CIFAR-100, Split SVHN, Split CUB-200 and Split Tiny-ImageNet.
arXiv Detail & Related papers (2023-02-02T04:03:38Z)
- Learning with Recoverable Forgetting [77.56338597012927]
Learning wIth Recoverable Forgetting explicitly handles the task- or sample-specific knowledge removal and recovery.
Specifically, LIRF brings in two innovative schemes, namely knowledge deposit and withdrawal.
We conduct experiments on several datasets and demonstrate that the proposed LIRF strategy yields encouraging results with strong generalization capability.
arXiv Detail & Related papers (2022-07-17T16:42:31Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to only activate and select sparse neurons for learning current and past tasks at any stage (see the sketch after this entry).
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
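A minimal sketch of the sparse-neuron idea above: give each task a binary mask over a layer's hidden units, so different tasks update largely disjoint parameters. The fixed random masking here is an illustrative assumption; the paper's actual Bayesian selection mechanism is more involved.

```python
# Hypothetical sketch: task-specific sparse activation via per-task masks.
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, n_tasks: int,
                 density: float = 0.2):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # One fixed random binary mask per task, keeping ~density of units.
        self.register_buffer(
            "masks", (torch.rand(n_tasks, out_dim) < density).float())

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Masked-out units output zero, so their weights receive no gradient.
        return self.linear(x) * self.masks[task_id]

layer = MaskedLinear(784, 256, n_tasks=5)
h = layer(torch.randn(8, 784), task_id=0)
```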
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Anomaly Detection in Video via Self-Supervised and Multi-Task Learning [113.81927544121625]
Anomaly detection in video is a challenging computer vision problem.
In this paper, we approach anomalous event detection in video through self-supervised and multi-task learning at the object level.
arXiv Detail & Related papers (2020-11-15T10:21:28Z)
- Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes [0.0]
Continual algorithms are designed to accumulate and improve knowledge across a curriculum of learning experiences without forgetting.
Generative Replay consists of regenerating past learning experiences with a generative model in order to remember them (see the sketch after this entry).
We show that they are very promising methods for continual learning.
arXiv Detail & Related papers (2020-07-01T13:44:33Z)
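Generative replay, as described in the entry above, can be summarized in a few lines: when training on a new task, past experiences are regenerated by a generative model and mixed into the batch, with the previous solver providing pseudo-labels. `generator.sample` and both model interfaces are assumed placeholders, not a specific paper's API.

```python
# Hypothetical sketch of generative replay with assumed interfaces.
import torch

def replay_batch(new_x: torch.Tensor, new_y: torch.Tensor,
                 generator, old_solver, n_replay: int):
    gen_x = generator.sample(n_replay)            # regenerate past experiences
    with torch.no_grad():
        gen_y = old_solver(gen_x).argmax(dim=1)   # pseudo-label with old model
    # Train the current solver (and generator) on the combined batch so new
    # knowledge is acquired without forgetting the regenerated past.
    return torch.cat([new_x, gen_x]), torch.cat([new_y, gen_y])
```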