Storing Encoded Episodes as Concepts for Continual Learning
- URL: http://arxiv.org/abs/2007.06637v1
- Date: Fri, 26 Jun 2020 04:15:56 GMT
- Title: Storing Encoded Episodes as Concepts for Continual Learning
- Authors: Ali Ayub, Alan R. Wagner
- Abstract summary: Two main challenges faced by continual learning approaches are catastrophic forgetting and memory limitations on the storage of data.
We propose a cognitively-inspired approach which trains autoencoders with Neural Style Transfer to encode and store images.
Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets, while requiring 78% less storage space.
- Score: 22.387008072671005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The two main challenges faced by continual learning approaches are
catastrophic forgetting and memory limitations on the storage of data. To cope
with these challenges, we propose a novel, cognitively-inspired approach which
trains autoencoders with Neural Style Transfer to encode and store images.
Reconstructed images from encoded episodes are replayed when training the
classifier model on a new task to avoid catastrophic forgetting. The loss
function for the reconstructed images is weighted to reduce its effect during
classifier training to cope with image degradation. When the system runs out of
memory, the encoded episodes are converted into centroids and covariance
matrices, which are used to generate pseudo-images during classifier training,
keeping classifier performance stable with less memory. Our approach increases
classification accuracy by 13-17% over state-of-the-art methods on benchmark
datasets, while requiring 78% less storage space.
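The weighted replay loss and the centroid-plus-covariance fallback lend themselves to a compact illustration. The sketch below is a minimal PyTorch rendering under my own assumptions: the function names, the replay weight, and the diagonal jitter on the covariance are illustrative choices, not details taken from the paper.

```python
# A minimal sketch of the two replay mechanisms described in the abstract,
# assuming a generic PyTorch setup; names, the replay weight, and the
# covariance jitter are illustrative, not the authors' code.
import torch
import torch.nn.functional as F

def replay_weighted_loss(logits_new, y_new, logits_replay, y_replay, w=0.5):
    # Down-weight the loss on reconstructed (degraded) replay images so they
    # do not dominate classifier training on the new task.
    return F.cross_entropy(logits_new, y_new) + w * F.cross_entropy(logits_replay, y_replay)

def encodings_to_gaussian(encodings):
    # When memory runs out, summarize a class's stored encodings ([N, D])
    # as a centroid and covariance matrix.
    centroid = encodings.mean(dim=0)
    cov = torch.cov(encodings.T) + 1e-4 * torch.eye(encodings.shape[1])  # jitter keeps cov positive-definite
    return centroid, cov

def sample_pseudo_encodings(centroid, cov, n):
    # Pseudo-encodings drawn from the class Gaussian; passing them through the
    # autoencoder's decoder yields pseudo-images for classifier replay.
    return torch.distributions.MultivariateNormal(centroid, covariance_matrix=cov).sample((n,))
```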
Related papers
- HybridFlow: Infusing Continuity into Masked Codebook for Extreme Low-Bitrate Image Compression [51.04820313355164]
HybridFlow combines a continuous-feature-based stream and a codebook-based stream to achieve both high perceptual quality and high fidelity at extremely low bitrates.
Experimental results demonstrate superior performance across several datasets at extremely low bitrates.
arXiv Detail & Related papers (2024-04-20T13:19:08Z)
- How Much Training Data is Memorized in Overparameterized Autoencoders? An Inverse Problem Perspective on Memorization Evaluation
We propose an inverse problem perspective for the study of memorization.
We use the trained autoencoder to implicitly define a regularizer for the particular training dataset that we aim to retrieve.
We show that our method significantly outperforms previous memorization-evaluation methods that recover training data from autoencoders.
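One plausible way to read this, sketched below under my own assumptions (the degradation operator `A`, the initialization, and all hyperparameters are illustrative, not the paper's setup), is to use the autoencoder's reconstruction error as the implicit regularizer when solving the inverse problem.

```python
# A hedged sketch: recover data by fitting an observation y = A(x) while using
# the trained autoencoder's reconstruction error as an implicit regularizer.
# `A`, `autoencoder`, `x_init`, and all hyperparameters are assumptions.
import torch

def recover(y, A, autoencoder, x_init, steps=500, lr=1e-2, lam=1.0):
    x = x_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        data_fit = ((A(x) - y) ** 2).mean()       # fidelity to the observation
        reg = ((autoencoder(x) - x) ** 2).mean()  # stay near the AE's learned manifold
        (data_fit + lam * reg).backward()
        opt.step()
    return x.detach()
```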
arXiv Detail & Related papers (2023-10-04T15:36:33Z)
- Masked Autoencoders are Efficient Class Incremental Learners [64.90846899051164]
Class Incremental Learning (CIL) aims to sequentially learn new classes while avoiding catastrophic forgetting of previous knowledge.
We propose to use Masked Autoencoders (MAEs) as efficient learners for CIL.
arXiv Detail & Related papers (2023-08-24T02:49:30Z)
- DiffusePast: Diffusion-based Generative Replay for Class Incremental Semantic Segmentation [73.54038780856554]
Class Incremental Semantic Segmentation (CISS) extends the traditional segmentation task by incrementally learning newly added classes.
Previous work has introduced generative replay, which involves replaying old class samples generated from a pre-trained GAN.
We propose DiffusePast, a novel framework featuring a diffusion-based generative replay module that generates semantically accurate images with more reliable masks guided by different instructions.
arXiv Detail & Related papers (2023-08-02T13:13:18Z)
- SC-VAE: Sparse Coding-based Variational Autoencoder with Learned ISTA [0.6770292596301478]
We introduce a new VAE variant, termed sparse coding-based VAE with learned ISTA (SC-VAE), which integrates sparse coding within the variational autoencoder framework.
Experiments on two image datasets demonstrate that our model achieves improved image reconstruction results compared to state-of-the-art methods.
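For context, learned ISTA (LISTA) unrolls the classic soft-thresholding iteration of sparse coding into a small trainable network. The sketch below shows the standard LISTA form only; layer sizes and the unrolling depth are illustrative assumptions, and this is not the SC-VAE code.

```python
# A minimal LISTA encoder of the kind SC-VAE builds on: ISTA's update
# z <- soft_threshold(Wx + Sz, theta), unrolled for a few learned steps.
import torch
import torch.nn as nn

class LISTA(nn.Module):
    def __init__(self, x_dim, z_dim, n_steps=3):
        super().__init__()
        self.W = nn.Linear(x_dim, z_dim, bias=False)
        self.S = nn.Linear(z_dim, z_dim, bias=False)
        self.theta = nn.Parameter(torch.full((z_dim,), 0.1))  # learned threshold
        self.n_steps = n_steps

    def forward(self, x):
        b = self.W(x)
        z = torch.zeros_like(b)
        for _ in range(self.n_steps):
            v = b + self.S(z)
            z = torch.sign(v) * torch.relu(v.abs() - self.theta)  # soft threshold
        return z  # sparse code
```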
arXiv Detail & Related papers (2023-03-29T13:18:33Z)
- Memory-Based Label-Text Tuning for Few-Shot Class-Incremental Learning [20.87638654650383]
We propose leveraging label-text information by adopting a memory prompt.
The memory prompt learns new data sequentially while storing previous knowledge.
Experiments show that our proposed method outperforms all prior state-of-the-art approaches.
arXiv Detail & Related papers (2022-07-03T13:15:45Z)
- Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning [73.24988226158497]
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
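The abstract names the distillation loss but not its exact form; below is one plausible shape for importance-weighted feature distillation, with the weights and all names being my own assumptions rather than the paper's definition.

```python
# A hedged sketch of importance-weighted feature distillation: penalize drift
# from the frozen previous-task model's features, scaled per dimension.
# The source of the importance weights is an assumption.
import torch

def feature_distillation_loss(feat_new, feat_old, importance):
    # feat_old comes from the frozen old model; importance broadcasts over features.
    return (importance * (feat_new - feat_old.detach()) ** 2).mean()
```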
arXiv Detail & Related papers (2021-06-17T17:56:08Z)
- EEC: Learning to Encode and Regenerate Images for Continual Learning [9.89901717499058]
We train autoencoders with Neural Style Transfer to encode and store images.
Reconstructed images from encoded episodes are replayed in order to avoid catastrophic forgetting.
Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets.
arXiv Detail & Related papers (2021-01-13T06:43:10Z)
- Memory-Efficient Incremental Learning Through Feature Adaptation [71.1449769528535]
We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes.
Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly.
Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks.
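A back-of-envelope illustration of why storing embeddings saves memory; the image resolution and embedding width below are typical values of my own choosing, not numbers from the paper.

```python
# Illustrative only: raw-image exemplar vs. feature-embedding exemplar.
bytes_per_image = 224 * 224 * 3      # uint8 RGB image: ~147 KB
bytes_per_embedding = 512 * 4        # 512-d float32 descriptor: 2 KB
print(bytes_per_image // bytes_per_embedding)  # ~73x more exemplars in the same budget
```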
arXiv Detail & Related papers (2020-04-01T21:16:05Z)
- Content Adaptive and Error Propagation Aware Deep Video Compression [110.31693187153084]
We propose a content adaptive and error propagation aware video compression system.
Our method employs a joint training strategy by considering the compression performance of multiple consecutive frames instead of a single frame.
Instead of using the hand-crafted coding modes in the traditional compression systems, we design an online encoder updating scheme in our system.
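A minimal sketch of the multi-frame joint objective this describes, assuming a generic learned codec callable; the `codec` interface, the rate weight, and the loop structure are assumptions, not the paper's code.

```python
# Accumulate rate-distortion across consecutive frames so that error
# propagation through the reconstruction chain is penalized during training.
def joint_rd_loss(codec, frames, lam=0.01):
    total, ref = 0.0, frames[0]
    for frame in frames[1:]:
        recon, bits = codec(frame, ref)            # codec returns (reconstruction, bit cost)
        total = total + ((recon - frame) ** 2).mean() + lam * bits
        ref = recon                                # reuse reconstruction as next reference
    return total
```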
arXiv Detail & Related papers (2020-03-25T09:04:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.