EEC: Learning to Encode and Regenerate Images for Continual Learning
- URL: http://arxiv.org/abs/2101.04904v2
- Date: Thu, 14 Jan 2021 09:16:24 GMT
- Title: EEC: Learning to Encode and Regenerate Images for Continual Learning
- Authors: Ali Ayub, Alan R. Wagner
- Abstract summary: We train autoencoders with Neural Style Transfer to encode and store images.
Reconstructed images from encoded episodes are replayed to avoid catastrophic forgetting.
Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets.
- Score: 9.89901717499058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The two main impediments to continual learning are catastrophic forgetting
and memory limitations on the storage of data. To cope with these challenges,
we propose a novel, cognitively-inspired approach which trains autoencoders
with Neural Style Transfer to encode and store images. During training on a new
task, reconstructed images from encoded episodes are replayed in order to avoid
catastrophic forgetting. The loss function for the reconstructed images is
weighted to reduce its effect during classifier training to cope with image
degradation. When the system runs out of memory, the encoded episodes are
converted into centroids and covariance matrices, which are used to generate
pseudo-images during classifier training, keeping classifier performance stable
while using less memory. Our approach increases classification accuracy by
13-17% over state-of-the-art methods on benchmark datasets, while requiring 78%
less storage space.
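Below is a minimal, illustrative sketch of the pseudo-rehearsal described in the abstract; the encoder/decoder modules, the covariance shrinkage, and the replay weight are assumptions rather than the authors' exact implementation.

```python
import torch

# Minimal sketch of EEC-style pseudo-rehearsal (illustrative, not the authors' code).
# `decoder` is the autoencoder's decoder; `latents_c` holds the stored encoded
# episodes of one class as an (N_c, D) tensor.

def summarize_class(latents_c):
    """Replace a class's stored codes with a centroid and covariance matrix."""
    mu = latents_c.mean(dim=0)
    cov = torch.cov(latents_c.T) + 1e-4 * torch.eye(latents_c.shape[1])
    return mu, cov

def sample_pseudo_images(mu, cov, decoder, n):
    """Sample pseudo-latents from N(mu, cov) and decode them into pseudo-images."""
    dist = torch.distributions.MultivariateNormal(mu, covariance_matrix=cov)
    return decoder(dist.sample((n,)))

def replay_weighted_loss(logits_new, y_new, logits_replay, y_replay, gamma=0.5):
    """Cross-entropy on current-task data plus a down-weighted term for replayed
    (reconstructed or pseudo) images, compensating for their degradation."""
    ce = torch.nn.functional.cross_entropy
    return ce(logits_new, y_new) + gamma * ce(logits_replay, y_replay)
```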
Related papers
- CALLIC: Content Adaptive Learning for Lossless Image Compression [64.47244912937204]
CALLIC sets a new state-of-the-art (SOTA) for learned lossless image compression.
We propose a content-aware autoregressive self-attention mechanism by leveraging convolutional gating operations.
During encoding, we decompose pre-trained layers, including depth-wise convolutions, using low-rank matrices, and then adapt the incremental weights to the test image via Rate-guided Progressive Fine-Tuning (RPFT).
RPFT fine-tunes on a gradually increasing set of patches, sorted in descending order of estimated entropy, optimizing the learning process and reducing adaptation time.
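As a hedged illustration of the low-rank, test-time adaptation described above (the linear-layer stand-in for the decomposed depth-wise convolutions, the rank, and the patch-fraction schedule are assumptions, not CALLIC's actual design):

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a frozen pre-trained layer receives a trainable
# low-rank increment, and test-time fine-tuning walks over image patches
# ranked by estimated entropy, from a small high-entropy subset to all patches.

class LowRankAdapter(nn.Module):
    """y = W0 x + B A x, with W0 frozen and only the increment (A, B) adapted."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base.requires_grad_(False)
        self.A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

def progressive_patch_schedule(patches, entropy_estimates, fractions=(0.25, 0.5, 1.0)):
    """Yield growing subsets of patches, highest estimated entropy first."""
    order = torch.argsort(entropy_estimates, descending=True)
    for frac in fractions:
        k = max(1, int(frac * len(order)))
        yield patches[order[:k]]
```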
arXiv Detail & Related papers (2024-12-23T10:41:18Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We have trained our model with a sophisticated semantic ensemble loss, integrating Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss.
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
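A rough sketch of how such an ensemble loss could be composed (the relative weights, the feature extractor `feat_fn`, and the adversarial critic `disc` are placeholders; the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

# Illustrative weighted sum of the four losses named above; weights, the
# perceptual feature extractor, and the adversarial critic are assumptions.

def charbonnier(x, y, eps=1e-3):
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def gram(feat):                                   # style statistics of a feature map
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def semantic_ensemble_loss(recon, target, feat_fn, disc, w=(1.0, 0.1, 0.05, 0.01)):
    fr, ft = feat_fn(recon), feat_fn(target)      # e.g. VGG-style features
    l_char  = charbonnier(recon, target)
    l_perc  = F.mse_loss(fr, ft)
    l_style = F.mse_loss(gram(fr), gram(ft))
    l_adv   = -disc(recon).mean()                 # generator-side adversarial term
    return w[0]*l_char + w[1]*l_perc + w[2]*l_style + w[3]*l_adv
```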
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
- How Much Training Data is Memorized in Overparameterized Autoencoders? An Inverse Problem Perspective on Memorization Evaluation [1.573034584191491]
We propose an inverse problem perspective for the study of memorization.
We use the trained autoencoder to implicitly define a regularizer for the particular training dataset that we aim to retrieve from.
We show that our method significantly outperforms previous memorization-evaluation methods that recover training data from autoencoders.
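One plausible reading of this inverse-problem setup, as a hedged sketch (the linear degradation operator `A`, the observation `y`, and the squared-residual regularizer are assumptions; the paper's objective may be formulated differently):

```python
import torch

# Sketch: recover a training image x from a degraded observation y = A x by
# using the trained autoencoder `ae` as an implicit regularizer pulling x
# toward the autoencoder's fixed points (assumed formulation).

def recover(y, A, ae, steps=500, lr=0.05, lam=1.0):
    x = torch.zeros(A.shape[1], requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        data_fit = torch.sum((A @ x - y) ** 2)        # fidelity to the observation
        ae_prior = torch.sum((x - ae(x)) ** 2)        # autoencoder-defined regularizer
        (data_fit + lam * ae_prior).backward()
        opt.step()
    return x.detach()
```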
arXiv Detail & Related papers (2023-10-04T15:36:33Z)
- Masked Autoencoders are Efficient Class Incremental Learners [64.90846899051164]
Class Incremental Learning (CIL) aims to sequentially learn new classes while avoiding catastrophic forgetting of previous knowledge.
We propose to use Masked Autoencoders (MAEs) as efficient learners for CIL.
arXiv Detail & Related papers (2023-08-24T02:49:30Z)
- SC-VAE: Sparse Coding-based Variational Autoencoder with Learned ISTA [0.6770292596301478]
We introduce a new VAE variant, termed sparse coding-based VAE with learned ISTA (SC-VAE), which integrates sparse coding within the variational autoencoder framework.
Experiments on two image datasets demonstrate that our model achieves improved image reconstruction results compared to state-of-the-art methods.
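The learned-ISTA component can be sketched roughly as below (the sizes, the number of unrolled iterations, and the threshold initialization are illustrative; how SC-VAE wires the sparse code into the VAE is not reproduced here):

```python
import torch
import torch.nn as nn

# Minimal learned-ISTA (LISTA) encoder of the kind SC-VAE builds on.

def soft_threshold(z, theta):
    return torch.sign(z) * torch.clamp(torch.abs(z) - theta, min=0.0)

class LISTAEncoder(nn.Module):
    """Unrolled ISTA: z_{t+1} = soft_threshold(W x + S z_t, theta)."""
    def __init__(self, in_dim, code_dim, n_iters=5):
        super().__init__()
        self.W = nn.Linear(in_dim, code_dim, bias=False)
        self.S = nn.Linear(code_dim, code_dim, bias=False)
        self.theta = nn.Parameter(torch.full((code_dim,), 0.1))
        self.n_iters = n_iters

    def forward(self, x):
        z = soft_threshold(self.W(x), self.theta)
        for _ in range(self.n_iters):
            z = soft_threshold(self.W(x) + self.S(z), self.theta)
        return z   # sparse code, decoded back to the image in SC-VAE
```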
arXiv Detail & Related papers (2023-03-29T13:18:33Z)
- Memory-Based Label-Text Tuning for Few-Shot Class-Incremental Learning [20.87638654650383]
We propose leveraging the label-text information by adopting a memory prompt.
The memory prompt can learn from new data sequentially while storing the previous knowledge.
Experiments show that our proposed method outperforms all prior state-of-the-art approaches.
arXiv Detail & Related papers (2022-07-03T13:15:45Z)
- Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning [73.24988226158497]
We consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL).
We propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation.
Our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks.
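A hedged sketch of what an importance-weighted feature-distillation term could look like (the per-channel importance vector and its normalization are assumptions, not the paper's exact scheme):

```python
import torch

# Penalize drift of the new model's intermediate features from the frozen old
# model's features, weighted per channel by an importance score (assumed scheme).

def importance_weighted_distillation(feat_new, feat_old, importance):
    w = importance / (importance.sum() + 1e-8)        # normalize per-channel weights
    diff = (feat_new - feat_old.detach()) ** 2        # (B, C, H, W) feature maps
    return (w.view(1, -1, 1, 1) * diff).mean()
```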
arXiv Detail & Related papers (2021-06-17T17:56:08Z)
- ACAE-REMIND for Online Continual Learning with Compressed Feature Replay [47.73014647702813]
We propose an auxiliary classifier auto-encoder (ACAE) module for feature replay at intermediate layers with high compression rates.
The reduced memory footprint per image allows us to save more exemplars for replay.
In our experiments, we conduct a task-agnostic evaluation under the online continual learning setting.
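A minimal sketch of such an auxiliary-classifier auto-encoder (the layer sizes, the single-linear-layer codec, and where it attaches in the backbone are assumptions):

```python
import torch
import torch.nn as nn

# Compress intermediate backbone features to small codes for storage; at replay
# time the decoder reconstructs features for rehearsal and the auxiliary head
# keeps the codes class-discriminative (illustrative layout).

class ACAE(nn.Module):
    def __init__(self, feat_dim=512, code_dim=32, n_classes=100):
        super().__init__()
        self.enc = nn.Linear(feat_dim, code_dim)     # high compression rate
        self.dec = nn.Linear(code_dim, feat_dim)
        self.aux = nn.Linear(code_dim, n_classes)    # auxiliary classifier head

    def forward(self, feat):
        code = self.enc(feat)
        return self.dec(code), self.aux(code)
```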
arXiv Detail & Related papers (2021-05-18T15:27:51Z)
- Storing Encoded Episodes as Concepts for Continual Learning [22.387008072671005]
Two main challenges faced by continual learning approaches are catastrophic forgetting and memory limitations on the storage of data.
We propose a cognitively-inspired approach which trains autoencoders with Neural Style Transfer to encode and store images.
Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets, while requiring 78% less storage space.
arXiv Detail & Related papers (2020-06-26T04:15:56Z)
- Memory-Efficient Incremental Learning Through Feature Adaptation [71.1449769528535]
We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes.
Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly.
Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks.
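A rough sketch of adapting stored descriptors when the backbone changes (the MLP adapter, its size, and the MSE objective are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Fit a small mapping from old-backbone features to new-backbone features on
# current-task images, then apply it to the stored descriptors of old classes.

adapter = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

def train_adapter(old_feats, new_feats, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(adapter.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(adapter(old_feats), new_feats)
        loss.backward()
        opt.step()
    return adapter

# Stored old-class descriptors can then be mapped: adapted = adapter(stored_feats)
```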
arXiv Detail & Related papers (2020-04-01T21:16:05Z)