ACAE-REMIND for Online Continual Learning with Compressed Feature Replay
- URL: http://arxiv.org/abs/2105.08595v1
- Date: Tue, 18 May 2021 15:27:51 GMT
- Title: ACAE-REMIND for Online Continual Learning with Compressed Feature Replay
- Authors: Kai Wang, Luis Herranz, Joost van de Weijer
- Abstract summary: We propose an auxiliary classifier auto-encoder (ACAE) module for feature replay at intermediate layers with high compression rates.
The reduced memory footprint per image allows us to save more exemplars for replay.
In our experiments, we conduct task-agnostic evaluation under the online continual learning setting.
- Score: 47.73014647702813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online continual learning aims to learn from a non-IID stream of data from a
number of different tasks, where the learner is only allowed to consider data
once. Methods are typically allowed to use a limited buffer to store some of
the images in the stream. Recently, it was found that feature replay, where an
intermediate-layer representation of the image is stored (or generated), leads
to better results than image replay while requiring less memory. Quantized
exemplars can further reduce the memory usage. However, a drawback of these
methods is that they use a fixed (or very intransigent) backbone network. This
significantly limits the learning of representations that can discriminate
between all tasks. To address this problem, we propose an auxiliary classifier
auto-encoder (ACAE) module for feature replay at intermediate layers with high
compression rates. The reduced memory footprint per image allows us to save
more exemplars for replay. In our experiments, we conduct task-agnostic
evaluation under the online continual learning setting and achieve
state-of-the-art performance on the ImageNet-Subset, CIFAR100 and CIFAR10
datasets.
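The memory-saving idea behind compressed feature replay can be illustrated with a minimal sketch. The paper's actual compression uses a learned auxiliary classifier auto-encoder (ACAE) at an intermediate layer; the class below is a hypothetical stand-in that only demonstrates the quantized-exemplar aspect, storing each feature dimension as a single byte instead of a 4-8 byte float so the same buffer budget holds more exemplars.

```python
import random


class CompressedFeatureReplay:
    """Toy replay buffer storing feature vectors as 8-bit quantized bytes.

    This is an illustrative sketch only: the ACAE paper compresses
    intermediate-layer features with a learned auto-encoder, whereas
    here we use simple uniform 8-bit quantization per exemplar.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []  # entries: (label, lo, scale, quantized_bytes)

    def add(self, feature, label):
        # Uniform quantization: map [lo, hi] onto 0..255 (1 byte per dim).
        lo, hi = min(feature), max(feature)
        scale = (hi - lo) / 255 or 1.0  # guard against constant features
        q = bytes(round((v - lo) / scale) for v in feature)
        if len(self.buffer) >= self.capacity:
            # Evict a random exemplar when the fixed budget is full.
            self.buffer.pop(random.randrange(len(self.buffer)))
        self.buffer.append((label, lo, scale, q))

    def sample(self, k):
        # Dequantize a random batch of stored exemplars for replay.
        batch = random.sample(self.buffer, min(k, len(self.buffer)))
        return [([lo + b * scale for b in q], label)
                for label, lo, scale, q in batch]
```

A feature stored this way is reconstructed to within one quantization step (about 0.4% of its value range), which is typically negligible for replay while cutting per-exemplar memory roughly fourfold versus float32.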
Related papers
- Masked Autoencoders are Efficient Class Incremental Learners [64.90846899051164]
Class Incremental Learning (CIL) aims to sequentially learn new classes while avoiding catastrophic forgetting of previous knowledge.
We propose to use Masked Autoencoders (MAEs) as efficient learners for CIL.
arXiv Detail & Related papers (2023-08-24T02:49:30Z)
- Summarizing Stream Data for Memory-Constrained Online Continual Learning [17.40956484727636]
We propose to Summarize the knowledge from the Stream Data (SSD) into more informative samples by distilling the training characteristics of real images.
We demonstrate that with limited extra computational overhead, SSD provides more than 3% accuracy boost for sequential CIFAR-100 under extremely restricted memory buffer.
arXiv Detail & Related papers (2023-05-26T05:31:51Z)
- Improving Image Recognition by Retrieving from Web-Scale Image-Text Data [68.63453336523318]
We introduce an attention-based memory module, which learns the importance of each retrieved example from the memory.
Compared to existing approaches, our method removes the influence of the irrelevant retrieved examples, and retains those that are beneficial to the input query.
We show that it achieves state-of-the-art accuracies on the ImageNet-LT, Places-LT and WebVision datasets.
arXiv Detail & Related papers (2023-04-11T12:12:05Z)
- Recurrent Dynamic Embedding for Video Object Segmentation [54.52527157232795]
We propose a Recurrent Dynamic Embedding (RDE) to build a memory bank of constant size.
We propose an unbiased guidance loss during the training stage, which makes SAM more robust in long videos.
We also design a novel self-correction strategy so that the network can repair the embeddings of masks with different qualities in the memory bank.
arXiv Detail & Related papers (2022-05-08T02:24:43Z)
- An Empirical Study of Remote Sensing Pretraining [117.90699699469639]
We conduct an empirical study of remote sensing pretraining (RSP) on aerial images.
RSP can help deliver distinctive performances in scene recognition tasks.
RSP mitigates the data discrepancies of traditional ImageNet pretraining on RS images, but it may still suffer from task discrepancies.
arXiv Detail & Related papers (2022-04-06T13:38:11Z)
- Memory Replay with Data Compression for Continual Learning [80.95444077825852]
We propose memory replay with data compression to reduce the storage cost of old training samples.
We extensively validate this across several benchmarks of class-incremental learning and in a realistic scenario of object detection for autonomous driving.
arXiv Detail & Related papers (2022-02-14T10:26:23Z)
- Match What Matters: Generative Implicit Feature Replay for Continual Learning [0.0]
We propose GenIFeR (Generative Implicit Feature Replay) for class-incremental learning.
The main idea is to train a generative adversarial network (GAN) to generate images that contain realistic features.
We empirically show that GenIFeR is superior to both conventional generative image and feature replay.
arXiv Detail & Related papers (2021-06-09T19:29:41Z)
- EEC: Learning to Encode and Regenerate Images for Continual Learning [9.89901717499058]
We train autoencoders with Neural Style Transfer to encode and store images.
Reconstructed images from encoded episodes are replayed in order to avoid catastrophic forgetting.
Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets.
arXiv Detail & Related papers (2021-01-13T06:43:10Z)
- The Effectiveness of Memory Replay in Large Scale Continual Learning [42.67483945072039]
We study continual learning in the large scale setting where tasks in the input sequence are not limited to classification, and the outputs can be of high dimension.
Existing methods usually replay only the input-output pairs.
We propose to replay the activation of the intermediate layers in addition to the input-output pairs.
arXiv Detail & Related papers (2020-10-06T01:23:12Z)
- Storing Encoded Episodes as Concepts for Continual Learning [22.387008072671005]
Two main challenges faced by continual learning approaches are catastrophic forgetting and memory limitations on the storage of data.
We propose a cognitively-inspired approach which trains autoencoders with Neural Style Transfer to encode and store images.
Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets, while requiring 78% less storage space.
arXiv Detail & Related papers (2020-06-26T04:15:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.