Memory-Free Generative Replay For Class-Incremental Learning
- URL: http://arxiv.org/abs/2109.00328v1
- Date: Wed, 1 Sep 2021 12:19:54 GMT
- Title: Memory-Free Generative Replay For Class-Incremental Learning
- Authors: Xiaomeng Xin, Yiran Zhong, Yunzhong Hou, Jinjun Wang, Liang Zheng
- Abstract summary: We propose a memory-free generative replay strategy to preserve the characteristics of fine-grained old classes.
Our method is best complemented by prior regularization-based methods, which have proven effective for easily distinguishable old classes.
- Score: 32.39857105540859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Regularization-based methods are beneficial to alleviate the catastrophic
forgetting problem in class-incremental learning. With the absence of old task
images, they often assume that old knowledge is well preserved if the
classifier produces similar output on new images. In this paper, we find that
their effectiveness largely depends on the nature of old classes: they work
well on classes that are easily distinguishable from each other but may fail
on more fine-grained ones, e.g., boy and girl. In essence, such methods project
new data onto the feature space spanned by the weight vectors in the fully
connected layer corresponding to old classes. The resulting projections would
be similar on fine-grained old classes, and as a consequence the new classifier
will gradually lose the discriminative ability on these classes. To address
this issue, we propose a memory-free generative replay strategy that preserves
the characteristics of fine-grained old classes by generating representative
old images directly from the old classifier and combining them with new data
for new classifier training. To solve the homogenization problem of the
generated samples, we also propose a diversity loss that maximizes the
Kullback-Leibler (KL) divergence between generated samples. Our method is best
complemented by prior regularization-based methods, which have proven
effective for easily distinguishable
old classes. We validate the above design and insights on CUB-200-2011,
Caltech-101, CIFAR-100 and Tiny ImageNet and show that our strategy outperforms
existing memory-free methods by a clear margin. Code is available at
https://github.com/xmengxin/MFGR
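As a concrete illustration, here is a minimal PyTorch sketch of the two ingredients the abstract describes: a generator trained against the frozen old classifier, and a diversity loss that maximizes pairwise KL divergence between generated samples. This is not the authors' implementation (see the repository above); the generator architecture, the noise dimension, the loss weighting `lam`, and the training schedule are all assumptions.

```python
import torch
import torch.nn.functional as F

def diversity_loss(logits):
    """Negative mean pairwise KL divergence between the predicted class
    distributions of generated samples; minimizing it pushes samples apart."""
    log_p = F.log_softmax(logits, dim=1)                     # (B, C)
    p = log_p.exp()
    # kl[i, j] = KL(p_i || p_j), computed for every pair in the batch
    kl = (p.unsqueeze(1) * (log_p.unsqueeze(1) - log_p.unsqueeze(0))).sum(-1)
    return -kl.mean()

def generator_step(generator, old_classifier, optimizer,
                   batch_size=64, z_dim=100, n_classes=100, lam=1.0):
    """One hypothetical training step: synthesize "old" images from noise and
    push the frozen old classifier to label them confidently, while the
    diversity term fights homogenization of the generated batch."""
    z = torch.randn(batch_size, z_dim)
    targets = torch.randint(0, n_classes, (batch_size,))
    fake_images = generator(z)                   # stand-in generator network
    logits = old_classifier(fake_images)         # old classifier stays frozen
    loss = F.cross_entropy(logits, targets) + lam * diversity_loss(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The generated images would then be mixed with new-task data to train the new classifier, as the abstract describes.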
Related papers
- Efficient Non-Exemplar Class-Incremental Learning with Retrospective Feature Synthesis [21.348252135252412]
Current Non-Exemplar Class-Incremental Learning (NECIL) methods mitigate forgetting by storing a single prototype per class.
We propose a more efficient NECIL method that replaces prototypes with synthesized retrospective features for old classes.
Our method significantly improves the efficiency of non-exemplar class-incremental learning and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-11-03T07:19:11Z)
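To make the feature-replay idea in the entry above concrete, here is a small PyTorch sketch that models each old class as a Gaussian in deep feature space and samples synthetic features from it. The paper's actual synthesis procedure is not described in this summary, so this is an illustrative stand-in rather than the proposed method.

```python
import torch

class GaussianFeatureReplay:
    """Illustrative stand-in: model each old class as a Gaussian in feature
    space and draw synthetic features from it instead of storing raw images
    or a single fixed prototype."""
    def __init__(self):
        self.stats = {}  # class_id -> (mean, std), recorded when class is learned

    def register(self, class_id, feats):
        # feats: (N, D) deep features of one class from the current backbone
        self.stats[class_id] = (feats.mean(dim=0), feats.std(dim=0))

    def sample(self, class_id, n):
        mean, std = self.stats[class_id]
        return mean + std * torch.randn(n, mean.numel())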
- PASS++: A Dual Bias Reduction Framework for Non-Exemplar Class-Incremental Learning [49.240408681098906]
Class-incremental learning (CIL) aims to recognize new classes incrementally while maintaining the discriminability of old classes.
Most existing CIL methods are exemplar-based, i.e., they store a part of the old data for retraining.
We present a simple and novel dual bias reduction framework that employs self-supervised transformation (SST) in input space and prototype augmentation (protoAug) in deep feature space.
arXiv Detail & Related papers (2024-07-19T05:03:16Z)
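The protoAug component from the entry above lends itself to a short sketch: stored old-class prototypes are perturbed with Gaussian noise in deep feature space and fed to the unified classifier head. This is a minimal PyTorch approximation; the noise scale `radius` and its estimation are assumptions, and the SST component is omitted.

```python
import torch
import torch.nn.functional as F

def proto_aug_loss(prototypes, proto_labels, classifier, radius, n=64):
    """Prototype augmentation (protoAug) sketch: perturb stored old-class
    prototypes with Gaussian noise in deep feature space, then train the
    unified classifier head on the perturbed features so old decision
    boundaries are maintained without storing any old images."""
    idx = torch.randint(0, prototypes.size(0), (n,))
    feats = prototypes[idx] + radius * torch.randn(n, prototypes.size(1))
    return F.cross_entropy(classifier(feats), proto_labels[idx])
```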
- Active Generalized Category Discovery [60.69060965936214]
Generalized Category Discovery (GCD) endeavors to cluster unlabeled samples from both novel and old classes.
Taking the spirit of active learning, we propose a new setting called Active Generalized Category Discovery (AGCD).
Our method achieves state-of-the-art performance on both generic and fine-grained datasets.
arXiv Detail & Related papers (2024-03-07T07:12:24Z)
- DiffusePast: Diffusion-based Generative Replay for Class Incremental Semantic Segmentation [73.54038780856554]
Class Incremental Semantic Segmentation (CISS) extends the traditional segmentation task by incrementally learning newly added classes.
Previous work has introduced generative replay, which involves replaying old class samples generated from a pre-trained GAN.
We propose DiffusePast, a novel framework featuring a diffusion-based generative replay module that generates semantically accurate images with more reliable masks guided by different instructions.
arXiv Detail & Related papers (2023-08-02T13:13:18Z)
- Non-exemplar Class-incremental Learning by Random Auxiliary Classes Augmentation and Mixed Features [37.51376572211081]
Non-exemplar class-incremental learning refers to classifying new and old classes without storing samples of old classes.
We propose an effective non-exemplar method called RAMF consisting of Random Auxiliary classes augmentation and Mixed Feature.
arXiv Detail & Related papers (2023-04-16T06:33:43Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
However, CIL tends to catastrophically forget the characteristics of former classes, and its performance degrades drastically.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- GistNet: a Geometric Structure Transfer Network for Long-Tailed Recognition [95.93760490301395]
Long-tailed recognition is a problem where the number of examples per class is highly unbalanced.
GistNet is proposed to support this goal, using constellations of classifier parameters to encode the class geometry.
A new learning algorithm is then proposed for GeometrIc Structure Transfer (GIST), using a combination of loss functions that mix class-balanced and random sampling to guarantee that, while overfitting to the popular classes is restricted to the geometric parameters, it is leveraged to transfer class geometry from popular to few-shot classes.
arXiv Detail & Related papers (2021-05-01T00:37:42Z)
- ClaRe: Practical Class Incremental Learning By Remembering Previous Class Representations [9.530976792843495]
Class Incremental Learning (CIL) aims to learn new concepts well, but not at the expense of performance and accuracy on old data.
ClaRe is an efficient solution for CIL by remembering the representations of learned classes in each increment.
ClaRe has a better generalization than prior methods thanks to producing diverse instances from the distribution of previously learned classes.
arXiv Detail & Related papers (2021-03-29T10:39:42Z)
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
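One plausible form of the weighted-Euclidean regularization from the entry above is sketched below in PyTorch; the exact weighting scheme is not given in this summary, so the per-dimension `weight` tensor is an assumption.

```python
import torch

def weighted_euclidean_reg(new_feat, old_feat, weight):
    """Hypothetical weighted-Euclidean preservation term: penalize drift of
    the new model's features away from the frozen old model's features,
    weighting dimensions by their presumed importance to old classes."""
    return (weight * (new_feat - old_feat).pow(2)).sum(dim=1).mean()

# Example usage with dummy tensors (256 samples, 512-d features):
new_feat = torch.randn(256, 512, requires_grad=True)
old_feat = torch.randn(256, 512)
weight = torch.rand(512)           # assumed per-dimension importance weights
loss = weighted_euclidean_reg(new_feat, old_feat, weight)
```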
- Memory-Efficient Incremental Learning Through Feature Adaptation [71.1449769528535]
We introduce an approach for incremental learning that preserves feature descriptors of training images from previously learned classes.
Keeping the much lower-dimensional feature embeddings of images reduces the memory footprint significantly.
Experimental results show that our method achieves state-of-the-art classification accuracy in incremental learning benchmarks.
arXiv Detail & Related papers (2020-04-01T21:16:05Z)
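A minimal PyTorch sketch of the feature-adaptation idea from the entry above: stored low-dimensional descriptors from the old feature space are mapped into the current feature space by a small learned network, so the raw images of old classes never need to be kept. The adapter architecture, the 512-d feature size, and the training details are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class FeatureAdapter(nn.Module):
    """Maps stored descriptors from the previous feature space into the
    current one, so old-class images never need to be kept."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, old_feats):
        return self.net(old_feats)

def adaptation_loss(adapter, old_feats, new_feats):
    # Trained on current-task images passed through both the old and the new
    # backbone; afterwards the adapter is applied to the stored descriptors
    # of previous classes to replay them in the new feature space.
    return (adapter(old_feats) - new_feats).pow(2).sum(dim=1).mean()
```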