Abstract: Neural networks are prone to catastrophic forgetting when trained
incrementally on different tasks. To prevent forgetting, most existing
methods retain a small subset of previously seen samples, which can then be
used for joint training with new tasks. While this is indeed effective, it may
not always be possible to store such samples, e.g., due to data protection
regulations. In these cases, one can instead employ generative models to create
artificial samples or features representing memories from previous tasks.
Following a similar direction, we propose GenIFeR (Generative Implicit Feature
Replay) for class-incremental learning. The main idea is to train a generative
adversarial network (GAN) to generate images that contain realistic features.
While the generator creates images at full resolution, the discriminator only
sees the corresponding features extracted by the continually trained
classifier. Since the classifier compresses raw images into features that are
actually relevant for classification, the GAN can match this target
distribution more accurately. At the same time, allowing the generator to
create full-resolution images has several benefits: in contrast to previous
approaches, the feature extractor of the classifier does not have to be frozen.
In addition, we can employ augmentations on generated images, which not only
boosts classification performance, but also mitigates discriminator overfitting
during GAN training. We empirically show that GenIFeR is superior to both
conventional generative image replay and generative feature replay. In
particular, we significantly outperform the state of the art in generative
replay for various settings on the CIFAR-100 and CUB-200 datasets.
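The core architectural idea (generator in image space, discriminator in the classifier's feature space) can be sketched as follows. This is a toy illustration of the wiring only, not the paper's method: the linear maps, shapes, and function names are invented stand-ins for the actual GAN generator, deep feature extractor, and discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)
Z_DIM, IMG_DIM, FEAT_DIM = 8, 32 * 32 * 3, 16  # hypothetical sizes

# Toy "generator": maps a latent code to a full-resolution image.
W_gen = rng.normal(scale=0.01, size=(Z_DIM, IMG_DIM))
def generator(z):
    return np.tanh(z @ W_gen)  # image values in [-1, 1]

# Toy "feature extractor" of the continually trained classifier.
W_feat = rng.normal(scale=0.01, size=(IMG_DIM, FEAT_DIM))
def extract_features(img):
    return np.maximum(img @ W_feat, 0.0)  # ReLU features

# Toy "discriminator": judges features, never raw pixels.
w_disc = rng.normal(scale=0.01, size=FEAT_DIM)
def discriminator(feat):
    return 1.0 / (1.0 + np.exp(-feat @ w_disc))  # real/fake score

z = rng.normal(size=Z_DIM)
fake_img = generator(z)                      # full-resolution output;
                                             # augmentations could be applied here
score = discriminator(extract_features(fake_img))

assert fake_img.shape == (IMG_DIM,)  # generator works in image space
assert 0.0 <= score <= 1.0           # discriminator only sees features
```

Because the generator outputs images rather than features, standard image augmentations can be inserted between `generator` and `extract_features`, and the feature extractor itself can keep training, as the abstract describes.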