Match What Matters: Generative Implicit Feature Replay for Continual
Learning
- URL: http://arxiv.org/abs/2106.05350v1
- Date: Wed, 9 Jun 2021 19:29:41 GMT
- Title: Match What Matters: Generative Implicit Feature Replay for Continual
Learning
- Authors: Kevin Thandiackal (1 and 2), Tiziano Portenier (2), Andrea Giovannini
(1), Maria Gabrani (1), Orcun Goksel (2 and 3) ((1) IBM Research Europe, (2)
ETH Zurich, (3) Uppsala University)
- Abstract summary: We propose GenIFeR (Generative Implicit Feature Replay) for class-incremental learning.
The main idea is to train a generative adversarial network (GAN) to generate images that contain realistic features.
We empirically show that GenIFeR is superior to both conventional generative image and feature replay.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural networks are prone to catastrophic forgetting when trained
incrementally on different tasks. In order to prevent forgetting, most existing
methods retain a small subset of previously seen samples, which in turn can be
used for joint training with new tasks. While this is indeed effective, it may
not always be possible to store such samples, e.g., due to data protection
regulations. In these cases, one can instead employ generative models to create
artificial samples or features representing memories from previous tasks.
Following a similar direction, we propose GenIFeR (Generative Implicit Feature
Replay) for class-incremental learning. The main idea is to train a generative
adversarial network (GAN) to generate images that contain realistic features.
While the generator creates images at full resolution, the discriminator only
sees the corresponding features extracted by the continually trained
classifier. Since the classifier compresses raw images into features that are
actually relevant for classification, the GAN can match this target
distribution more accurately. On the other hand, allowing the generator to
create full resolution images has several benefits: In contrast to previous
approaches, the feature extractor of the classifier does not have to be frozen.
In addition, we can employ augmentations on generated images, which not only
boosts classification performance, but also mitigates discriminator overfitting
during GAN training. We empirically show that GenIFeR is superior to both
conventional generative image and feature replay. In particular, we
significantly outperform the state-of-the-art in generative replay for various
settings on the CIFAR-100 and CUB-200 datasets.
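The mechanism described in the abstract can be illustrated with a short sketch: the generator produces full-resolution images, while both real and generated images are passed through the classifier's feature extractor before reaching the discriminator. The PyTorch snippet below is a minimal illustration only; the toy architectures, losses, and hyperparameters are assumptions, and the classifier's own continual training and the augmentation pipeline are simplified or omitted.

```python
# Minimal sketch of implicit feature replay as described in the abstract:
# the generator outputs full-resolution images, but the discriminator only
# ever sees features extracted by the classifier. All architectures, losses,
# and hyperparameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.Upsample(scale_factor=4),                  # 8x8 -> 32x32
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)                                # full-resolution image

# Stand-in for the continually trained classifier's feature extractor;
# unlike prior feature-replay methods, it is not required to be frozen.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),                # -> 64-d features
)
discriminator = nn.Sequential(                            # sees features only
    nn.Linear(64, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1),
)

generator = Generator()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, 3, 32, 32) * 2 - 1                  # placeholder batch
z = torch.randn(16, 128)

# Discriminator step: real vs. generated *features*, not raw pixels.
with torch.no_grad():
    f_real = feature_extractor(real)
    f_fake = feature_extractor(generator(z))
loss_d = (bce(discriminator(f_real), torch.ones(16, 1))
          + bce(discriminator(f_fake), torch.zeros(16, 1)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator in feature space. Because images
# exist at full resolution, augmentations could be applied to generator(z)
# here, before feature extraction (omitted for brevity).
f_fake = feature_extractor(generator(z))
loss_g = bce(discriminator(f_fake), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Because the discriminator never sees raw pixels, the generator only has to match the feature distribution that is actually relevant for classification, which is the advantage the abstract describes.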
Related papers
- Data-Independent Operator: A Training-Free Artifact Representation
Extractor for Generalizable Deepfake Detection [105.9932053078449]
In this work, we show that, on the contrary, a small and training-free filter is sufficient to capture more general artifact representations.
Because it is unbiased towards both the training and test sources, we define it as the Data-Independent Operator (DIO) and achieve appealing improvements on unseen sources.
Our detector achieves a remarkable improvement of 13.3%, establishing a new state-of-the-art performance (see the illustrative sketch after this entry).
arXiv Detail & Related papers (2024-03-11T15:22:28Z)
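The DIO summary above does not specify the operator itself, so the sketch below substitutes a fixed Laplacian high-pass filter as a hypothetical stand-in for a small, training-free artifact extractor; the filtered residuals would then feed any downstream deepfake classifier.

```python
# Hypothetical training-free artifact extractor: a fixed 3x3 Laplacian
# kernel replicated per RGB channel and applied as a depthwise convolution.
import torch
import torch.nn.functional as F

laplacian = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3).repeat(3, 1, 1, 1)

def artifact_representation(images: torch.Tensor) -> torch.Tensor:
    """Apply the same fixed high-pass filter to each channel (no training)."""
    return F.conv2d(images, laplacian, padding=1, groups=3)

# The residual would be the input to any downstream deepfake classifier.
residual = artifact_representation(torch.rand(4, 3, 224, 224))
print(residual.shape)  # torch.Size([4, 3, 224, 224])
```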
- Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models [68.73086826874733]
We introduce a novel Referring Diffusional segmentor (Ref-Diff) for referring image segmentation.
We demonstrate that without a proposal generator, a generative model alone can achieve comparable performance to existing SOTA weakly-supervised models.
This indicates that generative models are also beneficial for this task and can complement discriminative models for better referring segmentation.
arXiv Detail & Related papers (2023-08-31T14:55:30Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the gradients semantics-aware, enabling the synthesis of plausible images.
We show that our method is also applicable to text-to-image generation by building on image-text foundation models (see the illustrative sketch after this entry).
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
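A minimal sketch of the idea behind the entry above: an image is synthesized by gradient ascent on a classifier's class logit. The paper's mask-based reconstruction module is not reproduced, and the toy classifier and optimization settings are assumptions.

```python
# Toy illustration of treating a classifier as a generator: optimize the
# input image to maximize a target class logit.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

image = torch.randn(1, 3, 32, 32, requires_grad=True)  # start from noise
opt = torch.optim.Adam([image], lr=0.05)
target_class = 3

for step in range(200):
    opt.zero_grad()
    logits = classifier(image)
    # Ascend the target logit (by descending its negation); a real method
    # would add regularizers and semantics-aware masking for plausibility.
    loss = -logits[0, target_class]
    loss.backward()
    opt.step()

print(classifier(image).argmax(dim=1))  # typically tensor([3]) afterwards
```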
- Assessing Neural Network Robustness via Adversarial Pivotal Tuning [24.329515700515806]
We show how a pretrained image generator can be used to semantically manipulate images in a detailed, diverse, and photorealistic way.
Inspired by recent GAN-based image inversion methods, we propose a method called Adversarial Pivotal Tuning (APT).
We demonstrate that APT is capable of a wide range of class-preserving semantic image manipulations that fool a variety of pretrained classifiers.
arXiv Detail & Related papers (2022-11-17T18:54:35Z)
- Reinforcing Generated Images via Meta-learning for One-Shot Fine-Grained Visual Recognition [36.02360322125622]
We propose a meta-learning framework to combine generated images with original images, so that the resulting "hybrid" training images improve one-shot learning.
Our experiments demonstrate consistent improvement over baselines on one-shot fine-grained image classification benchmarks.
arXiv Detail & Related papers (2022-04-22T13:11:05Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars (see the illustrative sketch after this entry).
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
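A minimal sketch of the ensembling step from the entry above: classifier predictions are averaged over several "views" of an input image. A real pipeline would obtain the views from StyleGAN2; the `make_views` helper here is a hypothetical stand-in.

```python
# Illustrative ensembling over generative views: average softmax predictions
# across several synthesized variants of one input image.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Flatten(), nn.Linear(3 * 32 * 32, 10),  # toy stand-in classifier
)

def make_views(image: torch.Tensor, n_views: int) -> torch.Tensor:
    """Hypothetical stand-in: jittered copies instead of GAN-generated views."""
    return image + 0.05 * torch.randn(n_views, *image.shape[1:])

image = torch.rand(1, 3, 32, 32)
views = torch.cat([image, make_views(image, n_views=7)], dim=0)

with torch.no_grad():
    probs = classifier(views).softmax(dim=1)   # (8, 10) per-view predictions
ensembled = probs.mean(dim=0)                  # average over all views
print(ensembled.argmax().item())
```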
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with a marginal drop in performance on the original classification task.
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
- Generative Hierarchical Features from Synthesizing Images [65.66756821069124]
We show that learning to synthesize images can bring remarkable hierarchical visual features that are generalizable across a wide range of applications.
The visual feature produced by our encoder, termed Generative Hierarchical Feature (GH-Feat), has strong transferability to both generative and discriminative tasks.
arXiv Detail & Related papers (2020-07-20T18:04:14Z)
- Generative Feature Replay For Class-Incremental Learning [46.88667212214957]
We consider a class-incremental setting, which means that the task ID is unknown at inference time.
The imbalance between old and new classes typically results in a bias of the network towards the newest ones.
We propose a solution based on generative feature replay which does not require any exemplars (see the illustrative sketch after this entry).
arXiv Detail & Related papers (2020-04-20T10:58:20Z)
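For contrast with GenIFeR's image-space generator, the sketch below illustrates feature-level replay as described in the entry above: a class-conditional generator emits feature vectors for old classes, which are mixed with new-task features to train the classifier head without stored exemplars. Shapes, architectures, and the (omitted) training of the feature generator are assumptions.

```python
# Minimal sketch of generative feature replay: fake features for old classes
# are replayed alongside real features for new classes. The feature
# generator's own (GAN) training against real old-class features is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, z_dim, n_old, n_new = 64, 32, 5, 5

# Class-conditional feature generator: noise + one-hot label -> feature.
feature_generator = nn.Sequential(
    nn.Linear(z_dim + n_old, 128), nn.ReLU(), nn.Linear(128, feat_dim),
)
classifier_head = nn.Linear(feat_dim, n_old + n_new)
opt = torch.optim.Adam(classifier_head.parameters(), lr=1e-3)

def sample_old_features(n: int):
    labels = torch.randint(0, n_old, (n,))
    cond = torch.cat([torch.randn(n, z_dim),
                      F.one_hot(labels, n_old).float()], dim=1)
    with torch.no_grad():
        return feature_generator(cond), labels

# New-task batch: in practice these come from the classifier's extractor.
new_feats = torch.randn(16, feat_dim)
new_labels = torch.randint(n_old, n_old + n_new, (16,))
old_feats, old_labels = sample_old_features(16)

# Joint update balances old and new classes without storing any raw data.
loss = (F.cross_entropy(classifier_head(new_feats), new_labels)
        + F.cross_entropy(classifier_head(old_feats), old_labels))
opt.zero_grad(); loss.backward(); opt.step()
```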