Conservative Generator, Progressive Discriminator: Coordination of
Adversaries in Few-shot Incremental Image Synthesis
- URL: http://arxiv.org/abs/2207.14491v1
- Date: Fri, 29 Jul 2022 06:00:29 GMT
- Title: Conservative Generator, Progressive Discriminator: Coordination of
Adversaries in Few-shot Incremental Image Synthesis
- Authors: Chaerin Kong and Nojun Kwak
- Abstract summary: We study the underrepresented task of generative incremental few-shot learning.
We propose a novel framework named ConPro that leverages the two-player nature of GANs.
We present experiments to validate the effectiveness of ConPro.
- Score: 34.27851973031995
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The capacity to learn incrementally from an online stream of data is an
envied trait of human learners, as deep neural networks typically suffer from
catastrophic forgetting and the stability-plasticity dilemma. Several works have
previously explored incremental few-shot learning, a task made more challenging
by its data constraints, mostly in the classification setting and with mild
success. In this work, we study the underrepresented task of generative
incremental few-shot learning. To effectively handle the inherent challenges of
incremental learning and few-shot learning, we propose a novel framework named
ConPro that leverages the two-player nature of GANs. Specifically, we design a
conservative generator that preserves past knowledge in a parameter- and
compute-efficient manner, and a progressive discriminator that learns to reason
about semantic distances between past- and present-task samples, minimizing
overfitting with few data points and pursuing good forward transfer. We present
experiments to validate the effectiveness of ConPro.
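The abstract describes the two components only at a high level. Below is a minimal PyTorch sketch of the general idea, assuming past knowledge is preserved by freezing the base generator and training tiny per-task scale-and-shift adapters, and assuming the discriminator carries an auxiliary head that scores semantic distance between a sample and a stored past-task prototype. Both choices are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ConservativeGenerator(nn.Module):
    """Frozen base generator plus tiny per-task scale-and-shift adapters.
    The adapter scheme is an illustrative assumption, not necessarily the
    mechanism used by ConPro."""

    def __init__(self, z_dim=128, hidden=256, out_dim=3 * 32 * 32):
        super().__init__()
        self.fc_in = nn.Linear(z_dim, hidden)
        self.fc_out = nn.Linear(hidden, out_dim)
        for p in self.parameters():            # freeze: preserve past knowledge
            p.requires_grad_(False)
        self.adapters = nn.ParameterDict()     # only 2*hidden new params per task

    def add_task(self, task_id: str):
        hidden = self.fc_in.out_features
        self.adapters[task_id] = nn.Parameter(
            torch.cat([torch.ones(hidden), torch.zeros(hidden)]))

    def forward(self, z, task_id: str):
        h = torch.relu(self.fc_in(z))
        scale, shift = self.adapters[task_id].chunk(2)
        h = h * scale + shift                  # the only trainable path
        return torch.tanh(self.fc_out(h))

class ProgressiveDiscriminator(nn.Module):
    """Standard real/fake head plus an auxiliary head that scores semantic
    distance between a sample and a prototype of past-task features."""

    def __init__(self, in_dim=3 * 32 * 32, hidden=256):
        super().__init__()
        self.feat = nn.Sequential(nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2))
        self.adv_head = nn.Linear(hidden, 1)        # real vs. fake
        self.dist_head = nn.Linear(2 * hidden, 1)   # past vs. present distance

    def forward(self, x, past_prototype):
        f = self.feat(x.flatten(1))
        paired = torch.cat([f, past_prototype.expand_as(f)], dim=1)
        return self.adv_head(f), self.dist_head(paired)
```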
Related papers
- Class incremental learning with probability dampening and cascaded gated classifier [4.285597067389559]
We propose a novel incremental regularisation approach called Margin Dampening and Cascaded Scaling.
The first combines a soft constraint and a knowledge distillation approach to preserve past knowledge while still allowing new patterns to be learned.
We empirically show that our approach performs well on multiple benchmarks against well-established baselines.
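As a rough illustration of this recipe (a soft constraint on the previous model's predictions combined with knowledge distillation), here is a generic distillation loss; the paper's actual margin-dampening term is more refined than this sketch:

```python
import torch.nn.functional as F

def kd_soft_constraint_loss(logits, old_logits, targets, T=2.0, lam=1.0):
    """Cross-entropy on the new task plus a temperature-scaled distillation
    term that softly preserves the previous model's predictions.
    Illustrative only; not the paper's exact margin-dampening objective."""
    ce = F.cross_entropy(logits, targets)
    kd = F.kl_div(
        F.log_softmax(logits / T, dim=1),
        F.softmax(old_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return ce + lam * kd
```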
arXiv Detail & Related papers (2024-02-02T09:33:07Z)
- Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning [64.14254712331116]
Non-exemplar class incremental learning aims to learn both the new and old tasks without accessing any training data from the past.
We propose a novel framework of fine-grained knowledge selection and restoration.
arXiv Detail & Related papers (2023-12-20T02:34:11Z)
- Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks [9.82510084910641]
We propose a new mechanism that takes effect at task boundaries, i.e., when one task finishes and another begins.
We evaluate the proposed methods on benchmark computer vision datasets including CIFAR10 and TinyImagenet.
arXiv Detail & Related papers (2023-11-02T22:00:23Z)
- Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
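The summary suggests a generator that produces label-conditioned, norm-bounded perturbations trained so that protected images become uninformative to train on. A hedged sketch of that general pattern follows; `PerturbGen`, its architecture, and the error-minimizing objective are illustrative assumptions rather than Segue's actual design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbGen(nn.Module):
    """Label-conditioned perturbation generator (illustrative). Side
    information (a true or pseudo label) is embedded and injected so the
    noise can be conditioned on class identity."""

    def __init__(self, num_classes=10, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.embed = nn.Embedding(num_classes, 16)
        self.net = nn.Sequential(
            nn.Conv2d(3 + 16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, y):
        e = self.embed(y)[:, :, None, None].expand(-1, -1, *x.shape[2:])
        return self.eps * self.net(torch.cat([x, e], dim=1))

def unlearnable_step(gen, surrogate, opt, x, y):
    """One error-minimizing update: train the generator so the perturbed
    images make a (fixed) surrogate classifier's loss small, i.e., the
    images carry nothing left to learn."""
    delta = gen(x, y)
    loss = F.cross_entropy(surrogate((x + delta).clamp(0, 1)), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```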
arXiv Detail & Related papers (2023-10-24T06:22:37Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
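A generic penalty from this prior-based family (EWC-style, pulling parameters toward their post-task values weighted by an importance term) can be sketched as follows; it illustrates the "constrain parameter growth" idea but is not BAdam's actual update rule:

```python
import torch

def prior_penalty(model, prior_means, precisions, lam=1.0):
    """Quadratic prior penalty: pull each parameter toward its value after
    previous tasks, weighted by a per-parameter importance/precision term."""
    loss = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in prior_means:
            loss = loss + (precisions[name] * (p - prior_means[name]) ** 2).sum()
    return lam * loss
```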
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in terms of accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning [14.462797749666992]
We propose a holistic approach to jointly learn the representation and class prototypes.
We propose a novel distillation loss that constrains class prototypes to maintain relative similarities as compared to new task data.
This method yields state-of-the-art performance in the task-incremental setting.
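One way to read that loss (as a sketch, not the paper's exact formulation): match the similarity distribution between incoming samples and the class prototypes under the old and new encoders, so the prototypes keep their relative geometry without replaying old data:

```python
import torch.nn.functional as F

def relation_distill_loss(feats_new, feats_old, prototypes, T=0.5):
    """Distill sample-prototype *relations*: the new encoder's similarity
    distribution over class prototypes is matched to the old encoder's.
    Illustrative of the idea, not the paper's exact loss."""
    protos = F.normalize(prototypes, dim=1)
    sim_new = F.normalize(feats_new, dim=1) @ protos.T
    sim_old = F.normalize(feats_old, dim=1) @ protos.T
    return F.kl_div(
        F.log_softmax(sim_new / T, dim=1),
        F.softmax(sim_old / T, dim=1),
        reduction="batchmean",
    )
```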
arXiv Detail & Related papers (2023-03-26T16:35:45Z)
- Learning Bayesian Sparse Networks with Full Experience Replay for Continual Learning [54.7584721943286]
Continual Learning (CL) methods aim to enable machine learning models to learn new tasks without catastrophic forgetting of those that have been previously mastered.
Existing CL approaches often keep a buffer of previously-seen samples, perform knowledge distillation, or use regularization techniques towards this goal.
We propose to activate and select only a sparse set of neurons for learning current and past tasks at any stage.
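A minimal stand-in for that idea is a k-winners-take-all layer, where each sample activates only its top-k hidden units and the rest stay silent; this is illustrative, not the paper's Bayesian sparsity mechanism:

```python
import torch
import torch.nn as nn

class KWinnersLinear(nn.Module):
    """Linear layer that keeps only the top-k activations per sample, so
    each task touches a sparse subset of neurons."""

    def __init__(self, in_dim, out_dim, k):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.k = k

    def forward(self, x):
        h = torch.relu(self.fc(x))
        topk = torch.topk(h, self.k, dim=1)
        mask = torch.zeros_like(h).scatter_(1, topk.indices, 1.0)
        return h * mask   # non-selected neurons contribute (and learn) nothing
```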
arXiv Detail & Related papers (2022-02-21T13:25:03Z)
- Adversarial Imitation Learning with Trajectorial Augmentation and Correction [61.924411952657756]
We introduce a novel augmentation method which preserves the success of the augmented trajectories.
We develop an adversarial data augmented imitation architecture to train an imitation agent using synthetic experts.
Experiments show that our data augmentation strategy can improve accuracy and convergence time of adversarial imitation.
arXiv Detail & Related papers (2021-03-25T14:49:32Z)
- ErGAN: Generative Adversarial Networks for Entity Resolution [8.576633582363202]
A major challenge in learning-based entity resolution is how to reduce the label cost for training.
We propose a novel deep learning method, called ErGAN, to address the challenge.
We have conducted extensive experiments to empirically verify the labeling and learning efficiency of ErGAN.
arXiv Detail & Related papers (2020-12-18T01:33:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.