Efficient Conditional GAN Transfer with Knowledge Propagation across
Classes
- URL: http://arxiv.org/abs/2102.06696v1
- Date: Fri, 12 Feb 2021 18:55:34 GMT
- Title: Efficient Conditional GAN Transfer with Knowledge Propagation across
Classes
- Authors: Mohamad Shahbazi, Zhiwu Huang, Danda Pani Paudel, Ajad Chhatkuli, Luc
Van Gool
- Abstract summary: cGANs provide new opportunities for knowledge transfer compared to the unconditional setup.
New classes may borrow knowledge from related old classes, or share knowledge among themselves to improve training.
The proposed GAN transfer method explicitly propagates knowledge from the old classes to the new classes.
- Score: 85.38369543858516
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative adversarial networks (GANs) have shown impressive results in both
unconditional and conditional image generation. Recent literature shows that a
GAN pre-trained on a different dataset can be transferred to improve image
generation from a small target dataset. The same, however, has not been well
studied in the case of conditional GANs (cGANs), which provide new
opportunities for knowledge transfer compared to the unconditional setup. In
particular, the new classes may borrow knowledge from the related old classes,
or share knowledge among themselves to improve the training. This motivates us
to study the problem of efficient conditional GAN transfer with knowledge
propagation across classes. To address this problem, we introduce a new GAN
transfer method to explicitly propagate the knowledge from the old classes to
the new classes. The key idea is to enforce the popularly used conditional
batch normalization (BN) to learn the class-specific information of the new
classes from that of the old classes, with implicit knowledge sharing among the
new ones. This allows for an efficient knowledge propagation from the old
classes to the new classes, with the BN parameters increasing linearly with the
number of new classes. The extensive evaluation demonstrates the clear
superiority of the proposed method over state-of-the-art competitors for
efficient conditional GAN transfer tasks. The code will be available at:
https://github.com/mshahbazi72/cGANTransfer
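
To make the key idea concrete, below is a minimal PyTorch-style sketch of a conditional BN layer whose new-class gains and biases are learned as combinations of frozen old-class parameters. This is an illustration of the general idea only, not the authors' implementation; the module name, the softmax-weighted combination, and the label layout are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KnowledgePropagationBN(nn.Module):
    """Conditional BN where each new class's gain/bias is a learned
    combination of the frozen old-class gains/biases (illustrative only)."""

    def __init__(self, num_features, num_old, num_new):
        super().__init__()
        self.num_old = num_old
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # Old-class affine parameters come from the pre-trained cGAN; frozen here.
        self.old_gamma = nn.Parameter(torch.ones(num_old, num_features), requires_grad=False)
        self.old_beta = nn.Parameter(torch.zeros(num_old, num_features), requires_grad=False)
        # One weight vector over old classes per new class, so the trainable
        # parameters grow linearly with the number of new classes.
        self.comb = nn.Parameter(torch.zeros(num_new, num_old))

    def forward(self, x, labels):
        # Old classes (label < num_old) use their own parameters;
        # new classes borrow via a softmax-weighted combination.
        gamma = self.old_gamma[labels.clamp(max=self.num_old - 1)].clone()
        beta = self.old_beta[labels.clamp(max=self.num_old - 1)].clone()
        new = labels >= self.num_old
        if new.any():
            w = F.softmax(self.comb[labels[new] - self.num_old], dim=-1)
            gamma[new] = w @ self.old_gamma  # propagated class-specific scale
            beta[new] = w @ self.old_beta    # propagated class-specific shift
        x = self.bn(x)
        return x * gamma[:, :, None, None] + beta[:, :, None, None]
```

In such a setup only `self.comb` would be trained during transfer while the old-class parameters stay frozen; the paper's exact parameterization may differ (for example, it may add per-class residual terms).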
Related papers
- Knowledge Adaptation Network for Few-Shot Class-Incremental Learning [23.90555521006653]
Few-shot class-incremental learning aims to incrementally recognize new classes using a few samples.
One effective approach to this challenge is to construct prototypical evolution classifiers.
Because representations for new classes are weak and biased, we argue such a strategy is suboptimal.
arXiv Detail & Related papers (2024-09-18T07:51:38Z)
- Versatile Incremental Learning: Towards Class and Domain-Agnostic Incremental Learning [16.318126586825734]
Incremental Learning (IL) aims to accumulate knowledge from sequential input tasks.
We consider a more challenging and realistic but under-explored IL scenario, named Versatile Incremental Learning (VIL).
We propose a simple yet effective IL framework, named Incremental with Shift cONtrol (ICON).
arXiv Detail & Related papers (2024-09-17T07:44:28Z)
- Taming the Tail in Class-Conditional GANs: Knowledge Sharing via Unconditional Training at Lower Resolutions [10.946446480162148]
GANs tend to favor classes with more samples, leading to the generation of low-quality and less diverse samples in tail classes.
We propose a straightforward yet effective method for knowledge sharing, allowing tail classes to borrow from the rich information of classes with more abundant training data.
Experiments on several long-tail benchmarks and GAN architectures demonstrate a significant improvement over existing methods in both the diversity and fidelity of the generated images.
arXiv Detail & Related papers (2024-02-26T23:03:00Z)
- Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF).
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features are sampled from old-class distributions combined with training images of the current session to optimize the prompt.
arXiv Detail & Related papers (2024-01-03T07:59:17Z)
- Bridged-GNN: Knowledge Bridge Learning for Effective Knowledge Transfer [65.42096702428347]
Graph Neural Networks (GNNs) aggregate information from neighboring nodes.
Knowledge Bridge Learning (KBL) learns a knowledge-enhanced posterior distribution for target domains.
Bridged-GNN includes an Adaptive Knowledge Retrieval module to build Bridged-Graph and a Graph Knowledge Transfer module.
arXiv Detail & Related papers (2023-08-18T12:14:51Z)
- Mutual Information-guided Knowledge Transfer for Novel Class Discovery [23.772336970389834]
We propose a principled and general method to transfer semantic knowledge between seen and unseen classes.
Our results show that the proposed method outperforms previous SOTA by a significant margin on several benchmarks.
arXiv Detail & Related papers (2022-06-24T03:52:25Z)
- Collapse by Conditioning: Training Class-conditional GANs with Limited Data [109.30895503994687]
We propose a training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning.
Our training strategy starts with an unconditional GAN and gradually injects conditional information into the generator and the objective function.
The proposed method for training cGANs with limited data results not only in stable training but also in generating high-quality images.
arXiv Detail & Related papers (2022-01-17T18:59:23Z)
- Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601]
We introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, and yet data-hungry, rare classes.
Experiments show that our approach can achieve significant performance boosts on rare classes while maintaining robust common class performance.
arXiv Detail & Related papers (2021-12-13T15:48:59Z)
- Partial Is Better Than All: Revisiting Fine-tuning Strategy for Few-shot Learning [76.98364915566292]
A common practice is to train a model on the base set first and then transfer to novel classes through fine-tuning.
We propose to transfer partial knowledge by freezing or fine-tuning particular layer(s) in the base model (a minimal sketch follows this list).
We conduct extensive experiments on CUB and mini-ImageNet to demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2021-02-08T03:27:05Z)
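
The "Partial Is Better Than All" entry above describes freezing some layers of a base model while fine-tuning others. Below is a minimal PyTorch sketch of that general recipe; the backbone, the choice of `layer4`, and the 5-class head are arbitrary placeholders, not the paper's actual selection.

```python
import torch
import torchvision

# Stand-in for a pre-trained "base model".
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

# Freeze all layers, then unfreeze only the block chosen for fine-tuning.
for p in model.parameters():
    p.requires_grad = False
for p in model.layer4.parameters():  # arbitrary choice for illustration
    p.requires_grad = True

# New classification head for the novel classes (trainable by default).
model.fc = torch.nn.Linear(model.fc.in_features, 5)  # 5 is a placeholder

# Optimize only the unfrozen parameters.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
```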