StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot
Learning
- URL: http://arxiv.org/abs/2302.09309v2
- Date: Mon, 8 May 2023 11:52:31 GMT
- Title: StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot
Learning
- Authors: Yuqian Fu, Yu Xie, Yanwei Fu, Yu-Gang Jiang
- Abstract summary: Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains.
We propose a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method.
Our method gradually becomes robust to visual styles, thus boosting generalization to novel target datasets.
- Score: 89.86971464234533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that
tackles few-shot learning across different domains. It aims at transferring
prior knowledge learned on the source dataset to novel target datasets. The
CD-FSL task is especially challenging due to the huge domain gap between
different datasets. Critically, such a domain gap actually stems from changes
in visual style, and wave-SAN empirically shows that spanning the style
distribution of the source data helps alleviate this issue. However, wave-SAN
simply swaps styles of two images. Such a vanilla operation makes the generated
styles ``real'' and ``easy'', which still fall into the original set of the
source styles. Thus, inspired by vanilla adversarial learning, a novel
model-agnostic meta Style Adversarial training (StyleAdv) method together with
a novel style adversarial attack method is proposed for CD-FSL. Particularly,
our style attack method synthesizes both ``virtual'' and ``hard'' adversarial
styles for model training. This is achieved by perturbing the original style
with the signed style gradients. By continually attacking styles and forcing
the model to recognize these challenging adversarial styles, our model
gradually becomes robust to visual styles, thus boosting its generalization
ability on novel target datasets. Besides the typical CNN-based backbone, we
also apply our StyleAdv method to a large-scale pretrained vision transformer.
Extensive experiments conducted on eight diverse target datasets show the
effectiveness of our method. Whether built upon ResNet or ViT, we achieve the
new state of the art for CD-FSL. Code is available at
https://github.com/lovelyqian/StyleAdv-CDFSL.
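
The core operation described in the abstract, perturbing a feature's style (its channel-wise mean and standard deviation, as in AdaIN) along the signed gradient of the task loss, can be sketched roughly as below. This is only an illustrative PyTorch sketch, not the authors' released code (see the repository linked above); the names `style_stats`, `style_fgsm_attack`, and `epsilon` are hypothetical, and the actual StyleAdv method wraps such an attack in a meta-training loop with further details given in the paper.

```python
# Minimal sketch of an FGSM-style attack on feature "styles" (channel-wise
# mean/std, as in AdaIN). NOT the authors' implementation; function and
# variable names here are made up for illustration only.
import torch
import torch.nn.functional as F


def style_stats(feat, eps=1e-6):
    """Channel-wise mean and std of a feature map of shape (B, C, H, W)."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = (feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()
    return mu, sigma


def style_fgsm_attack(feat, labels, classifier, epsilon=0.08):
    """Perturb the style (mu, sigma) of `feat` along the signed gradient of
    the task loss, then re-render the feature with the adversarial style.
    `classifier` is assumed to map a feature map to class logits."""
    mu, sigma = style_stats(feat)
    normalized = (feat - mu) / sigma  # style-removed "content" component

    # Make the style statistics leaf tensors so we can take their gradients.
    mu_adv = mu.detach().clone().requires_grad_(True)
    sigma_adv = sigma.detach().clone().requires_grad_(True)

    # Forward pass with the (so far unperturbed) style re-applied.
    styled = normalized.detach() * sigma_adv + mu_adv
    loss = F.cross_entropy(classifier(styled), labels)
    loss.backward()  # in practice, clear these grads before the real update

    # FGSM step on the style statistics: synthesizes "virtual", "hard" styles.
    with torch.no_grad():
        mu_new = mu + epsilon * mu_adv.grad.sign()
        sigma_new = sigma + epsilon * sigma_adv.grad.sign()

    # Feature re-rendered with the adversarial style, to be used in training.
    return normalized * sigma_new + mu_new
```

In a training loop, the returned feature would be fed through the remaining layers alongside the clean feature, so the model is forced to classify both the original and the adversarially styled versions.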
Related papers
- FISTNet: FusIon of STyle-path generative Networks for Facial Style Transfer [15.308837341075135]
StyleGAN methods tend to overfit, which results in the introduction of artifacts in the facial images.
We propose a FusIon of STyles (FIST) network for facial images that leverages pre-trained multipath style transfer networks.
arXiv Detail & Related papers (2023-07-18T07:20:31Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Style-Agnostic Reinforcement Learning [9.338454092492901]
We present a novel method of learning style-agnostic representation using both style transfer and adversarial learning.
Our method trains the actor with diverse image styles generated from an inherent adversarial style generator.
We verify that our method achieves competitive or better performance than state-of-the-art approaches on the Procgen and Distracting Control Suite benchmarks.
arXiv Detail & Related papers (2022-08-31T13:45:00Z)
- Adversarial Style Augmentation for Domain Generalized Urban-Scene Segmentation [120.96012935286913]
We propose a novel adversarial style augmentation approach, which can generate hard stylized images during training.
Experiments on two synthetic-to-real semantic segmentation benchmarks demonstrate that AdvStyle can significantly improve the model performance on unseen real domains.
arXiv Detail & Related papers (2022-07-11T14:01:25Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain Few-Shot Learning [95.78635058475439]
Cross-domain few-shot learning aims at transferring knowledge from general nature images to novel domain-specific target categories.
This paper studies the problem of CD-FSL by spanning the style distributions of the source dataset.
To make our model robust to visual styles, the source images are augmented by swapping the styles of their low-frequency components with each other.
arXiv Detail & Related papers (2022-03-15T05:36:41Z)
- 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer [66.48720190245616]
We propose a learning-based approach for style transfer between 3D objects.
The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes.
We extend our technique to implicitly learn the multimodal style distribution of the chosen domains.
arXiv Detail & Related papers (2020-11-26T16:59:12Z)