Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain
Few-Shot Learning
- URL: http://arxiv.org/abs/2203.07656v1
- Date: Tue, 15 Mar 2022 05:36:41 GMT
- Title: Wave-SAN: Wavelet based Style Augmentation Network for Cross-Domain
Few-Shot Learning
- Authors: Yuqian Fu, Yu Xie, Yanwei Fu, Jingjing Chen, Yu-Gang Jiang
- Abstract summary: Cross-domain few-shot learning aims at transferring knowledge from general natural images to novel domain-specific target categories.
This paper studies the problem of CD-FSL by spanning the style distributions of the source dataset.
To make our model robust to visual styles, the source images are augmented by swapping the styles of their low-frequency components with each other.
- Score: 95.78635058475439
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous few-shot learning (FSL) works are mostly limited to natural
images of general concepts and categories. These works assume very high visual
similarity between the source and target classes. In contrast, the recently
proposed cross-domain few-shot learning (CD-FSL) aims at transferring knowledge
from general natural images of many labeled examples to novel domain-specific
target categories of only a few labeled examples. The key challenge of CD-FSL
lies in the huge data shift between source and target domains, which is
typically in the form of totally different visual styles. This makes it very
nontrivial to directly extend the classical FSL methods to address the CD-FSL
task. To this end, this paper studies the problem of CD-FSL by spanning the
style distributions of the source dataset. Particularly, wavelet transform is
introduced to enable the decomposition of visual representations into
low-frequency components, such as shape and style, and high-frequency
components, e.g., texture. To make our model robust to visual styles, the
source images are
augmented by swapping the styles of their low-frequency components with each
other. We propose a novel Style Augmentation (StyleAug) module to implement
this idea. Furthermore, we present a Self-Supervised Learning (SSL) module to
ensure the predictions of style-augmented images are semantically similar to
the unchanged ones. This avoids the potential semantic drift problem caused by
exchanging styles. Extensive experiments on two CD-FSL benchmarks show the
effectiveness of our method. Our codes and models will be released.
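To make the style-augmentation idea concrete, below is a minimal PyTorch sketch: a single-level Haar wavelet transform splits each image into a low-frequency band and high-frequency bands, the channel-wise mean/std (AdaIN-style) statistics of the low-frequency band are swapped between two source images, and the image is reconstructed; a simple consistency term stands in for the SSL module. The helper names (haar_dwt, swap_low_freq_style, consistency_loss), the choice of Haar wavelets, and the KL-based consistency loss are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def haar_dwt(x):
    """Single-level 2D Haar wavelet transform.
    x: (B, C, H, W) with even H and W.
    Returns the low-frequency band ll and the high-frequency bands (lh, hl, hh)."""
    a = x[:, :, 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[:, :, 0::2, 1::2]  # top-right
    c = x[:, :, 1::2, 0::2]  # bottom-left
    d = x[:, :, 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, (lh, hl, hh)

def haar_idwt(ll, highs):
    """Inverse of haar_dwt: reassemble the image from its four bands."""
    lh, hl, hh = highs
    a = (ll + lh + hl + hh) / 2
    b = (ll + lh - hl - hh) / 2
    c = (ll - lh + hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    B, C, H, W = ll.shape
    out = ll.new_zeros(B, C, 2 * H, 2 * W)
    out[:, :, 0::2, 0::2] = a
    out[:, :, 0::2, 1::2] = b
    out[:, :, 1::2, 0::2] = c
    out[:, :, 1::2, 1::2] = d
    return out

def swap_low_freq_style(x, y, eps=1e-5):
    """Give x the style of y: re-normalize the low-frequency band of x with the
    channel-wise mean/std of y's low-frequency band, keep x's high frequencies."""
    ll_x, highs_x = haar_dwt(x)
    ll_y, _ = haar_dwt(y)
    mu_x = ll_x.mean(dim=(2, 3), keepdim=True)
    std_x = ll_x.std(dim=(2, 3), keepdim=True) + eps
    mu_y = ll_y.mean(dim=(2, 3), keepdim=True)
    std_y = ll_y.std(dim=(2, 3), keepdim=True) + eps
    ll_aug = (ll_x - mu_x) / std_x * std_y + mu_y  # AdaIN-style re-normalization
    return haar_idwt(ll_aug, highs_x)

def consistency_loss(logits_orig, logits_aug):
    """One simple stand-in for the SSL module: keep predictions on the original
    and the style-augmented views close (KL divergence on softmax outputs)."""
    return F.kl_div(F.log_softmax(logits_aug, dim=1),
                    F.softmax(logits_orig, dim=1),
                    reduction="batchmean")

if __name__ == "__main__":
    x = torch.rand(4, 3, 84, 84)        # a mini-batch of source-domain images
    y = x[torch.randperm(x.size(0))]    # style donors: the same batch, shuffled
    x_aug = swap_low_freq_style(x, y)   # style-swapped but content-preserving views
    print(x_aug.shape)                  # torch.Size([4, 3, 84, 84])
```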
Related papers
- MoreStyle: Relax Low-frequency Constraint of Fourier-based Image Reconstruction in Generalizable Medical Image Segmentation [53.24011398381715]
We introduce a Plug-and-Play module for data augmentation called MoreStyle.
MoreStyle diversifies image styles by relaxing low-frequency constraints in Fourier space.
With the help of adversarial learning, MoreStyle pinpoints the most intricate style combinations within latent features.
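A minimal Fourier-space sketch of this kind of low-frequency style mixing is given after the related-papers list below.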
arXiv Detail & Related papers (2024-03-18T11:38:47Z)
- ESPT: A Self-Supervised Episodic Spatial Pretext Task for Improving Few-Shot Learning [16.859375666701]
We propose to augment the few-shot learning objective with a novel self-supervised Episodic Spatial Pretext Task (ESPT)
Our ESPT objective is defined as maximizing the local spatial relationship consistency between the original episode and the transformed one.
Our ESPT method achieves new state-of-the-art performance for few-shot image classification on three mainstay benchmark datasets.
arXiv Detail & Related papers (2023-04-26T04:52:08Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- StyleAdv: Meta Style Adversarial Training for Cross-Domain Few-Shot Learning [89.86971464234533]
Cross-Domain Few-Shot Learning (CD-FSL) is a recently emerging task that tackles few-shot learning across different domains.
We propose a novel model-agnostic meta Style Adversarial training (StyleAdv) method together with a novel style adversarial attack method.
Our method gradually becomes robust to visual styles, thus boosting the generalization ability on novel target datasets.
arXiv Detail & Related papers (2023-02-18T11:54:37Z)
- Semantic Cross Attention for Few-shot Learning [9.529264466445236]
We propose a multi-task learning approach that treats the semantic features of label text as an auxiliary task.
Our proposed model uses word-embedding representations as semantic features to help train the embedding network, and a semantic cross-attention module to bridge the semantic features into the typical visual modality.
arXiv Detail & Related papers (2022-10-12T15:24:59Z)
- Semantic decoupled representation learning for remote sensing image change detection [17.548248093344576]
We propose a semantic decoupled representation learning method for RS image CD.
We disentangle representations of different semantic regions by leveraging the semantic mask.
We additionally force the model to distinguish different semantic representations, which benefits the recognition of objects of interest in the downstream CD task.
arXiv Detail & Related papers (2022-01-15T07:35:26Z)
- Interventional Few-Shot Learning [88.31112565383457]
We propose a novel Few-Shot Learning paradigm: Interventional Few-Shot Learning.
Code is released at https://github.com/yue-zhongqi/ifsl.
arXiv Detail & Related papers (2020-09-28T01:16:54Z)
- AdarGCN: Adaptive Aggregation GCN for Few-Shot Learning [112.95742995816367]
We propose a new few-shot few-shot learning setting, termed FSFSL.
Under FSFSL, both the source and target classes have limited training samples.
We also propose a graph convolutional network (GCN)-based label denoising (LDN) method to remove irrelevant images.
arXiv Detail & Related papers (2020-02-28T10:34:36Z)
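For comparison with the wavelet-based swap sketched under the abstract, the MoreStyle entry above operates in the Fourier domain instead. Below is a generic, hedged sketch of Fourier-space low-frequency style augmentation: the low-frequency amplitude spectra of two images are blended while the phase (content) of the first image is kept. The function name fourier_low_freq_style_mix and the fixed beta/alpha parameters are illustrative assumptions; MoreStyle itself additionally uses adversarial learning to search for hard style combinations, which is not reproduced here.

```python
import torch

def fourier_low_freq_style_mix(x, y, beta=0.1, alpha=0.5):
    """Blend the low-frequency amplitude spectrum of x with that of y while
    keeping x's phase (content). x, y: (B, C, H, W) real images.
    beta sets the relative size of the low-frequency window, alpha the mix weight."""
    fx = torch.fft.fft2(x, dim=(-2, -1))
    fy = torch.fft.fft2(y, dim=(-2, -1))
    amp_x, pha_x = fx.abs(), fx.angle()
    amp_y = fy.abs()

    # Shift spectra so that low frequencies sit at the center of the grid.
    amp_x = torch.fft.fftshift(amp_x, dim=(-2, -1))
    amp_y = torch.fft.fftshift(amp_y, dim=(-2, -1))

    B, C, H, W = x.shape
    h, w = int(H * beta), int(W * beta)
    cy, cx = H // 2, W // 2
    # Blend only the central (low-frequency) block of the amplitude spectrum.
    amp_x[..., cy - h:cy + h, cx - w:cx + w] = (
        (1 - alpha) * amp_x[..., cy - h:cy + h, cx - w:cx + w]
        + alpha * amp_y[..., cy - h:cy + h, cx - w:cx + w]
    )
    amp_x = torch.fft.ifftshift(amp_x, dim=(-2, -1))

    # Recombine the mixed amplitude with the original phase and invert the FFT.
    mixed = torch.polar(amp_x, pha_x)
    return torch.fft.ifft2(mixed, dim=(-2, -1)).real

if __name__ == "__main__":
    x = torch.rand(2, 3, 224, 224)
    y = torch.rand(2, 3, 224, 224)
    print(fourier_low_freq_style_mix(x, y).shape)  # torch.Size([2, 3, 224, 224])
```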