PGADA: Perturbation-Guided Adversarial Alignment for Few-shot Learning
Under the Support-Query Shift
- URL: http://arxiv.org/abs/2205.03817v1
- Date: Sun, 8 May 2022 09:15:58 GMT
- Title: PGADA: Perturbation-Guided Adversarial Alignment for Few-shot Learning
Under the Support-Query Shift
- Authors: Siyang Jiang, Wei Ding, Hsi-Wen Chen, Ming-Syan Chen
- Abstract summary: Few-shot learning methods aim to embed the data into a low-dimensional embedding space and then classify unseen query data against the seen support set.
We find that small perturbations in the images can significantly misguide the optimal transportation and thus degrade model performance.
To relieve the misalignment, we first propose a novel adversarial data augmentation method, namely Perturbation-Guided Adversarial Alignment (PGADA).
- Score: 10.730615481992515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning methods aim to embed the data into a low-dimensional
embedding space and then classify unseen query data against the seen support
set. While these works assume that the support set and the query set lie in the
same embedding space, a distribution shift, i.e., the Support-Query Shift,
usually occurs between the two sets in the real world. Though
optimal transportation has shown convincing results in aligning different
distributions, we find that small perturbations in the images can
significantly misguide the optimal transportation and thus degrade model
performance. To relieve the misalignment, we first propose a novel adversarial
data augmentation method, namely Perturbation-Guided Adversarial Alignment
(PGADA), which generates the hard examples in a self-supervised manner. In
addition, we introduce Regularized Optimal Transportation to derive a smooth
optimal transportation plan. Extensive experiments on three benchmark datasets
show that our framework significantly outperforms eleven state-of-the-art
methods.
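The "Regularized Optimal Transportation" mentioned above is, in the usual formulation, entropy-regularized OT solved with Sinkhorn iterations, which yields a smoother (denser) transport plan than exact OT. Below is a minimal NumPy sketch of that standard technique, not the paper's actual implementation; the regularization strength `eps`, the iteration count, and the toy support/query embeddings are illustrative assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Entropy-regularized OT plan between histograms a, b under cost matrix C."""
    K = np.exp(-C / eps)                 # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iter):              # alternate projections onto the marginals
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan P = diag(u) K diag(v)

# Toy example: align 3 support embeddings with 3 slightly shifted query embeddings.
rng = np.random.default_rng(0)
S = rng.normal(size=(3, 2))                          # support embeddings
Q = S + 0.05 * rng.normal(size=(3, 2))               # shifted query embeddings
C = ((S[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # squared-distance cost
a = np.full(3, 1 / 3)
b = np.full(3, 1 / 3)
P = sinkhorn(a, b, C)
```

Larger `eps` spreads mass across more support-query pairs, which is what makes the plan robust to small perturbations in individual embeddings.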
Related papers
- OPUS: Occupancy Prediction Using a Sparse Set [64.60854562502523]
We present a framework to simultaneously predict occupied locations and classes using a set of learnable queries.
OPUS incorporates a suite of non-trivial strategies to enhance model performance.
Our lightest model achieves superior RayIoU on the Occ3D-nuScenes dataset at near 2x FPS, while our heaviest model surpasses previous best results by 6.1 RayIoU.
arXiv Detail & Related papers (2024-09-14T07:44:22Z) - Disentangled Representation Learning with Transmitted Information Bottleneck [57.22757813140418]
We present DisTIB (Transmitted Information Bottleneck for Disentangled representation learning), a novel objective that navigates the balance between information compression and preservation.
arXiv Detail & Related papers (2023-11-03T03:18:40Z) - Dual Adversarial Alignment for Realistic Support-Query Shift Few-shot
Learning [15.828113109152069]
Support-Query Shift Few-shot learning aims to classify unseen examples (query set) to labeled data (support set) based on the learned embedding in a low-dimensional space.
In this paper, we propose a novel but more difficult challenge, Realistic Support-Query Shift few-shot learning.
In addition, we propose a unified adversarial feature alignment method called DUal adversarial ALignment framework (DuaL) to relieve RSQS from two aspects, i.e., inter-domain bias and intra-domain variance.
arXiv Detail & Related papers (2023-09-05T09:50:31Z) - MADAv2: Advanced Multi-Anchor Based Active Domain Adaptation
Segmentation [98.09845149258972]
We introduce active sample selection to assist domain adaptation for the semantic segmentation task.
With only a little workload to manually annotate these samples, the distortion of the target-domain distribution can be effectively alleviated.
A powerful semi-supervised domain adaptation strategy is proposed to alleviate the long-tail distribution problem.
arXiv Detail & Related papers (2023-01-18T07:55:22Z) - Surgical Fine-Tuning Improves Adaptation to Distribution Shifts [114.17184775397067]
A common approach to transfer learning under distribution shift is to fine-tune the last few layers of a pre-trained model.
This paper shows that in such settings, selectively fine-tuning a subset of layers matches or outperforms commonly used fine-tuning approaches.
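Selective ("surgical") fine-tuning amounts to applying gradient updates only to a chosen subset of layers while freezing the rest. The sketch below illustrates that idea on a toy parameter dictionary in plain NumPy; the layer names, gradients, and learning rate are all hypothetical, and real frameworks implement freezing differently (e.g. via per-parameter gradient flags).

```python
import numpy as np

# Toy "model": a stack of layers stored as named weight matrices.
params = {f"layer{i}": np.ones((2, 2)) for i in range(4)}

def sgd_step(params, grads, lr, tune_only):
    """Apply an SGD update only to the layers selected for fine-tuning."""
    return {
        name: w - lr * grads[name] if name in tune_only else w  # freeze the rest
        for name, w in params.items()
    }

grads = {name: np.full((2, 2), 0.5) for name in params}
updated = sgd_step(params, grads, lr=0.1, tune_only={"layer3"})
```

Choosing which subset to tune (early layers for input-level shift, later layers for label-level shift) is the core decision the paper studies.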
arXiv Detail & Related papers (2022-10-20T17:59:15Z) - Tackling Long-Tailed Category Distribution Under Domain Shifts [50.21255304847395]
Existing approaches cannot handle the scenario where both a long-tailed category distribution and a domain shift exist.
We designed three novel core functional blocks including Distribution Calibrated Classification Loss, Visual-Semantic Mapping and Semantic-Similarity Guided Augmentation.
Two new datasets were proposed for this problem, named AWA2-LTS and ImageNet-LTS.
arXiv Detail & Related papers (2022-07-20T19:07:46Z) - Con$^{2}$DA: Simplifying Semi-supervised Domain Adaptation by Learning
Consistent and Contrastive Feature Representations [1.2891210250935146]
Con$^{2}$DA is a framework that extends recent advances in semi-supervised learning to the semi-supervised domain adaptation problem.
Our framework generates pairs of associated samples by performing data transformations to a given input.
We use different loss functions to enforce consistency between the feature representations of associated data pairs of samples.
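A consistency objective of the kind described above penalizes disagreement between the feature embeddings of two augmented views of the same input. The sketch below shows one common choice (mean-squared error) in NumPy; it is an illustrative assumption, not Con$^{2}$DA's exact loss, and the embeddings are toy values.

```python
import numpy as np

def consistency_loss(z1, z2):
    """Mean-squared distance between embeddings of two augmented views."""
    return float(np.mean((z1 - z2) ** 2))

z = np.array([1.0, 2.0, 3.0])          # embedding of one augmented view
noise = np.array([0.1, -0.1, 0.0])     # small change from a second augmentation
loss = consistency_loss(z, z + noise)  # small when the views stay consistent
```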
arXiv Detail & Related papers (2022-04-04T15:05:45Z) - Learning Under Adversarial and Interventional Shifts [36.183840774167756]
We propose a new formulation, RISe, for designing robust models against a set of distribution shifts.
We employ the distributionally robust optimization framework to optimize the resulting objective in both supervised and reinforcement learning settings.
arXiv Detail & Related papers (2021-03-29T20:10:51Z) - Stochastic Adversarial Gradient Embedding for Active Domain Adaptation [4.514832807541817]
Unsupervised Domain Adaptation (UDA) aims to bridge the gap between a source domain, where labelled data are available, and a target domain only represented with unlabelled data.
This paper addresses this problem by using active learning to annotate a small budget of target data.
We introduce Stochastic Adversarial Gradient Embedding (SAGE), a framework that makes a triple contribution to ADA.
arXiv Detail & Related papers (2020-12-03T11:28:32Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Representation Learning via Adversarially-Contrastive Optimal Transport [40.52344027750609]
We set the problem within the context of contrastive representation learning.
We propose a framework connecting Wasserstein GANs with a novel classifier.
Our results demonstrate competitive performance against challenging baselines.
arXiv Detail & Related papers (2020-07-11T19:46:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.