VITA: A Multi-Source Vicinal Transfer Augmentation Method for
Out-of-Distribution Generalization
- URL: http://arxiv.org/abs/2204.11531v1
- Date: Mon, 25 Apr 2022 09:47:51 GMT
- Authors: Minghui Chen, Cheng Wen, Feng Zheng, Fengxiang He, Ling Shao
- Abstract summary: We propose a multi-source vicinal transfer augmentation (VITA) method for generating diverse on-manifold samples.
The proposed VITA consists of two complementary parts: tangent transfer and integration of multi-source vicinal samples.
The proposed VITA significantly outperforms current state-of-the-art augmentation methods, as demonstrated by extensive experiments on corruption benchmarks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Invariance to diverse types of image corruption, such as noise, blurring, or
colour shifts, is essential to establish robust models in computer vision. Data
augmentation has been the major approach in improving the robustness against
common corruptions. However, the samples produced by popular augmentation
strategies deviate significantly from the underlying data manifold. As a
result, performance is skewed toward certain types of corruption. To address
this issue, we propose a multi-source vicinal transfer augmentation (VITA)
method for generating diverse on-manifold samples. The proposed VITA consists
of two complementary parts: tangent transfer and integration of multi-source
vicinal samples. The tangent transfer creates initial augmented samples for
improving corruption robustness. The integration employs a generative model to
characterize the underlying manifold built by vicinal samples, facilitating the
generation of on-manifold samples. Our proposed VITA significantly outperforms the current state-of-the-art augmentation methods, as demonstrated by extensive experiments on corruption benchmarks.
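The term "vicinal" comes from vicinal risk minimization, in which each training point is replaced by a distribution over its neighbourhood. As a rough, hypothetical illustration of that general idea (not the authors' VITA pipeline, which combines tangent transfer with a generative model), the sketch below draws mixup-style vicinal samples by convexly interpolating pairs of labelled examples; the function name and the Beta(0.2, 0.2) mixing prior are illustrative assumptions.

```python
# Minimal sketch of vicinal sample generation via mixup-style interpolation.
# This illustrates the general "vicinal" idea only; it is NOT the VITA
# method from the paper, and all names/parameters here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def vicinal_sample(x1, y1, x2, y2, alpha=0.2):
    """Draw one vicinal sample by convexly mixing two labelled examples.

    lam ~ Beta(alpha, alpha) with small alpha concentrates mass near 0 and 1,
    so the mixed point stays in the vicinity of one of the two originals.
    """
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

# Toy usage: mix two random "images" with one-hot labels.
x_a, x_b = rng.random((32, 32, 3)), rng.random((32, 32, 3))
y_a, y_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_mix, y_mix = vicinal_sample(x_a, y_a, x_b, y_b)
print(x_mix.shape, y_mix)  # (32, 32, 3), soft label [lam, 1 - lam]
```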
Related papers
- Controlling the Fidelity and Diversity of Deep Generative Models via Pseudo Density
We introduce an approach to bias deep generative models, such as GANs and diffusion models, towards generating data with enhanced fidelity or increased diversity.
Our approach involves manipulating the distribution of training and generated data through a novel metric for individual samples, named pseudo density.
arXiv Detail & Related papers (2024-07-11T16:46:04Z)
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- MTS-DVGAN: Anomaly Detection in Cyber-Physical Systems using a Dual Variational Generative Adversarial Network
Deep generative models are promising in detecting novel cyber-physical attacks, mitigating the vulnerability of cyber-physical systems (CPSs) without relying on labeled information.
This article proposes a novel unsupervised dual variational generative adversarial model named MTS-DVGAN.
The central concept is to enhance the model's discriminative capability by widening the distinction between reconstructed abnormal samples and their normal counterparts.
arXiv Detail & Related papers (2023-11-04T11:19:03Z)
- Exploring the Robustness of Human Parsers Towards Common Corruptions
We construct three corruption robustness benchmarks, termed LIP-C, ATR-C, and Pascal-Person-Part-C, to assist us in evaluating the risk tolerance of human parsing models.
Inspired by the data augmentation strategy, we propose a novel heterogeneous augmentation-enhanced mechanism to bolster robustness under commonly corrupted conditions.
arXiv Detail & Related papers (2023-09-02T13:32:14Z)
- Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability
We propose a novel framework dubbed Diffusion-Based Projected Gradient Descent (Diff-PGD) for generating realistic adversarial samples.
Our framework can be easily customized for specific tasks such as digital attacks, physical-world attacks, and style-based attacks.
arXiv Detail & Related papers (2023-05-25T21:51:23Z)
- Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples
The transferability of adversarial examples across deep neural networks is the crux of many black-box attacks.
We advocate attacking a Bayesian model to achieve desirable transferability.
Our method outperforms recent state-of-the-art methods by large margins.
arXiv Detail & Related papers (2023-02-10T07:08:13Z)
- Few-shot Image Generation with Diffusion Models
Denoising diffusion probabilistic models (DDPMs) have been proven capable of synthesizing high-quality images with remarkable diversity when trained on large amounts of data.
Modern approaches are mainly built on Generative Adversarial Networks (GANs) and adapt models pre-trained on large source domains to target domains using a few available samples.
In this paper, we make the first attempt to study when DDPMs overfit and suffer severe diversity degradation as training data becomes scarce.
arXiv Detail & Related papers (2022-11-07T02:18:27Z)
- A Closer Look at Few-shot Image Generation
When transferring pretrained GANs to small target datasets, the generator tends to replicate the training samples.
Several methods have been proposed to address this few-shot image generation problem, but there has been little effort to analyze them under a unified framework.
Our first contribution is a framework for analyzing existing methods during adaptation.
Our second contribution proposes applying mutual information (MI) maximization to retain the source domain's rich multi-level diversity information in the target domain generator.
arXiv Detail & Related papers (2022-05-08T07:46:26Z)
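Several of the papers above (VITA itself, and the human-parser robustness work with LIP-C, ATR-C, and Pascal-Person-Part-C) evaluate on common-corruption benchmarks, which score a model by its average error over a grid of corruption types and severity levels. The sketch below is a hedged illustration of that evaluation loop; the two toy corruptions, the severity scales, and the `model(x) -> label` interface are assumptions made for illustration, not any specific benchmark's definition.

```python
# Hedged sketch of a corruption-robustness evaluation loop: average top-1
# error over (corruption, severity) pairs. The corruptions and severity
# scales below are illustrative assumptions, not a benchmark's spec.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(x, severity):
    sigma = [0.04, 0.06, 0.08, 0.10, 0.12][severity - 1]
    return np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0)

def pixelate(x, severity):
    k = [2, 3, 4, 6, 8][severity - 1]
    h, w = x.shape[:2]
    small = x[::k, ::k]  # naive downsample by striding
    return np.repeat(np.repeat(small, k, axis=0), k, axis=1)[:h, :w]

def corruption_error(model, images, labels, corruptions, severities=range(1, 6)):
    """Mean top-1 error over all (corruption, severity) pairs."""
    errors = []
    for corrupt in corruptions:
        for s in severities:
            preds = [model(corrupt(x, s)) for x in images]
            errors.append(np.mean([p != y for p, y in zip(preds, labels)]))
    return float(np.mean(errors))

# Toy usage with a trivial threshold "model" on random images.
images = [rng.random((32, 32, 3)) for _ in range(8)]
labels = [int(x.mean() > 0.5) for x in images]
model = lambda x: int(x.mean() > 0.5)
print(corruption_error(model, images, labels, [gaussian_noise, pixelate]))
```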
This list is automatically generated from the titles and abstracts of the papers on this site.