On the Mechanisms of Adversarial Data Augmentation for Robust and Adaptive Transfer Learning
- URL: http://arxiv.org/abs/2505.12681v1
- Date: Mon, 19 May 2025 03:56:51 GMT
- Title: On the Mechanisms of Adversarial Data Augmentation for Robust and Adaptive Transfer Learning
- Authors: Hana Satou, Alan Mitkiy
- Abstract summary: We investigate the role of adversarial data augmentation (ADA) in enhancing both robustness and adaptivity in transfer learning settings. We propose a unified framework that integrates ADA with consistency regularization and domain-invariant representation learning. Our results highlight a constructive perspective of adversarial learning, transforming perturbation from a destructive attack into a regularizing force for cross-domain transferability.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Transfer learning across domains with distribution shift remains a fundamental challenge in building robust and adaptable machine learning systems. While adversarial perturbations are traditionally viewed as threats that expose model vulnerabilities, recent studies suggest that they can also serve as constructive tools for data augmentation. In this work, we systematically investigate the role of adversarial data augmentation (ADA) in enhancing both robustness and adaptivity in transfer learning settings. We analyze how adversarial examples, when used strategically during training, improve domain generalization by enriching decision boundaries and reducing overfitting to source-domain-specific features. We further propose a unified framework that integrates ADA with consistency regularization and domain-invariant representation learning. Extensive experiments across multiple benchmark datasets -- including VisDA, DomainNet, and Office-Home -- demonstrate that our method consistently improves target-domain performance under both unsupervised and few-shot domain adaptation settings. Our results highlight a constructive perspective of adversarial learning, transforming perturbation from a destructive attack into a regularizing force for cross-domain transferability.
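The training signal described in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: it stands in a logistic-regression "model" for the network, uses an FGSM-style perturbation as the adversarial augmentation, and adds a squared prediction gap as the consistency term. The names `fgsm_augment` and `ada_consistency_loss`, and the hyperparameters `eps` and `lam`, are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_augment(x, y, w, b, eps):
    """FGSM-style adversarial example for a logistic-regression surrogate.

    The gradient of the binary cross-entropy w.r.t. the input x is
    (p - y) * w, so the perturbation is eps * sign((p - y) * w).
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)

def ada_consistency_loss(x, y, w, b, eps=0.1, lam=1.0):
    """Clean CE + adversarial CE + consistency (squared prediction gap)."""
    x_adv = fgsm_augment(x, y, w, b, eps)
    p_clean = sigmoid(x @ w + b)
    p_adv = sigmoid(x_adv @ w + b)
    ce = lambda p: -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    consistency = np.mean((p_clean - p_adv) ** 2)
    return ce(p_clean) + ce(p_adv) + lam * consistency
```

Because this surrogate is linear in the input, the FGSM step always increases the per-sample loss, which is what makes the perturbation a useful training signal rather than mere noise.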
Related papers
- Self-Paced Collaborative and Adversarial Network for Unsupervised Domain Adaptation [74.27130400558013]
This paper proposes a new unsupervised domain adaptation approach called Collaborative and Adversarial Network (CAN). CAN uses the domain-collaborative and domain-adversarial learning strategy for training the neural network. To further enhance the discriminability in the target domain, we propose Self-Paced CAN (SPCAN).
arXiv Detail & Related papers (2025-06-24T02:58:37Z) - Feature Based Methods in Domain Adaptation for Object Detection: A Review Paper [0.6437284704257459]
Domain adaptation aims to enhance the performance of machine learning models when deployed in target domains with distinct data distributions. This review delves into advanced methodologies for domain adaptation, including adversarial learning, discrepancy-based, multi-domain, teacher-student, ensemble, and Vision Language Models. Special attention is given to strategies that minimize the reliance on extensive labeled data, particularly in scenarios involving synthetic-to-real domain shifts.
arXiv Detail & Related papers (2024-12-23T06:34:23Z) - Towards Full-scene Domain Generalization in Multi-agent Collaborative Bird's Eye View Segmentation for Connected and Autonomous Driving [49.03947018718156]
We propose a unified domain generalization framework to be utilized during the training and inference stages of collaborative perception.
We also introduce an intra-system domain alignment mechanism to reduce or potentially eliminate the domain discrepancy among connected and autonomous vehicles.
arXiv Detail & Related papers (2023-11-28T12:52:49Z) - Towards Subject Agnostic Affective Emotion Recognition [8.142798657174332]
EEG signals exhibit subject instability in subject-agnostic affective brain-computer interfaces (aBCIs).
We propose a novel framework: meta-learning based augmented domain adaptation for subject-agnostic aBCIs.
Our proposed approach is shown to be effective in experiments on a public aBCIs dataset.
arXiv Detail & Related papers (2023-10-20T23:44:34Z) - Robust Unsupervised Domain Adaptation by Retaining Confident Entropy via Edge Concatenation [7.953644697658355]
Unsupervised domain adaptation can mitigate the need for extensive pixel-level annotations to train semantic segmentation networks.
We introduce a novel approach to domain adaptation, leveraging the synergy of internal and external information within entropy-based adversarial networks.
We devised a probability-sharing network that integrates diverse information for more effective segmentation.
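As a concrete illustration of the "entropy-based" ingredient above, the sketch below computes the Shannon entropy map of a softmax prediction, the quantity such adversarial UDA methods typically feed to a domain discriminator. This is a generic sketch, not the paper's code; the function name `prediction_entropy` is made up for illustration.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy of a categorical prediction, along the last axis.

    High entropy marks uncertain (often target-domain) predictions;
    entropy-based adversarial UDA trains a discriminator on these maps
    to push target predictions toward confident, source-like ones.
    """
    return -np.sum(probs * np.log(probs + eps), axis=-1)
```

A uniform prediction over C classes gives the maximum entropy log C, while a one-hot prediction gives (numerically) zero.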
arXiv Detail & Related papers (2023-10-11T02:50:16Z) - Common Knowledge Learning for Generating Transferable Adversarial Examples [60.1287733223249]
This paper focuses on an important type of black-box attacks, where the adversary generates adversarial examples by a substitute (source) model.
Existing methods tend to give unsatisfactory adversarial transferability when the source and target models are from different types of DNN architectures.
We propose a common knowledge learning (CKL) framework to learn better network weights to generate adversarial examples.
arXiv Detail & Related papers (2023-07-01T09:07:12Z) - Variational Transfer Learning using Cross-Domain Latent Modulation [1.9662978733004601]
We introduce a novel cross-domain latent modulation mechanism to a variational autoencoder framework so as to achieve effective transfer learning.
Deep representations of the source and target domains are first extracted by a unified inference model and aligned by employing gradient reversal.
The learned deep representations are then cross-modulated to the latent encoding of the alternative domain, where consistency constraints are also applied.
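The "gradient reversal" mentioned above is a small but widely reused trick: identity in the forward pass, sign-flipped (and optionally scaled) gradient in the backward pass, so the upstream feature extractor is driven to maximize the domain classifier's loss. A minimal, framework-free manual-autograd sketch; the class name and the `lam` scaling factor are illustrative assumptions.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lam in backward.

    Placed between a feature extractor and a domain classifier, the sign
    flip turns the classifier's minimization into adversarial alignment:
    the extractor learns features the domain classifier cannot separate.
    """
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_output):
        return -self.lam * grad_output
```

In an autograd framework the same behavior is expressed as a custom function with an identity forward and a negated backward.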
arXiv Detail & Related papers (2022-05-31T03:47:08Z) - Safe Self-Refinement for Transformer-based Domain Adaptation [73.8480218879]
Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain.
It is a challenging problem especially when a large domain gap lies between the source and target domains.
We propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvement from two aspects.
arXiv Detail & Related papers (2022-04-16T00:15:46Z) - Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method demonstrates both its effectiveness and its wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z) - Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z) - Sequential Domain Adaptation through Elastic Weight Consolidation for Sentiment Analysis [3.1473798197405944]
We propose a model-independent framework, Sequential Domain Adaptation (SDA).
Our experiments show that the proposed framework enables simple architectures such as CNNs to outperform complex state-of-the-art models in domain adaptation for sentiment analysis (SA).
In addition, we observe that a hardest-first (Anti-Curriculum) ordering of source domains leads to maximum performance.
arXiv Detail & Related papers (2020-07-02T15:21:56Z)
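The Elastic Weight Consolidation term named in the title above is well defined independently of this paper: a quadratic anchor, weighted by the diagonal Fisher information, that penalizes drifting away from parameters important to previously seen (source) domains. A generic sketch, with `theta_star` and `fisher` assumed to come from the previously trained domain:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Elastic Weight Consolidation penalty:

        (lam / 2) * sum_i F_i * (theta_i - theta_star_i) ** 2

    F_i is the diagonal Fisher information estimated on the previous
    domain; parameters with high F_i are held close to their old values
    while the model adapts to the next domain in the sequence.
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)
```

The total training loss on each new domain is then the task loss plus this penalty, so the sequence of domains is absorbed without catastrophically forgetting earlier ones.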
This list is automatically generated from the titles and abstracts of the papers in this site.