Adversarial Data Augmentation for Single Domain Generalization via Lyapunov Exponent-Guided Optimization
- URL: http://arxiv.org/abs/2507.04302v1
- Date: Sun, 06 Jul 2025 09:03:08 GMT
- Title: Adversarial Data Augmentation for Single Domain Generalization via Lyapunov Exponent-Guided Optimization
- Authors: Zuyu Zhang, Ning Chen, Yongshan Liu, Qinghua Zhang, Xu Zhang
- Abstract summary: Single Domain Generalization aims to develop models capable of generalizing to unseen target domains using only one source domain. We propose LEAwareSGD, a novel Lyapunov Exponent (LE)-guided optimization approach inspired by dynamical systems theory. Experiments on PACS, OfficeHome, and DomainNet demonstrate that LEAwareSGD yields substantial generalization gains.
- Score: 6.619253289031494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single Domain Generalization (SDG) aims to develop models capable of generalizing to unseen target domains using only one source domain, a task complicated by substantial domain shifts and limited data diversity. Existing SDG approaches primarily rely on data augmentation techniques, which struggle to effectively adapt training dynamics to accommodate large domain shifts. To address this, we propose LEAwareSGD, a novel Lyapunov Exponent (LE)-guided optimization approach inspired by dynamical systems theory. By leveraging LE measurements to modulate the learning rate, LEAwareSGD encourages model training near the edge of chaos, a critical state that optimally balances stability and adaptability. This dynamic adjustment allows the model to explore a wider parameter space and capture more generalizable features, ultimately enhancing the model's generalization capability. Extensive experiments on PACS, OfficeHome, and DomainNet demonstrate that LEAwareSGD yields substantial generalization gains, achieving up to 9.47% improvement on PACS in low-data regimes. These results underscore the effectiveness of training near the edge of chaos for enhancing model generalization capability in SDG tasks.
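As a rough sketch of the mechanism the abstract describes, the snippet below modulates SGD's learning rate with a simple LE proxy, the log expansion rate of successive parameter updates, and drives training toward LE ≈ 0 (the edge of chaos). The proxy, the exponential learning-rate rule, and names such as `target_le` and `sensitivity` are illustrative assumptions, not the authors' exact LEAwareSGD algorithm.

```python
import math
import torch
import torch.nn as nn

def flat_params(model: nn.Module) -> torch.Tensor:
    """Concatenate all parameters into one flat, detached vector."""
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

base_lr, target_le, sensitivity = 1e-2, 0.0, 0.5  # LE ~ 0 marks the edge of chaos
prev_theta = flat_params(model)
prev_delta = None

for step in range(100):
    x = torch.randn(64, 10)              # stand-in for a source-domain batch
    y = torch.randint(0, 2, (64,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

    theta = flat_params(model)
    delta = theta - prev_theta           # parameter displacement this step
    if prev_delta is not None and prev_delta.norm() > 0 and delta.norm() > 0:
        # LE proxy: log expansion rate between successive update norms.
        le_proxy = math.log(delta.norm().item() / prev_delta.norm().item())
        # Expanding trajectory (proxy above target): cool the LR toward stability;
        # contracting trajectory (proxy below target): raise it to keep exploring.
        new_lr = base_lr * math.exp(-sensitivity * (le_proxy - target_le))
        for group in opt.param_groups:
            group["lr"] = min(max(new_lr, 1e-4), 1e-1)
    prev_theta, prev_delta = theta, delta
```

The clamp on the learning rate keeps the feedback loop from diverging when the proxy is noisy, which is the practical risk of any trajectory-based LE estimate.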
Related papers
- Exploring Probabilistic Modeling Beyond Domain Generalization for Semantic Segmentation [37.724608645202466]
Domain Generalized Semantic Segmentation (DGSS) is a critical yet challenging task, as domain shifts in unseen environments can severely compromise model performance. This paper introduces PDAF, a Probabilistic Diffusion Alignment Framework that enhances the generalization of existing segmentation networks. Experiments validate the effectiveness of PDAF across diverse and challenging urban scenes.
arXiv Detail & Related papers (2025-07-28T22:27:58Z)
- Learning Time-Aware Causal Representation for Model Generalization in Evolving Domains [50.66049136093248]
We develop a time-aware structural causal model (SCM) that incorporates dynamic causal factors and causal mechanism drifts. We show that our method can yield the optimal causal predictor for each time domain. Results on both synthetic and real-world datasets show that SYNC achieves superior temporal generalization performance.
arXiv Detail & Related papers (2025-06-21T14:05:37Z)
- DISCO Balances the Scales: Adaptive Domain- and Difficulty-Aware Reinforcement Learning on Imbalanced Data [29.06340707914799]
We propose a principled extension to GRPO that addresses inter-group imbalance with two key innovations. Domain-aware reward scaling counteracts frequency bias by reweighting optimization based on domain prevalence. Difficulty-aware reward scaling leverages prompt-level self-consistency to identify and prioritize uncertain prompts that offer greater learning value.
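A minimal sketch of how these two scalings might compose before a GRPO-style update, assuming an inverse-square-root prevalence weight and an agreement-based self-consistency weight (both illustrative choices, not the paper's exact formulas):

```python
from collections import Counter
import math

def domain_weight(domain: str, domain_counts: Counter) -> float:
    """Down-weight rewards from frequent domains so rare ones are not drowned out."""
    prevalence = domain_counts[domain] / sum(domain_counts.values())
    return 1.0 / math.sqrt(prevalence)

def difficulty_weight(sampled_answers: list[str]) -> float:
    """Prompt-level self-consistency: low agreement among sampled answers
    signals an uncertain, high-learning-value prompt -> larger weight."""
    counts = Counter(sampled_answers)
    top_frac = counts.most_common(1)[0][1] / len(sampled_answers)
    return 1.0 + (1.0 - top_frac)  # unanimous -> 1.0, maximally split -> near 2.0

def scaled_reward(r: float, domain: str, domain_counts: Counter,
                  sampled_answers: list[str]) -> float:
    return r * domain_weight(domain, domain_counts) * difficulty_weight(sampled_answers)

# Example: a reward from a rare domain with disagreeing samples gets boosted.
counts = Counter({"math": 900, "law": 100})
print(scaled_reward(1.0, "law", counts, ["A", "B", "A", "C"]))  # ~4.74
```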
arXiv Detail & Related papers (2025-05-21T03:43:29Z)
- PEER pressure: Model-to-Model Regularization for Single Source Domain Generalization [12.15086255236961]
We show that the performance of such augmentation-based methods in the target domains universally fluctuates during training. We propose a novel generalization method, coined Parameter-Space Ensemble with Entropy Regularization (PEER), that uses a proxy model to learn the augmented data.
arXiv Detail & Related papers (2025-05-19T06:01:11Z)
- CLIP-Powered Domain Generalization and Domain Adaptation: A Comprehensive Survey [38.281260447611395]
This survey systematically explores the applications of Contrastive Language-Image Pretraining (CLIP) in domain generalization (DG) and domain adaptation (DA). CLIP offers powerful zero-shot capabilities that allow models to perform effectively in unseen domains. Key challenges, including overfitting, domain diversity, and computational efficiency, are addressed.
arXiv Detail & Related papers (2025-04-19T12:27:24Z)
- Let Synthetic Data Shine: Domain Reassembly and Soft-Fusion for Single Domain Generalization [68.41367635546183]
Single Domain Generalization aims to train models with consistent performance across diverse scenarios using data from a single source. We propose Discriminative Domain Reassembly and Soft-Fusion (DRSF), a training framework leveraging synthetic data to improve model generalization.
arXiv Detail & Related papers (2025-03-17T18:08:03Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome domain shift by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks: PACS, Office-Home, and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG).
Based on this formulation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
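A generic primal-dual (Lagrangian) update of the kind this summary describes, with a toy cross-covariance penalty standing in for the actual disentanglement constraint; the constraint form, `eps`, and step sizes are placeholder assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
enc = nn.Linear(8, 4)   # toy encoder: first 2 dims "semantic", last 2 "variation"
head = nn.Linear(2, 2)
opt = torch.optim.SGD(list(enc.parameters()) + list(head.parameters()), lr=0.05)
lam, eps, dual_lr = torch.tensor(0.0), 0.05, 0.1  # multiplier, tolerance, dual step

for step in range(200):
    x = torch.randn(128, 8)
    y = (x[:, 0] > 0).long()
    z = enc(x)
    sem, var = z[:, :2], z[:, 2:]
    task_loss = nn.functional.cross_entropy(head(sem), y)
    # Placeholder constraint: squared cross-covariance between the two
    # representation blocks should stay below eps (decorrelate the blocks).
    sem_c, var_c = sem - sem.mean(0), var - var.mean(0)
    violation = (sem_c.T @ var_c / len(x)).pow(2).mean() - eps
    opt.zero_grad()
    (task_loss + lam * violation).backward()  # primal descent on the Lagrangian
    opt.step()
    with torch.no_grad():                     # dual ascent: raise lam while violated
        lam = torch.clamp(lam + dual_lr * violation.detach(), min=0.0)
```

The multiplier rises only while the constraint is violated and decays back toward zero otherwise, which is the defining behavior of a primal-dual scheme.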
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- Amortized Prompt: Lightweight Fine-Tuning for CLIP in Domain Generalization [25.367775241988618]
Domain generalization is a difficult transfer learning problem aiming to learn a model that generalizes to unseen domains.
Recent massive pre-trained models such as CLIP and GPT-3 have been shown to be robust to many distribution shifts.
We propose AP (Amortized Prompt) as a novel approach for domain inference in the form of prompt generation.
arXiv Detail & Related papers (2021-11-25T00:25:54Z)
- Contrastive Syn-to-Real Generalization [125.54991489017854]
We make a key observation that the diversity of the learned feature embeddings plays an important role in the generalization performance.
We propose contrastive synthetic-to-real generalization (CSG), a novel framework that leverages the pre-trained ImageNet knowledge to prevent overfitting to the synthetic domain.
We demonstrate the effectiveness of CSG on various synthetic training tasks, exhibiting state-of-the-art performance on zero-shot domain generalization.
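A hedged sketch of the underlying contrastive idea: align each synthetic image's trainable features with the frozen ImageNet-pretrained features of the same image, against in-batch negatives. The InfoNCE form and temperature are common choices assumed here, not necessarily CSG's exact loss:

```python
import torch
import torch.nn.functional as F

def syn_to_real_contrastive(f_student: torch.Tensor,
                            f_frozen: torch.Tensor,
                            tau: float = 0.1) -> torch.Tensor:
    """f_student: (B, D) features of synthetic images from the trainable model.
    f_frozen:  (B, D) features of the same images from a frozen ImageNet model."""
    zs = F.normalize(f_student, dim=1)
    zf = F.normalize(f_frozen, dim=1)
    logits = zs @ zf.T / tau              # (B, B) cosine-similarity matrix
    targets = torch.arange(len(zs))       # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: total = task_loss + lambda_c * syn_to_real_contrastive(f_s, f_f.detach())
```

Detaching the frozen-model features keeps the pretrained representation fixed as an anchor, which is what prevents drift toward synthetic-only statistics.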
arXiv Detail & Related papers (2021-04-06T05:10:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.