Rethinking the Flow-Based Gradual Domain Adaptation: A Semi-Dual Optimal Transport Perspective
- URL: http://arxiv.org/abs/2602.01179v1
- Date: Sun, 01 Feb 2026 12:07:54 GMT
- Title: Rethinking the Flow-Based Gradual Domain Adaptation: A Semi-Dual Optimal Transport Perspective
- Authors: Zhichao Chen, Zhan Zhuang, Yunfei Teng, Hao Wang, Fangyikang Wang, Zhengnan Li, Tianqiao Liu, Haoxuan Li, Zhouchen Lin
- Abstract summary: We propose an Entropy-regularized Semi-dual Unbalanced Optimal Transport (E-SUOT) framework to construct intermediate domains. Specifically, we reformulate flow-based GDA as a Lagrangian dual problem and derive an equivalent semi-dual objective that circumvents the need for likelihood estimation.
- Score: 47.7700613919136
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gradual domain adaptation (GDA) aims to mitigate domain shift by progressively adapting models from the source domain to the target domain via intermediate domains. However, real intermediate domains are often unavailable or ineffective, necessitating the synthesis of intermediate samples. Flow-based models have recently been used for this purpose by interpolating between source and target distributions; however, their training typically relies on sample-based log-likelihood estimation, which can discard useful information and thus degrade GDA performance. The key to addressing this limitation is constructing the intermediate domains via samples directly. To this end, we propose an Entropy-regularized Semi-dual Unbalanced Optimal Transport (E-SUOT) framework to construct intermediate domains. Specifically, we reformulate flow-based GDA as a Lagrangian dual problem and derive an equivalent semi-dual objective that circumvents the need for likelihood estimation. However, the dual problem leads to an unstable min-max training procedure. To alleviate this issue, we further introduce entropy regularization to convert it into a more stable alternative optimization procedure. Based on this, we propose a novel GDA training framework and provide theoretical analysis in terms of stability and generalization. Finally, extensive experiments are conducted to demonstrate the efficacy of the E-SUOT framework.
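For intuition, the kind of semi-dual objective the abstract alludes to can be sketched in its standard balanced form. This is a textbook entropic-OT identity, not the paper's exact E-SUOT objective, which additionally relaxes the marginal constraints (unbalanced OT):

$$
\max_{v}\;\mathbb{E}_{x\sim\mu}\!\left[v^{c,\varepsilon}(x)\right]+\mathbb{E}_{y\sim\nu}\!\left[v(y)\right],
\qquad
v^{c,\varepsilon}(x)=-\varepsilon\log\,\mathbb{E}_{y\sim\nu}\!\left[\exp\!\left(\frac{v(y)-c(x,y)}{\varepsilon}\right)\right]
$$

Because the smoothed c-transform $v^{c,\varepsilon}$ has a closed form under entropy regularization, the inner adversarial player of the Lagrangian dual disappears, which matches the abstract's claim that the unstable min-max procedure becomes a more stable alternating optimization.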
Related papers
- Variance Matters: Improving Domain Adaptation via Stratified Sampling [1.7188280334580195]
This paper proposes Variance-Reduced Adaptation via Domain Stratified Sampling (VaRDASS), the first specialised variance-reduction technique for UDA. Experiments on three domain-shift datasets show improved discrepancy-estimation accuracy and target-domain performance; the underlying statistical idea is sketched below.
arXiv Detail & Related papers (2025-12-04T20:01:04Z)
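Purely as an illustration of that statistical idea (the helper names and synthetic data below are hypothetical; this is not VaRDASS itself), stratified sampling draws a fixed quota per stratum and reweights by stratum size, which typically shrinks the variance of a Monte Carlo estimate relative to uniform sampling:

```python
import numpy as np

def plain_mc_estimate(values, n, rng):
    """Uniform Monte Carlo estimate of the mean of `values`."""
    idx = rng.integers(0, len(values), size=n)
    return values[idx].mean()

def stratified_estimate(values, strata, n_per_stratum, rng):
    """Stratified estimate: average within each stratum, then combine
    stratum means weighted by stratum size."""
    total = len(values)
    est = 0.0
    for s in np.unique(strata):
        members = np.where(strata == s)[0]
        idx = rng.choice(members, size=n_per_stratum, replace=True)
        est += (len(members) / total) * values[idx].mean()
    return est

rng = np.random.default_rng(0)
# Hypothetical per-sample discrepancy scores with cluster structure.
strata = np.repeat(np.arange(4), 250)
values = strata * 1.0 + rng.normal(scale=0.1, size=1000)
plain = [plain_mc_estimate(values, 40, rng) for _ in range(500)]
strat = [stratified_estimate(values, strata, 10, rng) for _ in range(500)]
print(f"plain MC std:   {np.std(plain):.4f}")
print(f"stratified std: {np.std(strat):.4f}")  # markedly smaller here
```

With the same total budget (40 samples per estimate), the stratified estimator removes the between-stratum component of the variance, which is exactly the component that dominates when the data are strongly clustered.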
- Moment Alignment: Unifying Gradient and Hessian Matching for Domain Generalization [13.021311628351423]
Domain generalization (DG) seeks to develop models that generalize well to unseen target domains. One line of research in DG focuses on aligning domain-level gradients and Hessians to enhance generalization. The paper introduces Closed-Form Moment Alignment (CMA), a novel DG algorithm that aligns domain-level gradients and Hessians in closed form; a generic form of such an objective is sketched below.
arXiv Detail & Related papers (2025-06-09T02:51:36Z)
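Purely for illustration (the paper derives closed-form solutions; the penalty weights $\lambda,\gamma$ below are hypothetical), a generic gradient-and-Hessian alignment objective over training domains $d$ has the form:

$$
\min_{\theta}\;\sum_{d}\ell_d(\theta)
+\lambda\sum_{d\ne d'}\bigl\lVert\nabla_\theta\ell_d(\theta)-\nabla_\theta\ell_{d'}(\theta)\bigr\rVert_2^2
+\gamma\sum_{d\ne d'}\bigl\lVert\nabla_\theta^2\ell_d(\theta)-\nabla_\theta^2\ell_{d'}(\theta)\bigr\rVert_F^2
$$

Matching first moments (gradients) and second moments (Hessians) of the per-domain losses is what "moment alignment" refers to here.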
- GDO: Gradual Domain Osmosis [5.620015188968398]
We propose a new method called Gradual Domain Osmosis, which aims to achieve smooth knowledge transfer from the source domain to the target domain in Gradual Domain Adaptation (GDA). Traditional GDA methods mitigate domain bias by introducing intermediate domains and self-training strategies, but often face inefficient knowledge transfer or missing intermediate-domain data.
arXiv Detail & Related papers (2025-01-31T14:25:45Z)
- Gradual Domain Adaptation: Theory and Algorithms [13.985514966000189]
Unsupervised domain adaptation (UDA) adapts a model from a labeled source domain to an unlabeled target domain in a one-off way. In this work, we first theoretically analyze gradual self-training, a popular GDA algorithm, and provide a significantly improved generalization bound. We then propose Generative Gradual Domain Adaptation with Optimal Transport (GOAT); the generic OT-interpolation idea is sketched below.
arXiv Detail & Related papers (2023-10-20T23:02:08Z)
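As a minimal sketch of synthesizing intermediate domains along an OT interpolation (this assumes the POT package and synthetic Gaussian data; it is not the exact GOAT procedure):

```python
# Illustrative sketch of OT-based intermediate-domain synthesis.
# Assumes the POT package (`pip install pot`); not the exact GOAT method.
import numpy as np
import ot  # Python Optimal Transport

def intermediate_domains(xs, xt, ts):
    """Synthesize pseudo-intermediate samples by interpolating each
    source point toward its barycentric image under an OT coupling."""
    a, b = ot.unif(len(xs)), ot.unif(len(xt))
    plan = ot.emd(a, b, ot.dist(xs, xt))  # exact OT coupling
    # Barycentric projection of each source point onto the target.
    mapped = (plan @ xt) / plan.sum(axis=1, keepdims=True)
    return [(1.0 - t) * xs + t * mapped for t in ts]

rng = np.random.default_rng(0)
xs = rng.normal(loc=0.0, size=(128, 2))  # source samples
xt = rng.normal(loc=4.0, size=(128, 2))  # shifted target samples
for t, d in zip([0.25, 0.5, 0.75], intermediate_domains(xs, xt, [0.25, 0.5, 0.75])):
    print(t, d.mean(axis=0))  # means drift gradually from 0 toward 4
```

Each interpolated set plays the role of one intermediate domain: the smaller the step in t, the smaller the shift a self-trained model has to bridge at each hop.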
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method; a generic form of the posterior-alignment objective is sketched below.
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
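For illustration only (the paper's constrained maximum cross-domain likelihood formulation is more specific, and $\beta$ here is a hypothetical trade-off weight), minimizing the KL-divergence between per-domain posteriors can be written as:

$$
\min_{\theta}\;\sum_{d}\mathcal{L}_{\mathrm{cls}}^{(d)}(\theta)
+\beta\sum_{d\ne d'}\mathbb{E}_{x}\Bigl[\mathrm{KL}\bigl(p_\theta(y\mid x,d)\,\bigl\Vert\,p_\theta(y\mid x,d')\bigr)\Bigr]
$$

where $p_\theta(y \mid x, d)$ denotes the posterior predicted under domain $d$.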
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain), which learns models for intermediate domains. Experiments on benchmark datasets for unsupervised and semi-supervised domain adaptation verify its efficacy; a generic gradual self-training loop is sketched below.
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
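A minimal, generic sketch of the gradual self-training loop this line of work builds on (AuxSelfTrain's mixing of source and auxiliary-model predictions is more elaborate; the synthetic rotating-Gaussians data below is hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gradual_self_train(model, domains):
    """Adapt across ordered unlabeled domains by iterated
    pseudo-labeling and retraining (generic sketch)."""
    for X in domains:  # ordered from near-source to near-target
        pseudo = model.predict(X)                    # pseudo-label with current model
        model = LogisticRegression().fit(X, pseudo)  # retrain on pseudo-labels
    return model

# Hypothetical slowly rotating two-Gaussian problem: each hop is small,
# so pseudo-labels stay mostly correct along the way.
rng = np.random.default_rng(0)
def make_domain(angle, n=200):
    means = np.array([[np.cos(angle), np.sin(angle)],
                      [-np.cos(angle), -np.sin(angle)]])
    y = rng.integers(0, 2, n)
    X = means[y] + 0.2 * rng.normal(size=(n, 2))
    return X, y

X0, y0 = make_domain(0.0)
model = LogisticRegression().fit(X0, y0)  # labeled source model
steps = np.linspace(0.2, 1.2, 6)
model = gradual_self_train(model, [make_domain(a)[0] for a in steps])
Xt, yt = make_domain(1.2)
print("target accuracy:", model.score(Xt, yt))
```

Skipping the intermediate domains (fitting only on the source and testing at the final angle) is the one-off UDA setting that gradual methods are designed to improve upon.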
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples; the estimator's generic form is sketched below.
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
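For intuition, this rests on a standard Monte Carlo identity (a sketch, not necessarily the paper's exact estimator): when the representation $z$ has tractable densities under source and target, minibatch samples $z_i \sim p_s$ give

$$
\mathrm{KL}\bigl(p_s(z)\,\Vert\,p_t(z)\bigr)
=\mathbb{E}_{z\sim p_s}\bigl[\log p_s(z)-\log p_t(z)\bigr]
\approx\frac{1}{m}\sum_{i=1}^{m}\bigl[\log p_s(z_i)-\log p_t(z_i)\bigr]
$$

so the alignment term can be optimized with ordinary minibatch SGD.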
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA). We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation facilitates prediction in the unlabeled target domain by relying on well-established source-domain information. Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.