Geometrically Regularized Transfer Learning with On-Manifold and Off-Manifold Perturbation
- URL: http://arxiv.org/abs/2505.15191v1
- Date: Wed, 21 May 2025 07:13:09 GMT
- Title: Geometrically Regularized Transfer Learning with On-Manifold and Off-Manifold Perturbation
- Authors: Hana Satou, Alan Mitkiy, F Monkey
- Abstract summary: MAADA is a novel framework that decomposes adversarial perturbations into on-manifold and off-manifold components. We show that MAADA consistently outperforms existing adversarial and adaptation methods in both unsupervised and few-shot settings.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Transfer learning under domain shift remains a fundamental challenge due to the divergence between source and target data manifolds. In this paper, we propose MAADA (Manifold-Aware Adversarial Data Augmentation), a novel framework that decomposes adversarial perturbations into on-manifold and off-manifold components to simultaneously capture semantic variation and model brittleness. We theoretically demonstrate that enforcing on-manifold consistency reduces hypothesis complexity and improves generalization, while off-manifold regularization smooths decision boundaries in low-density regions. Moreover, we introduce a geometry-aware alignment loss that minimizes geodesic discrepancy between source and target manifolds. Experiments on DomainNet, VisDA, and Office-Home show that MAADA consistently outperforms existing adversarial and adaptation methods in both unsupervised and few-shot settings, demonstrating superior structural robustness and cross-domain generalization.
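The abstract does not spell out how the on/off-manifold split is computed. Below is a minimal sketch of one standard realization, assuming the data manifold is locally approximated by an orthonormal tangent basis (obtained, e.g., from local PCA or a decoder Jacobian); all function and variable names are illustrative, not from the paper:

```python
import numpy as np

def decompose_perturbation(delta: np.ndarray, tangent_basis: np.ndarray):
    """Split a perturbation into on-manifold and off-manifold parts.

    tangent_basis: (d, k) matrix with orthonormal columns spanning the
    local tangent space at the clean sample (assumed given here; it
    could come from local PCA or an autoencoder's decoder Jacobian).
    """
    # Orthogonal projection onto the tangent space: T (T^T delta).
    delta_on = tangent_basis @ (tangent_basis.T @ delta)
    # The residual is orthogonal to the tangent space (off-manifold).
    delta_off = delta - delta_on
    return delta_on, delta_off

def consistency_losses(f, x, delta, tangent_basis):
    """Toy versions of the two regularizers named in the abstract:
    on-manifold consistency (stable predictions along semantic,
    tangent directions) and off-manifold smoothing (flat predictions
    along normal directions in low-density regions)."""
    d_on, d_off = decompose_perturbation(delta, tangent_basis)
    p = f(x)
    loss_on = np.sum((f(x + d_on) - p) ** 2)    # on-manifold consistency
    loss_off = np.sum((f(x + d_off) - p) ** 2)  # off-manifold smoothing
    return loss_on, loss_off

# Toy check: a 2-D tangent plane in a 5-D ambient space.
rng = np.random.default_rng(0)
T, _ = np.linalg.qr(rng.normal(size=(5, 2)))  # orthonormal tangent basis
x, delta = rng.normal(size=5), 0.1 * rng.normal(size=5)
W = rng.normal(size=(3, 5))
f = lambda z: W @ z                           # stand-in for a classifier
d_on, d_off = decompose_perturbation(delta, T)
assert np.allclose(d_on + d_off, delta)       # exact decomposition
assert abs(float(d_on @ d_off)) < 1e-10       # orthogonal components
print(consistency_losses(f, x, delta, T))
```

In the paper the perturbations are presumably generated adversarially before being decomposed, and the actual loss terms may differ (e.g., a KL divergence rather than squared error); the sketch only shows the geometric split.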
Related papers
- Sparse Causal Discovery with Generative Intervention for Unsupervised Graph Domain Adaptation [27.5393760658806]
Unsupervised Graph Domain Adaptation (UGDA) leverages labeled source domain graphs to achieve effective performance in unlabeled target domains despite distribution shifts.
We propose SLOGAN, a novel approach that achieves stable graph representation transfer through sparse causal modeling and dynamic intervention mechanisms.
arXiv Detail & Related papers (2025-07-10T10:42:21Z)
- General and Estimable Learning Bound Unifying Covariate and Concept Shifts [1.1077154107564848]
We bridge the gap between theory and practical applications by developing a novel unified error bound that applies to broad loss functions, label spaces, and labeling.
We also develop an algorithm that can quantify the error bound under most distribution shifts, providing a rigorous and general tool for analyzing learning error under distribution shift.
arXiv Detail & Related papers (2025-06-15T12:18:05Z)
- GAMA++: Disentangled Geometric Alignment with Adaptive Contrastive Perturbation for Reliable Domain Transfer [0.0]
GAMA++ is a novel framework that introduces (i) latent-space disentanglement to isolate label-consistent manifold directions from nuisance factors, and (ii) an adaptive contrastive perturbation strategy that tailors both on- and off-manifold exploration to class-specific manifold curvature and alignment discrepancy (a toy sketch of such an adaptive rule follows this entry).
Our method achieves state-of-the-art results on DomainNet, Office-Home, and VisDA benchmarks under both standard and few-shot settings, with notable improvements in class-level alignment fidelity and boundary robustness.
arXiv Detail & Related papers (2025-05-21T08:16:35Z)
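The adaptive strategy above is described only at this level of detail. Purely as a hypothetical illustration (not the authors' method), one could estimate a class-local curvature proxy from neighborhood PCA and scale the on/off-manifold perturbation budgets with it and with a measured alignment discrepancy:

```python
import numpy as np

def tangent_and_curvature(neighbors: np.ndarray, k: int):
    """Estimate a tangent basis and a crude curvature proxy from a
    class-local neighborhood (a hypothetical choice, not from the paper).
    Returns a (d, k) orthonormal basis and the fraction of variance left
    outside it; a large residual is read here as high curvature."""
    centered = neighbors - neighbors.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)  # local PCA
    var = s ** 2
    return vt[:k].T, 1.0 - var[:k].sum() / var.sum()

def adaptive_budgets(curvature, discrepancy, eps_on=0.5, eps_off=0.1):
    """One plausible adaptive rule: explore along the manifold more where
    it is flat, and perturb off-manifold harder where domains disagree."""
    return eps_on / (1.0 + curvature), eps_off * (1.0 + discrepancy)

rng = np.random.default_rng(1)
neighbors = rng.normal(size=(50, 8))   # toy stand-in for a class neighborhood
basis, curv = tangent_and_curvature(neighbors, k=3)
print(adaptive_budgets(curv, discrepancy=0.2))
```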
- GAMA: Geometry-Aware Manifold Alignment via Structured Adversarial Perturbations for Robust Domain Adaptation [0.0]
GAMA is a structured framework that achieves explicit manifold alignment via adversarial perturbation guided by geometric information.
GAMA tightens the generalization bound via structured regularization and explicit alignment.
Empirical results on DomainNet, VisDA, and Office-Home demonstrate that GAMA consistently outperforms existing adversarial and adaptation methods.
arXiv Detail & Related papers (2025-05-21T07:16:42Z)
- Continuous Domain Generalization [20.41728538658197]
This paper introduces the task of Continuous Domain Generalization (CDG), which aims to generalize predictive models to unseen domains that evolve continuously.
We present a principled framework grounded in geometric and algebraic theory, showing that optimal model parameters across domains lie on a low-dimensional manifold.
Experiments on synthetic and real-world datasets, including remote sensing, scientific documents, and traffic forecasting, demonstrate that our method significantly outperforms existing baselines in generalization accuracy and robustness under descriptor imperfections.
arXiv Detail & Related papers (2025-05-17T12:39:45Z)
- From Deterministic to Probabilistic: A Novel Perspective on Domain Generalization for Medical Image Segmentation [1.93061220186624]
We propose an innovative framework that enhances data representation quality through probabilistic modeling and contrastive learning.
Specifically, we combine deterministic features with uncertainty modeling to capture comprehensive feature distributions.
We show that the proposed framework significantly improves segmentation performance, providing a robust solution to domain generalization challenges in medical image segmentation.
arXiv Detail & Related papers (2024-12-07T07:41:04Z)
- PseudoNeg-MAE: Self-Supervised Point Cloud Learning using Conditional Pseudo-Negative Embeddings [55.55445978692678]
PseudoNeg-MAE enhances the global feature representation of point cloud masked autoencoders by making them both discriminative and sensitive to transformations.
We propose a novel loss that explicitly penalizes invariant collapse, enabling the network to capture richer transformation cues while preserving discriminative representations.
arXiv Detail & Related papers (2024-09-24T07:57:21Z)
- Toward Certified Robustness Against Real-World Distribution Shifts [65.66374339500025]
We train a generative model to learn perturbations from data and define specifications with respect to the output of the learned model.
A unique challenge arising from this setting is that existing verifiers cannot tightly approximate sigmoid activations.
We propose a general meta-algorithm for handling sigmoid activations that leverages classical notions of counter-example-guided abstraction refinement (a toy illustration of such a refinement loop follows this entry).
arXiv Detail & Related papers (2022-06-08T04:09:13Z)
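The paper's meta-algorithm is not reproduced here; the following toy sketch only illustrates the general counter-example-guided refinement idea on a single sigmoid, using a deliberately loose linear bound (valid because sigmoid is 1/4-Lipschitz) that is tightened by interval splitting. All names are illustrative:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def loose_upper(l: float, u: float) -> float:
    # Since sigmoid' <= 1/4 everywhere, s(x) <= s(l) + (x - l)/4 on [l, u];
    # this linear bound is largest at x = u.
    return sigmoid(l) + (u - l) / 4.0

def verify_upper(l, u, c, depth=0, max_depth=20):
    """Prove s(x) <= c for all x in [l, u] by abstraction refinement."""
    if loose_upper(l, u) <= c:
        return True            # the abstraction already proves the property
    if sigmoid(u) > c:         # sigmoid is increasing: max on [l, u] is s(u)
        return False           # concrete counterexample: property is false
    if depth == max_depth:
        return False           # give up (sound but incomplete)
    m = (l + u) / 2.0          # counterexample was spurious: split and retry
    return (verify_upper(l, m, c, depth + 1, max_depth) and
            verify_upper(m, u, c, depth + 1, max_depth))

# s(x) <= 0.9 holds on [-4, 2] (s(2) is about 0.88); the loose bound alone
# cannot show it, but a few refinement steps can.
print(verify_upper(-4.0, 2.0, 0.9))  # True
```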
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of semi-supervised learning (SSL) and domain adaptation (DA).
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as a constrained optimization problem, called Disentanglement-constrained Domain Generalization (DDG).
Based on the transformation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z)
- Mapping conditional distributions for domain adaptation under generalized target shift [0.0]
We consider the problem of unsupervised domain adaptation (UDA) between a source and a target domain under conditional and label shift, a.k.a. Generalized Target Shift (GeTarS).
Recent approaches learn domain-invariant representations, yet they have practical limitations and rely on strong assumptions that may not hold in practice.
In this paper, we explore a novel and general approach to align pretrained representations, which circumvents existing drawbacks.
arXiv Detail & Related papers (2021-10-26T14:25:07Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- MMCGAN: Generative Adversarial Network with Explicit Manifold Prior [78.58159882218378]
We propose to employ explicit manifold learning as a prior to alleviate mode collapse and stabilize the training of GANs.
Our experiments on both the toy data and real datasets show the effectiveness of MMCGAN in alleviating mode collapse, stabilizing training, and improving the quality of generated samples.
arXiv Detail & Related papers (2020-06-18T07:38:54Z)