Riemannian Flow Matching for Disentangled Graph Domain Adaptation
- URL: http://arxiv.org/abs/2602.00656v1
- Date: Sat, 31 Jan 2026 11:05:35 GMT
- Title: Riemannian Flow Matching for Disentangled Graph Domain Adaptation
- Authors: Yingxu Wang, Xinwang Liu, Mengzhu Wang, Siyang Gao, Nan Yin
- Abstract summary: Graph Domain Adaptation (GDA) typically uses adversarial learning to align graph embeddings in Euclidean space. DisRFM is a geometry-aware GDA framework that unifies Riemannian embedding and flow-based transport.
- Score: 51.98961391065951
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Domain Adaptation (GDA) typically uses adversarial learning to align graph embeddings in Euclidean space. However, this paradigm suffers from two critical challenges: Structural Degeneration, where hierarchical and semantic representations are entangled, and Optimization Instability, which arises from oscillatory dynamics of minimax adversarial training. To tackle these issues, we propose DisRFM, a geometry-aware GDA framework that unifies Riemannian embedding and flow-based transport. First, to overcome structural degeneration, we embed graphs into a Riemannian manifold. By adopting polar coordinates, we explicitly disentangle structure (radius) from semantics (angle). Then, we enforce topology preservation through radial Wasserstein alignment and semantic discrimination via angular clustering, thereby preventing feature entanglement and collapse. Second, we address the instability of adversarial alignment by using Riemannian flow matching. This method learns a smooth vector field to guide source features toward the target along geodesic paths, guaranteeing stable convergence. The geometric constraints further guide the flow to maintain the disentangled structure during transport. Theoretically, we prove the asymptotic stability of the flow matching and derive a tighter bound for the target risk. Extensive experiments demonstrate that DisRFM consistently outperforms state-of-the-art methods.
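The polar split and the radial alignment admit a compact illustration. Below is a minimal NumPy sketch, not the authors' implementation: it assumes a sphere-style polar decomposition, uses the closed-form 1-D Wasserstein-1 distance (equal sample counts) for the radius distributions, and stands in for the learned flow with a geodesic (slerp) interpolation of the angular part; all function names are illustrative.

```python
import numpy as np

def to_polar(z):
    """Split embeddings into radius (structure) and unit direction (semantics)."""
    r = np.linalg.norm(z, axis=-1, keepdims=True)
    theta = z / np.clip(r, 1e-8, None)
    return r, theta

def radial_wasserstein(r_src, r_tgt):
    """1-D Wasserstein-1 distance between radius distributions
    (closed form: mean absolute difference of sorted samples)."""
    return np.mean(np.abs(np.sort(r_src.ravel()) - np.sort(r_tgt.ravel())))

def geodesic_transport(theta_src, theta_tgt, t):
    """Slerp along the sphere geodesic from source toward target direction,
    a stand-in for the flow that moves features along geodesic paths."""
    cos = np.clip(np.sum(theta_src * theta_tgt, axis=-1, keepdims=True), -1.0, 1.0)
    omega = np.arccos(cos)
    sin_omega = np.clip(np.sin(omega), 1e-8, None)
    return (np.sin((1 - t) * omega) * theta_src
            + np.sin(t * omega) * theta_tgt) / sin_omega

# toy usage with random stand-in embeddings
rng = np.random.default_rng(0)
z_src = rng.normal(size=(128, 16))
z_tgt = 1.5 * rng.normal(size=(128, 16))
r_s, th_s = to_polar(z_src)
r_t, th_t = to_polar(z_tgt)
print("radial W1:", radial_wasserstein(r_s, r_t))
print("midpoint directions:", geodesic_transport(th_s, th_t, 0.5).shape)
```

Because the radius carries hierarchy and the unit direction carries class semantics, the two alignment terms act on separate coordinates, which is the sense in which the construction prevents entanglement.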
Related papers
- Axiomatic On-Manifold Shapley via Optimal Generative Flows [9.595059073171269]
Shapley-based attribution is critical for post-hoc XAI but suffers from off-manifold artifacts due to baselines. We propose a formal theory of on-manifold Aumann-Shapley attributions driven by optimal generative flows.
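As background for this summary: along a straight path, Aumann-Shapley attribution reduces to the familiar integrated-gradients formula, and the paper's point is that straight paths leave the data manifold, which generative flows avoid. A minimal straight-path sketch for contrast (names illustrative, not the paper's code):

```python
import numpy as np

def integrated_gradients(f_grad, x, baseline, steps=64):
    """Aumann-Shapley attribution along the straight path from baseline to x:
    (x - baseline) * integral_0^1 grad_f(baseline + t (x - baseline)) dt,
    approximated with a midpoint Riemann sum."""
    ts = (np.arange(steps) + 0.5) / steps
    grads = np.stack([f_grad(baseline + t * (x - baseline)) for t in ts])
    return (x - baseline) * grads.mean(axis=0)

# toy usage on f(x) = sum(x**2), whose gradient is 2x
f_grad = lambda x: 2.0 * x
x, baseline = np.array([1.0, 2.0]), np.zeros(2)
attr = integrated_gradients(f_grad, x, baseline)
print(attr, "sum equals f(x) - f(baseline):", attr.sum())  # completeness axiom
```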
arXiv Detail & Related papers (2026-03-05T12:05:20Z) - Path-Decoupled Hyperbolic Flow Matching for Few-Shot Adaptation [36.30669615593167]
We argue that Euclidean-based Flow Matching overlooks fundamental limitations of flat geometry. We propose path-decoupled Hyperbolic Flow Matching, leveraging the Lorentz manifold's exponential expansion for trajectory decoupling. Our code and models will be released.
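The "exponential expansion" refers to a standard property of the Lorentz (hyperboloid) model: geodesic distances from a point grow through cosh/sinh factors. A small self-contained sketch of the exponential map (textbook geometry, not the paper's code):

```python
import numpy as np

def minkowski_inner(u, v):
    """Lorentzian inner product <u, v> = -u0*v0 + sum_i ui*vi."""
    return -u[0] * v[0] + np.dot(u[1:], v[1:])

def lorentz_expmap(x, v):
    """Exponential map on the hyperboloid {<x, x> = -1, x0 > 0}: follow the
    geodesic from x with initial velocity v. The cosh/sinh factors are the
    exponential expansion the summary refers to."""
    vnorm = np.sqrt(max(minkowski_inner(v, v), 1e-12))
    return np.cosh(vnorm) * x + np.sinh(vnorm) * (v / vnorm)

origin = np.array([1.0, 0.0, 0.0])   # base point of the 2-D hyperboloid
tangent = np.array([0.0, 2.0, 0.0])  # tangent vectors at the origin have u0 = 0
y = lorentz_expmap(origin, tangent)
print(y, "still on the manifold:", np.isclose(minkowski_inner(y, y), -1.0))
```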
arXiv Detail & Related papers (2026-02-24T02:12:58Z) - Learning on the Manifold: Unlocking Standard Diffusion Transformers with Representation Encoders [48.68968421120471]
We show that standard diffusion transformers fail to converge on representations directly. We identify Geometric Interference as the root cause. Our method, RJF, enables the standard DiT-B architecture to converge effectively, achieving an FID of 3.37.
arXiv Detail & Related papers (2026-02-10T18:58:04Z) - Counterfactual Explanations on Robust Perceptual Geodesics [13.054357482525505]
We introduce Perceptual Counterfactual Geodesics (PCG), a method that constructs counterfactuals by tracing geodesics under a metric induced from robust vision features. This geometry aligns with human perception and penalizes brittle directions, enabling smooth, on-manifold, semantically valid transitions. Experiments on three vision datasets show that PCG outperforms baselines and reveals failure modes hidden under standard metrics.
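One way to read "a metric induced from robust vision features" is as a pullback metric: path length is measured in the feature space of a robust encoder, and geodesics minimize that length. A toy sketch of the length computation with a stand-in linear encoder (my own illustration; the real method uses robust vision features and true geodesic optimization):

```python
import numpy as np

def pullback_path_length(points, encode):
    """Discrete length of a path under the metric pulled back through an
    encoder: sum of feature-space distances between consecutive steps."""
    feats = [encode(p) for p in points]
    return sum(np.linalg.norm(b - a) for a, b in zip(feats, feats[1:]))

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 4))          # stand-in "robust" feature map
encode = lambda x: W @ x
path = [t * np.ones(4) for t in np.linspace(0.0, 1.0, 11)]
print("feature-space path length:", pullback_path_length(path, encode))
```

A counterfactual is then found by tracing the length-minimizing path toward the decision boundary rather than moving in raw pixel space.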
arXiv Detail & Related papers (2026-01-26T16:52:54Z) - Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations [57.179679246370114]
We identify the distribution of random perturbations that minimizes the estimator's variance as the perturbation stepsize tends to zero. Our findings reveal that such desired perturbations can align directionally with the true gradient, instead of maintaining a fixed length.
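For reference, the two-point estimator in question has the closed form g = (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u. The sketch below, on a quadratic where the estimate is exact along any direction, shows why a unit direction aligned with the true gradient recovers the gradient with zero variance; the comparison setup is my own toy example:

```python
import numpy as np

def two_point_grad(f, x, u, mu=1e-4):
    """Two-point zeroth-order gradient estimate along direction u:
    approaches (grad_f(x) . u) * u as mu -> 0."""
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

rng = np.random.default_rng(1)
f = lambda x: 0.5 * np.dot(x, x)   # gradient is x
x = rng.normal(size=10)
true_grad = x

u_random = rng.normal(size=10)                      # generic Gaussian perturbation
u_aligned = true_grad / np.linalg.norm(true_grad)   # directionally aligned, unit length

print("error, random direction :", np.linalg.norm(two_point_grad(f, x, u_random) - true_grad))
print("error, aligned direction:", np.linalg.norm(two_point_grad(f, x, u_aligned) - true_grad))
```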
arXiv Detail & Related papers (2025-10-22T19:06:39Z) - Latent Iterative Refinement Flow: A Geometric-Constrained Approach for Few-Shot Generation [5.062604189239418]
We introduce Latent Iterative Refinement Flow (LIRF), a novel approach to few-shot generation. LIRF establishes a stable latent space using an autoencoder trained with our novel manifold-preservation loss. Candidate samples are then refined by a geometric correction operator, a provably contractive mapping.
arXiv Detail & Related papers (2025-09-24T08:57:21Z) - Decentralized Online Riemannian Optimization Beyond Hadamard Manifolds [9.940555460165545]
We analyze a curvature-aware geodesic step that enables convergence beyond Hadamard manifolds. We employ gradient smoothing techniques and demonstrate an $O(\sqrt{T})$ regret bound.
arXiv Detail & Related papers (2025-09-09T14:14:46Z) - Riemannian Variational Flow Matching for Material and Protein Design [37.328940532069424]
In Euclidean space, predicting endpoints (VFM), velocities (FM), or noise (diffusion) is largely equivalent, owing to the affine structure of the space. On curved manifolds this equivalence breaks down, and we hypothesize that endpoint prediction provides a stronger learning signal. Building on this insight, we derive a variational flow matching objective. Experiments on synthetic spherical and hyperbolic benchmarks, as well as real-world tasks in material and protein generation, demonstrate that RG-VFM more effectively captures manifold structure.
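The claimed inequivalence is easy to state on the unit sphere: a flow-matching loss regresses the geodesic velocity at an interpolated point, while a variational (endpoint) loss regresses the endpoint itself, and on a curved manifold no affine change of variables turns one target into the other. A standard-geometry sketch of the two targets (not the paper's code):

```python
import numpy as np

def slerp(x0, x1, t):
    """Geodesic interpolant on the unit sphere between x0 and x1."""
    om = np.arccos(np.clip(np.dot(x0, x1), -1.0, 1.0))
    return (np.sin((1 - t) * om) * x0 + np.sin(t * om) * x1) / np.sin(om)

def sphere_log(x, y):
    """Log map: tangent vector at x pointing toward y, with norm d(x, y)."""
    om = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    v = y - np.dot(x, y) * x
    return om * v / np.linalg.norm(v)

rng = np.random.default_rng(2)
x0, x1 = rng.normal(size=3), rng.normal(size=3)
x0, x1 = x0 / np.linalg.norm(x0), x1 / np.linalg.norm(x1)

t = 0.3
xt = slerp(x0, x1, t)
velocity_target = sphere_log(xt, x1) / (1 - t)  # FM target: geodesic velocity at xt
endpoint_target = x1                            # VFM target: the endpoint itself
print("FM target :", velocity_target)
print("VFM target:", endpoint_target)
```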
arXiv Detail & Related papers (2025-02-18T16:02:10Z) - Riemannian Federated Learning via Averaging Gradient Streams [10.533809913888591]
Federated learning (FL), as a distributed learning paradigm, has a significant advantage in addressing large-scale machine learning tasks. This paper presents and analyzes a new efficient server aggregation scheme, called RFedAGS, based on averaging gradient streams. We show that the proposed RFedAGS achieves global convergence with a sublinear convergence rate under decaying step sizes.
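A sketch of the general idea of aggregating client gradient information on a manifold, here projecting client gradients to a common tangent space on the unit sphere, averaging, and retracting (illustrative only; RFedAGS's exact aggregation rule is in the paper):

```python
import numpy as np

def sphere_rgrad(x, egrad):
    """Riemannian gradient on the unit sphere: project the Euclidean
    gradient onto the tangent space at x."""
    return egrad - np.dot(x, egrad) * x

def server_step(x, client_egrads, lr=0.1):
    """Average the clients' tangent gradients at the shared iterate x,
    take a descent step, and retract back onto the sphere by normalization."""
    g = np.mean([sphere_rgrad(x, eg) for eg in client_egrads], axis=0)
    x_new = x - lr * g
    return x_new / np.linalg.norm(x_new)

# toy usage: each client holds a target a_i and sends the gradient of its
# local squared distance f_i(x) = ||x - a_i||^2, i.e. 2 * (x - a_i)
rng = np.random.default_rng(4)
targets = [rng.normal(size=5) for _ in range(3)]
x = rng.normal(size=5)
x /= np.linalg.norm(x)
for _ in range(100):
    x = server_step(x, [2 * (x - a) for a in targets])
print("iterate stays on the sphere:", np.isclose(np.linalg.norm(x), 1.0))
```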
arXiv Detail & Related papers (2024-09-11T12:28:42Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
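The stabilization mechanism is an averaging step in the sense of nonexpansive operator theory: run the base optimizer, then interpolate linearly between the old and new iterates. A toy sketch (my own example, not the paper's) on the classic bilinear game f(x, y) = x*y, where plain gradient descent-ascent spirals outward but the interpolated outer step converges:

```python
import numpy as np

def interpolated_step(theta, inner_steps, alpha=0.5):
    """Outer update: run the base optimizer, then interpolate linearly
    between old and new iterates. For a nonexpansive base map this is a
    Krasnosel'skii-Mann averaging step, which damps oscillations."""
    theta_fast = inner_steps(theta.copy())
    return theta + alpha * (theta_fast - theta)

def gda_steps(params, lr=0.5, k=5):
    """k steps of gradient descent-ascent on f(x, y) = x * y,
    which diverges on its own."""
    for _ in range(k):
        x, y = params
        params = np.array([x - lr * y, y + lr * x])  # descend in x, ascend in y
    return params

params = np.array([1.0, 1.0])
for _ in range(50):
    params = interpolated_step(params, gda_steps)
print("distance to the equilibrium (0, 0):", np.linalg.norm(params))
```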
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Last-Iterate Convergence of Adaptive Riemannian Gradient Descent for Equilibrium Computation [52.73824786627612]
This paper establishes new convergence results for geodesically strongly monotone games. Our key result shows that RGD attains last-iterate linear convergence in a geometry-agnostic fashion. Overall, this paper presents the first geometry-agnostic last-iterate convergence analysis for games beyond the Euclidean setting.
arXiv Detail & Related papers (2023-06-29T01:20:44Z) - Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method based on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)