Learning Topology-Driven Multi-Subspace Fusion for Grassmannian Deep Network
- URL: http://arxiv.org/abs/2511.08628v2
- Date: Fri, 14 Nov 2025 04:39:42 GMT
- Title: Learning Topology-Driven Multi-Subspace Fusion for Grassmannian Deep Network
- Authors: Xuan Yu, Tianyang Xu
- Abstract summary: The Grassmannian manifold offers a powerful carrier for geometric representation learning. We propose a topology-driven multi-subspace fusion framework that enables adaptive subspace collaboration on the Grassmannian. Our work advances geometric deep learning and adapts the proven multi-channel interaction philosophy of Euclidean networks to non-Euclidean domains.
- Score: 31.003374497881968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Grassmannian manifold offers a powerful carrier for geometric representation learning by modelling high-dimensional data as low-dimensional subspaces. However, existing approaches predominantly rely on static single-subspace representations, neglecting the dynamic interplay between multiple subspaces critical for capturing complex geometric structures. To address this limitation, we propose a topology-driven multi-subspace fusion framework that enables adaptive subspace collaboration on the Grassmannian. Our solution introduces two key innovations: (1) Inspired by the Kolmogorov-Arnold representation theorem, an adaptive multi-subspace modelling mechanism is proposed that dynamically selects and weights task-relevant subspaces via topological convergence analysis, and (2) a multi-subspace interaction block that fuses heterogeneous geometric representations through Fréchet mean optimisation on the manifold. Theoretically, we establish the convergence guarantees of adaptive subspaces under a projection metric topology, ensuring stable gradient-based optimisation. Practically, we integrate Riemannian batch normalisation and mutual information regularisation to enhance discriminability and robustness. Extensive experiments on 3D action recognition (HDM05, FPHA), EEG classification (MAMEM-SSVEPII), and graph tasks demonstrate state-of-the-art performance. Our work not only advances geometric deep learning but also successfully adapts the proven multi-channel interaction philosophy of Euclidean networks to non-Euclidean domains, achieving superior discriminability and interpretability.
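The abstract's fusion step, a Fréchet mean of subspaces under the projection metric, admits a well-known closed form: average the subspaces' projection matrices and take the dominant eigenspace of the result. The sketch below is a minimal numpy illustration of that construction under stated assumptions; it is not the paper's implementation, and all function names are our own.

```python
import numpy as np

def orthonormalize(A):
    # Orthonormal basis of the column space of A, via (reduced) QR.
    Q, _ = np.linalg.qr(A)
    return Q

def projection_distance(X, Y):
    # Projection metric on the Grassmannian for orthonormal bases X, Y:
    # d(X, Y) = ||X X^T - Y Y^T||_F / sqrt(2)
    return np.linalg.norm(X @ X.T - Y @ Y.T, "fro") / np.sqrt(2)

def frechet_mean_projection(subspaces, weights=None):
    # Weighted Fréchet mean under the projection-embedding metric.
    # Closed form: the weighted average of projection matrices, followed
    # by projection back onto the manifold via its top-p eigenspace
    # (Ky Fan: the dominant eigenspace maximises tr(P P_bar)).
    n, p = subspaces[0].shape
    if weights is None:
        weights = np.full(len(subspaces), 1.0 / len(subspaces))
    P_bar = sum(w * (X @ X.T) for w, X in zip(weights, subspaces))
    eigvals, eigvecs = np.linalg.eigh(P_bar)  # ascending eigenvalues
    return eigvecs[:, -p:]  # top-p eigenvectors span the mean subspace
```

Because the minimiser is available in closed form here, no iterative Karcher-mean loop is needed; adaptive weights (as in the paper's topology-driven selection) simply enter through the `weights` argument.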
Related papers
- Riemannian Langevin Dynamics: Strong Convergence of Geometric Euler-Maruyama Scheme [51.56484100374058]
Low-dimensional structure in real-world data plays an important role in the success of generative models. We prove a convergence theory for numerical schemes for manifold-valued differential equations.
arXiv Detail & Related papers (2026-03-04T01:29:35Z) - Multivariate Time Series Forecasting with Hybrid Euclidean-SPD Manifold Graph Neural Networks [31.893767537160258]
We propose a graph neural network-based model that captures data geometry within a hybrid Euclidean-Riemannian framework. HSMGNN achieves up to a 13.8 percent improvement over state-of-the-art baselines in forecasting accuracy.
arXiv Detail & Related papers (2025-12-16T02:42:03Z) - CUS-GS: A Compact Unified Structured Gaussian Splatting Framework for Multimodal Scene Representation [16.85102888388904]
CUS-GS is a compact unified structured Gaussian Splatting representation. We propose a feature-aware significance evaluation strategy to guide anchor growing and pruning. CUS-GS achieves competitive performance compared to state-of-the-art methods using as few as 6M parameters.
arXiv Detail & Related papers (2025-11-22T03:42:49Z) - The Neural Differential Manifold: An Architecture with Explicit Geometric Structure [8.201374511929538]
This paper introduces the Neural Differential Manifold (NDM), a novel neural network architecture that explicitly incorporates geometric structure into its fundamental design. We analyze the theoretical advantages of this approach, including its potential for more efficient optimization, enhanced continual learning, and applications in scientific discovery and controllable generative modeling.
arXiv Detail & Related papers (2025-10-29T02:24:27Z) - Riemannian Consistency Model [57.933800575074535]
We propose the Riemannian Consistency Model (RCM), which, for the first time, enables few-step consistency modeling. We derive the closed-form solutions for both discrete- and continuous-time training objectives for RCM. We provide a unique kinematics perspective for interpreting the RCM objective, offering new theoretical angles.
arXiv Detail & Related papers (2025-10-01T14:57:25Z) - High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations [51.90920900332569]
Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data. Recent approaches address this by introducing additional features along rigid geometric structures. We propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR).
arXiv Detail & Related papers (2025-06-07T16:45:17Z) - Cross-Modal and Uncertainty-Aware Agglomeration for Open-Vocabulary 3D Scene Understanding [58.38294408121273]
We propose Cross-modal and Uncertainty-aware Agglomeration for Open-vocabulary 3D Scene Understanding, dubbed CUA-O3D. Our method addresses two key challenges: (1) incorporating semantic priors from VLMs alongside the geometric knowledge of spatially-aware vision foundation models, and (2) using a novel deterministic uncertainty estimation to capture model-specific uncertainties.
arXiv Detail & Related papers (2025-03-20T20:58:48Z) - Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z) - From Semantics to Hierarchy: A Hybrid Euclidean-Tangent-Hyperbolic Space Model for Temporal Knowledge Graph Reasoning [1.1372536310854844]
Temporal knowledge graph (TKG) reasoning predicts future events based on historical data.
Existing Euclidean models excel at capturing semantics but struggle with hierarchy.
We propose a novel hybrid geometric space approach that leverages the strengths of both Euclidean and hyperbolic models.
arXiv Detail & Related papers (2024-08-30T10:33:08Z) - Knowledge-based Multiple Adaptive Spaces Fusion for Recommendation [35.20583774988951]
We propose a knowledge-based multiple adaptive spaces fusion method for recommendation, namely MCKG.
Unlike existing methods that solely adopt a specific manifold, we introduce a unified space that is compatible with hyperbolic, Euclidean and spherical spaces.
In addition, we propose a geometry-aware optimization strategy that enables the pull and push processes to benefit from both hyperbolic and spherical spaces.
arXiv Detail & Related papers (2023-08-29T12:11:16Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Deep Diversity-Enhanced Feature Representation of Hyperspectral Images [87.47202258194719]
We rectify 3D convolution by modifying its topology to enhance the rank upper-bound.
We also propose a novel diversity-aware regularization (DA-Reg) term that acts on the feature maps to maximize independence among elements.
To demonstrate the superiority of the proposed Re$3$-ConvSet and DA-Reg, we apply them to various HS image processing and analysis tasks.
arXiv Detail & Related papers (2023-01-15T16:19:18Z) - Machine Learning and Polymer Self-Consistent Field Theory in Two Spatial
Dimensions [0.491574468325115]
A computational framework that leverages data from self-consistent field theory simulations with deep learning is presented.
A generative adversarial network (GAN) is introduced to efficiently and accurately predict saddle point, local average monomer density fields.
This GAN approach yields important savings of both memory and computational cost.
arXiv Detail & Related papers (2022-12-16T04:30:16Z) - Manifold Topology Divergence: a Framework for Comparing Data Manifolds [109.0784952256104]
We develop a framework for comparing data manifolds, aimed at the evaluation of deep generative models.
Based on the Cross-Barcode, we introduce the Manifold Topology Divergence score (MTop-Divergence).
We demonstrate that MTop-Divergence accurately detects various degrees of mode-dropping, intra-mode collapse, mode invention, and image disturbance.
arXiv Detail & Related papers (2021-06-08T00:30:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.