Aligning the Unseen in Attributed Graphs: Interplay between Graph Geometry and Node Attributes Manifold
- URL: http://arxiv.org/abs/2601.22806v1
- Date: Fri, 30 Jan 2026 10:34:26 GMT
- Title: Aligning the Unseen in Attributed Graphs: Interplay between Graph Geometry and Node Attributes Manifold
- Authors: Aldric Labarthe, Roland Bouffanais, Julien Randon-Furling
- Abstract summary: We introduce a custom variational autoencoder that separates manifold learning from structural alignment. By quantifying the metric distortion needed to map the attribute manifold onto the graph's Heat Kernel, we transform geometric conflict into a structural descriptor.
- Score: 0.46976113832881716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The standard approach to representation learning on attributed graphs -- i.e., simultaneously reconstructing node attributes and graph structure -- is geometrically flawed, as it merges two potentially incompatible metric spaces. This forces a destructive alignment that erodes information about the graph's underlying generative process. To recover this lost signal, we introduce a custom variational autoencoder that separates manifold learning from structural alignment. By quantifying the metric distortion needed to map the attribute manifold onto the graph's Heat Kernel, we transform geometric conflict into an interpretable structural descriptor. Experiments show our method uncovers connectivity patterns and anomalies undetectable by conventional approaches, proving both their theoretical inadequacy and practical limitations.
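The abstract does not spell out the computation, but the two objects it names are standard: the graph heat kernel H_t = exp(-tL) and a distortion measure between attribute-space distances and diffusion distances. The following sketch (function names and normalization choices are our own, not the paper's) shows one minimal way to turn that geometric conflict into a per-node descriptor:

```python
import numpy as np
from scipy.linalg import expm
from scipy.spatial.distance import pdist, squareform

def heat_kernel(adj: np.ndarray, t: float = 1.0) -> np.ndarray:
    """Graph heat kernel H_t = exp(-t L) from a dense adjacency matrix."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return expm(-t * laplacian)

def metric_distortion(attributes: np.ndarray, adj: np.ndarray, t: float = 1.0) -> np.ndarray:
    """Per-node distortion between attribute-space distances and
    heat-kernel (diffusion) distances; high values flag geometric conflict."""
    d_attr = squareform(pdist(attributes))       # pairwise attribute distances
    h = heat_kernel(adj, t)
    d_diff = squareform(pdist(h))                # diffusion distance: distance between kernel rows
    d_attr /= d_attr.mean()                      # normalize both metrics so the
    d_diff /= d_diff.mean()                      # comparison is scale-free
    return np.abs(d_attr - d_diff).mean(axis=1)  # one distortion score per node
```

Nodes whose attribute neighborhoods disagree most with their diffusion neighborhoods receive the highest scores, which is the kind of interpretable structural descriptor the abstract describes.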
Related papers
- GraphShaper: Geometry-aware Alignment for Improving Transfer Learning in Text-Attributed Graphs [16.624063216788638]
We introduce GraphShaper, a geometry-aware framework that enhances graph encoding through multi-geometric specialization. Our approach employs expert networks tailored to different geometric spaces, dynamically computing fusion weights to adaptively integrate geometric properties. It achieves 9.47% accuracy improvements on citation networks and 7.63% on social networks in zero-shot settings.
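The summary leaves the expert and fusion architecture unspecified; below is a hedged PyTorch sketch of the general pattern it describes (per-node softmax gating over geometry-specific expert encoders), with plain linear layers standing in for the actual geometric experts:

```python
import torch
import torch.nn as nn

class GeometricExpertFusion(nn.Module):
    """Minimal sketch: per-node softmax gating over expert encoders,
    each standing in for a different geometric space."""
    def __init__(self, in_dim: int, out_dim: int, n_experts: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(n_experts)
        )
        self.gate = nn.Linear(in_dim, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)         # (N, n_experts)
        outs = torch.stack([e(x) for e in self.experts], -1)  # (N, out_dim, n_experts)
        return (outs * weights.unsqueeze(1)).sum(-1)          # fused embedding
```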
arXiv Detail & Related papers (2025-10-14T02:48:50Z)
- Graph Alignment via Dual-Pass Spectral Encoding and Latent Space Communication [31.43539830271355]
We propose a novel graph alignment framework that simultaneously enhances node distinctiveness and enforces geometric consistency across latent spaces. Our approach introduces a dual-pass encoder that combines low-pass and high-pass spectral filters to generate embeddings that are both structure-aware and highly discriminative.
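As a rough illustration of the dual-pass idea, one can pair a low-pass operator (the normalized adjacency, which smooths signals) with a high-pass operator (the normalized Laplacian, which sharpens them) and concatenate the results; the sketch below is a generic construction, not the paper's encoder:

```python
import numpy as np

def dual_pass_embeddings(adj: np.ndarray, x: np.ndarray, k: int = 2) -> np.ndarray:
    """Minimal sketch of a dual-pass encoder: low-pass (smoothing) and
    high-pass (sharpening) spectral filters applied k times, concatenated."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    a_norm = d_inv_sqrt @ adj @ d_inv_sqrt       # low-pass operator
    l_norm = np.eye(len(adj)) - a_norm           # high-pass operator (normalized Laplacian)
    low, high = x.copy(), x.copy()
    for _ in range(k):                           # k propagation steps per pass
        low, high = a_norm @ low, l_norm @ high
    return np.concatenate([low, high], axis=1)   # structure-aware + discriminative parts
```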
arXiv Detail & Related papers (2025-09-11T16:36:16Z)
- Mitigating Over-Squashing in Graph Neural Networks by Spectrum-Preserving Sparsification [81.06278257153835]
We propose a graph rewiring method that balances structural bottleneck reduction and graph property preservation. Our method generates graphs with enhanced connectivity while maintaining sparsity and largely preserving the original graph spectrum.
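The paper's rewiring procedure is not described in the summary, but the classic spectrum-preserving baseline it relates to is effective-resistance edge sampling (Spielman-Srivastava). A sketch of that baseline, not the paper's method:

```python
import numpy as np

def effective_resistance_sample(adj: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Spectrum-preserving sparsification: sample edges with probability
    proportional to effective resistance, reweighting to keep the
    Laplacian spectrum approximately unbiased."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    lap = np.diag(adj.sum(axis=1)) - adj
    lap_pinv = np.linalg.pinv(lap)                 # Laplacian pseudoinverse
    rows, cols = np.triu_indices(n, k=1)
    mask = adj[rows, cols] > 0
    rows, cols = rows[mask], cols[mask]
    # R_uv = L+_uu + L+_vv - 2 L+_uv (effective resistance of each edge)
    r_eff = lap_pinv[rows, rows] + lap_pinv[cols, cols] - 2 * lap_pinv[rows, cols]
    r_eff = np.maximum(r_eff, 1e-12)               # guard against numerical negatives
    probs = r_eff / r_eff.sum()
    keep = rng.choice(len(rows), size=n_samples, replace=True, p=probs)
    sparse = np.zeros_like(adj, dtype=float)
    for idx in keep:                               # reweight by 1 / (n_samples * p)
        u, v = rows[idx], cols[idx]
        w = adj[u, v] / (n_samples * probs[idx])
        sparse[u, v] += w
        sparse[v, u] += w
    return sparse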
arXiv Detail & Related papers (2025-06-19T08:01:00Z)
- CurvGAD: Leveraging Curvature for Enhanced Graph Anomaly Detection [23.643189106137008]
We propose CurvGAD, a mixed-curvature graph autoencoder built around the notion of curvature-based geometric anomalies. It introduces two parallel pipelines for enhanced anomaly interpretability. Experiments over 10 real-world datasets demonstrate an improvement of up to 6.5% over state-of-the-art GAD methods.
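For intuition about curvature-based descriptors, the simplest combinatorial edge curvature is Forman's, which for an unweighted graph reduces to F(u,v) = 4 - deg(u) - deg(v); CurvGAD itself is a mixed-curvature autoencoder, so this is only the raw signal such methods build on:

```python
import networkx as nx

def forman_curvature(g: nx.Graph) -> dict:
    """Combinatorial Forman curvature for unweighted graphs.
    Strongly negative edges behave like bottlenecks; outliers in the
    curvature profile can flag geometric anomalies."""
    return {(u, v): 4 - g.degree(u) - g.degree(v) for u, v in g.edges()}
```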
arXiv Detail & Related papers (2025-02-12T17:49:46Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how well the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, named soft manifolds, that can resolve this mismatch.
Using soft manifolds for graph embedding, we can provide continuous spaces for pursuing any data-analysis task over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning [77.1421343649344]
We propose a generalization of Transformers towards operating entirely on the product of constant curvature spaces.
We also provide a kernelized approach to non-Euclidean attention, which enables our model to run with time and memory costs linear in the number of nodes and edges.
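The non-Euclidean variant is specific to the paper, but the kernel trick it builds on is standard linear attention: replace softmax(QK^T)V with phi(Q)(phi(K)^T V), dropping the cost from quadratic to linear in the number of tokens. A generic sketch using the elu+1 feature map from linear Transformers:

```python
import torch
import torch.nn.functional as F

def kernelized_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                         eps: float = 1e-6) -> torch.Tensor:
    """Linear-time attention via a positive feature map:
    phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1), computed in O(N d^2)."""
    phi_q = F.elu(q) + 1                            # (N, d) positive features
    phi_k = F.elu(k) + 1
    kv = phi_k.transpose(-2, -1) @ v                # (d, d_v), one pass over keys
    normalizer = phi_q @ phi_k.sum(dim=-2, keepdim=True).transpose(-2, -1)  # (N, 1)
    return (phi_q @ kv) / (normalizer + eps)
```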
arXiv Detail & Related papers (2023-09-08T02:44:37Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
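For reference, plain (non-kernelized) Gumbel-Softmax sampling of a soft neighbor distribution looks like the sketch below; NodeFormer's contribution is making the kernelized version scale to all node pairs:

```python
import torch

def gumbel_softmax_edges(logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Differentiable sampling of a latent neighbor distribution per node:
    perturb edge logits with Gumbel noise, then apply a temperature softmax
    to obtain soft, gradient-friendly adjacency rows."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return torch.softmax((logits + gumbel) / tau, dim=-1)
```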
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase, it models the distribution of features associated with the nodes of the given graph; in the second, it completes the edge features conditioned on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Self-Supervised Graph Representation Learning via Topology Transformations [61.870882736758624]
We present Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data.
In experiments, we apply the proposed model to the downstream node and graph classification tasks, and results show that the proposed method outperforms the state-of-the-art unsupervised approaches.
arXiv Detail & Related papers (2021-05-25T06:11:03Z)
- Conformal retrofitting via Riemannian manifolds: distilling task-specific graphs into pretrained embeddings [1.2970250708769708]
Pretrained embeddings are versatile, task-agnostic feature representations of entities, like words, that are central to many machine learning applications.
Existing retrofitting algorithms face two limitations: they overfit the observed graph by failing to represent relationships with entities absent from it, and they are confined to Euclidean embedding spaces.
We propose a novel conformality regularizer that preserves local geometry from the pretrained embeddings, and a new feedforward layer that learns to map pretrained embeddings onto a non-Euclidean manifold.
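The exact loss is not given in the summary; the sketch below captures the conformality idea in hedged form: a conformal map preserves local shape up to scale, so ratios of retrofitted to pretrained distances over nearby pairs should be locally constant (the pair selection and variance penalty here are our simplifications, not necessarily the paper's regularizer):

```python
import torch

def conformality_penalty(x_pre: torch.Tensor, x_new: torch.Tensor,
                         pairs: torch.Tensor) -> torch.Tensor:
    """Conformality-style regularizer: penalize non-uniform local scaling
    between pretrained and retrofitted embeddings. `pairs` is an (M, 2)
    index tensor of nearby points in the pretrained space."""
    d_pre = (x_pre[pairs[:, 0]] - x_pre[pairs[:, 1]]).norm(dim=1)
    d_new = (x_new[pairs[:, 0]] - x_new[pairs[:, 1]]).norm(dim=1)
    ratios = d_new / (d_pre + 1e-8)
    return ((ratios - ratios.mean()) ** 2).mean()
```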
arXiv Detail & Related papers (2020-10-09T23:06:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.