Trivial bundle embeddings for learning graph representations
- URL: http://arxiv.org/abs/2112.02531v1
- Date: Sun, 5 Dec 2021 10:26:46 GMT
- Title: Trivial bundle embeddings for learning graph representations
- Authors: Zheng Xie, Xiaojing Zuo, Yiping Song
- Abstract summary: We propose an inductive model that learns node representations for networks with or without node features.
In practice, it reduces errors for link prediction and node classification compared to Euclidean and hyperbolic GCNs.
- Score: 9.070194145842489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Embedding real-world networks presents challenges because it is not
clear how to identify their latent geometries. Embedding some disassortative
networks, such as scale-free networks, into Euclidean space has been shown to
incur distortions. Embedding scale-free networks into hyperbolic spaces offers
an exciting alternative, but it incurs distortions when embedding assortative
networks whose latent geometries are not hyperbolic. We propose an inductive
model that leverages both the expressiveness of GCNs and trivial bundles to
learn inductive node representations for networks with or without node
features. A trivial bundle is a simple case of a fiber bundle: a space that is
globally a product of its base space and fiber. The coordinates of the base
space and those of the fiber can be used to express the assortative and
disassortative factors in generating edges, so the model can learn embeddings
that express both factors. In practice, it reduces errors for link prediction
and node classification compared to Euclidean and hyperbolic GCNs.
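A minimal sketch of the idea, assuming a product-space embedding in which the first coordinates play the role of the base space and the rest the fiber; the score function, its parameters, and the choice of an inner product for the fiber term are illustrative assumptions, not the paper's actual parameterization:

```python
import numpy as np

def edge_score(z_u, z_v, d_base, alpha=1.0, beta=1.0):
    """Score a candidate edge from trivial-bundle embeddings.

    Each embedding lives in a product space B x F: the first d_base
    coordinates are the base-space part (assortative factor: similar
    nodes sit close together), the rest the fiber part (disassortative
    factor, e.g. hub/popularity structure). alpha and beta weight the
    two factors; all names here are hypothetical.
    """
    base_u, fib_u = z_u[:d_base], z_u[d_base:]
    base_v, fib_v = z_v[:d_base], z_v[d_base:]
    assort = -np.linalg.norm(base_u - base_v)  # closeness in the base space
    disassort = fib_u @ fib_v                  # fiber interaction term
    return alpha * assort + beta * disassort

rng = np.random.default_rng(0)
z_u, z_v = rng.normal(size=8), rng.normal(size=8)
print(edge_score(z_u, z_v, d_base=5))
```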
Related papers
- Bundle Neural Networks for message diffusion on graphs [10.018379001231356]
We prove that Bundle Neural Networks (BuNNs) can approximate any feature transformation over nodes on any family of graphs given injective positional encodings, resulting in universal node-level expressivity.
arXiv Detail & Related papers (2024-05-24T13:28:48Z)
- Non-Euclidean Spatial Graph Neural Network [13.569970309961777]
A novel message-passing-based neural network is proposed to combine graph topology and spatial geometry.
We prove that the learned representations are invariant to important symmetries such as rotation and translation.
arXiv Detail & Related papers (2023-12-17T20:21:33Z)
- Alignment and Outer Shell Isotropy for Hyperbolic Graph Contrastive Learning [69.6810940330906]
We propose a novel contrastive learning framework to learn high-quality graph embedding.
Specifically, we design the alignment metric that effectively captures the hierarchical data-invariant information.
We show that in hyperbolic space one has to address the leaf- and height-level uniformity, which are related to properties of trees.
arXiv Detail & Related papers (2023-10-27T15:31:42Z)
- Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning [77.1421343649344]
We propose a generalization of Transformers towards operating entirely on the product of constant curvature spaces.
We also provide a kernelized approach to non-Euclidean attention, which enables our model to run in time and memory linear in the number of nodes and edges.
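For intuition, distances on a product of constant-curvature spaces reduce to combining per-component geodesic distances. The sketch below uses the standard Poincare-ball and spherical distance formulas and an l2 combination; this is an assumption about the general setup, not the paper's exact construction:

```python
import numpy as np

def hyp_dist(x, y, c=1.0):
    # Poincare-ball distance at curvature -c (inputs must have norm < 1/sqrt(c)).
    num = 2 * c * np.linalg.norm(x - y) ** 2
    den = (1 - c * (x @ x)) * (1 - c * (y @ y))
    return np.arccosh(1 + num / den) / np.sqrt(c)

def sph_dist(x, y, c=1.0):
    # Great-circle distance at curvature +c (inputs must lie on the
    # sphere of radius 1/sqrt(c)).
    return np.arccos(np.clip(c * (x @ y), -1.0, 1.0)) / np.sqrt(c)

def product_dist(components):
    # Distance on a product manifold: l2 combination of the geodesic
    # distances of its constant-curvature components.
    return np.sqrt(sum(d(x, y) ** 2 for d, x, y in components))

euc = lambda x, y: np.linalg.norm(x - y)
x_e, y_e = np.zeros(2), np.ones(2)              # Euclidean component
x_h, y_h = np.zeros(2), np.array([0.3, 0.1])    # hyperbolic component
print(product_dist([(euc, x_e, y_e), (hyp_dist, x_h, y_h)]))
```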
arXiv Detail & Related papers (2023-09-08T02:44:37Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
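A rough sketch of how such kernelized attention sidesteps the N x N cost, using Performer-style positive random features and a Gumbel perturbation on the keys; the feature map, the placement of the noise, and all names are assumptions in the spirit of the abstract, not NodeFormer's exact operator:

```python
import numpy as np

def feature_map(x, W):
    # Positive random features approximating an exponential kernel.
    return np.exp(x @ W - (x ** 2).sum(-1, keepdims=True) / 2) / np.sqrt(W.shape[1])

def kernelized_gumbel_attention(Q, K, V, n_features=64, tau=0.5, seed=0):
    """All-pair message passing in O(N) time and memory.

    Gumbel noise on the keys gives a differentiable sample of the
    latent graph structure at temperature tau (a sketch)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(Q.shape[1], n_features))
    g = rng.gumbel(size=K.shape)
    phi_q = feature_map(Q / tau, W)
    phi_k = feature_map((K + g) / tau, W)
    num = phi_q @ (phi_k.T @ V)                   # never forms the N x N matrix
    den = phi_q @ phi_k.sum(0, keepdims=True).T   # per-node normalizer
    return num / den

X = np.random.default_rng(1).normal(size=(100, 16))
out = kernelized_gumbel_attention(X, X, X)        # (100, 16)
```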
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- A Unification Framework for Euclidean and Hyperbolic Graph Neural Networks [8.080621697426997]
Hyperbolic neural networks can effectively capture the inherent hierarchy of graph datasets.
They entangle multiple incongruent (gyro-)vector spaces within a layer, which makes them limited in terms of generalization and scalability.
We propose the Poincare disk model as our search space, and apply all approximations on the disk.
We demonstrate that our model not only leverages the power of Euclidean networks such as interpretability and efficient execution of various model components, but also outperforms both Euclidean and hyperbolic counterparts on various benchmarks.
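For reference, the (gyro-)vector operations such hyperbolic layers are built from look like the following standard Poincare-ball primitives; this is a sketch of the background operations, not the paper's unified formulation:

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    # Mobius addition: the gyrovector analogue of vector addition
    # on the Poincare ball with curvature -c.
    xy, x2, y2 = x @ y, x @ x, y @ y
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def mobius_matvec(M, x, c=1.0):
    # Mobius matrix-vector product: log-map to the tangent space at
    # the origin, apply M, exp-map back (assumes x and Mx nonzero).
    sqrt_c = np.sqrt(c)
    nx, Mx = np.linalg.norm(x), M @ x
    nMx = np.linalg.norm(Mx)
    return np.tanh(nMx / nx * np.arctanh(sqrt_c * nx)) * Mx / (sqrt_c * nMx)
```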
arXiv Detail & Related papers (2022-06-09T05:33:02Z)
- Geometric Graph Representation Learning via Maximizing Rate Reduction [73.6044873825311]
Learning node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
We propose Geometric Graph Representation Learning (G2R) to learn node representations in an unsupervised manner.
G2R maps nodes in distinct groups into different subspaces, while each subspace is compact and different subspaces are dispersed.
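The "compact subspaces, dispersed groups" behavior comes from a coding-rate objective. A minimal sketch assuming the standard maximal-coding-rate-reduction (MCR^2) form that G2R builds on:

```python
import numpy as np

def coding_rate(Z, eps=0.1):
    # R(Z) = 1/2 logdet(I + d/(n eps^2) Z Z^T) for Z of shape (d, n):
    # the number of bits needed to encode Z up to precision eps.
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.1):
    # Maximize: expand the whole representation while compressing
    # each group into its own compact subspace.
    n = Z.shape[1]
    parts = sum((np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
                for c in np.unique(labels))
    return coding_rate(Z, eps) - parts
```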
arXiv Detail & Related papers (2022-02-13T07:46:24Z)
- Nested Hyperbolic Spaces for Dimensionality Reduction and Hyperbolic NN Design [8.250374560598493]
Hyperbolic neural networks have been popular in the recent past due to their ability to represent hierarchical data sets effectively and efficiently.
The challenge in developing these networks lies in the nonlinearity of the embedding space, namely the hyperbolic space.
We present a novel fully hyperbolic neural network which uses the concept of projections (embeddings) followed by an intrinsic aggregation and a nonlinearity all within the hyperbolic space.
arXiv Detail & Related papers (2021-12-03T03:20:27Z)
- Fully Hyperbolic Neural Networks [63.22521652077353]
We propose a fully hyperbolic framework to build hyperbolic networks based on the Lorentz model.
We show that our method yields better performance when building both shallow and deep networks.
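The primitives of the Lorentz (hyperboloid) model that such a fully hyperbolic network operates in can be sketched as follows; these are the basic curvature -1 operations, while the paper's actual layers are more involved:

```python
import numpy as np

def lorentz_inner(x, y):
    # Minkowski inner product: <x, y>_L = -x0*y0 + x1*y1 + ... + xd*yd.
    return -x[0] * y[0] + x[1:] @ y[1:]

def to_hyperboloid(v):
    # Lift a Euclidean vector onto the hyperboloid
    # {x : <x, x>_L = -1, x0 > 0}.
    return np.concatenate(([np.sqrt(1.0 + v @ v)], v))

def lorentz_dist(x, y):
    # Geodesic distance between two points on the hyperboloid.
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))

x, y = to_hyperboloid(np.array([0.1, 0.2])), to_hyperboloid(np.array([1.0, -0.5]))
print(lorentz_dist(x, y))
```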
arXiv Detail & Related papers (2021-05-31T03:36:49Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional function spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network has the desired properties and shows competitive performance compared to state-of-the-art solvers.
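One layer of such a graph kernel network can be sketched as a kernel-weighted message-passing update; the mean aggregation, ReLU choice, and names below are illustrative:

```python
import numpy as np

def gkn_layer(V, edges, edge_feats, W, kappa):
    """v'(x) = relu(W v(x) + mean_{y in N(x)} kappa(e(x, y)) v(y)).

    kappa maps an edge feature to a (d, d) matrix, playing the role of
    the kernel of an integral operator on node functions."""
    agg = np.zeros_like(V)
    deg = np.zeros(V.shape[0])
    for (i, j), e in zip(edges, edge_feats):
        agg[i] += kappa(e) @ V[j]   # kernel-weighted message from y=j to x=i
        deg[i] += 1
    agg /= np.maximum(deg, 1)[:, None]
    return np.maximum(V @ W.T + agg, 0.0)
```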
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.