Node Embedding from Neural Hamiltonian Orbits in Graph Neural Networks
- URL: http://arxiv.org/abs/2305.18965v1
- Date: Tue, 30 May 2023 11:53:40 GMT
- Title: Node Embedding from Neural Hamiltonian Orbits in Graph Neural Networks
- Authors: Qiyu Kang and Kai Zhao and Yang Song and Sijie Wang and Wee Peng Tay
- Abstract summary: In this paper, we model the embedding update of a node feature as a Hamiltonian orbit over time.
Our proposed node embedding strategy can automatically learn, without extensive tuning, the underlying geometry of any given graph dataset.
Numerical experiments demonstrate that our approach adapts better to different types of graph datasets than popular state-of-the-art graph node embedding GNNs.
- Score: 33.88288279902204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the graph node embedding problem, embedding spaces can vary significantly
for different data types, leading to the need for different GNN model types. In
this paper, we model the embedding update of a node feature as a Hamiltonian
orbit over time. Since the Hamiltonian orbits generalize the exponential maps,
this approach allows us to learn the underlying manifold of the graph in
training, in contrast to most of the existing literature that assumes a fixed
graph embedding manifold with a closed-form exponential map. Our proposed
node embedding strategy can automatically learn, without extensive tuning, the
underlying geometry of any given graph dataset even if it has diverse
geometries. We test Hamiltonian functions of different forms and verify the
performance of our approach on two graph node embedding downstream tasks: node
classification and link prediction. Numerical experiments demonstrate that our
approach adapts better to different types of graph datasets than popular
state-of-the-art graph node embedding GNNs. The code is available at
\url{https://github.com/zknus/Hamiltonian-GNN}.
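For readers unfamiliar with Hamiltonian node embeddings, the sketch below gives one minimal, illustrative reading of the idea; it is not the authors' implementation (see the repository linked above for that). It assumes a scalar Hamiltonian H(q, p) parameterized by a small MLP, evolves each node's state by explicit Euler steps of Hamilton's equations, and omits the graph message passing the full model performs; the class name, MLP form, and hyperparameters are all hypothetical.

```python
import torch
import torch.nn as nn

class HamiltonianOrbit(nn.Module):
    """Minimal sketch (not the authors' code): evolve node states (q, p)
    along the orbit of a learned Hamiltonian H(q, p)."""

    def __init__(self, dim: int, hidden: int = 64, steps: int = 5, dt: float = 0.1):
        super().__init__()
        # A scalar Hamiltonian parameterized by an MLP. This particular
        # form is an assumption; the paper tests several forms of H.
        self.H = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.steps, self.dt = steps, dt

    def forward(self, q: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        # Detach so the orbit state is a fresh leaf we can differentiate
        # H with respect to (gradients still flow into H's parameters).
        q = q.detach().requires_grad_(True)
        p = p.detach().requires_grad_(True)
        for _ in range(self.steps):
            energy = self.H(torch.cat([q, p], dim=-1)).sum()
            dq, dp = torch.autograd.grad(energy, (q, p), create_graph=True)
            # Hamilton's equations, one explicit Euler step:
            #   dq/dt = +dH/dp,   dp/dt = -dH/dq
            # Graph structure / message passing is omitted for brevity.
            q, p = q + self.dt * dp, p - self.dt * dq
        return q  # evolved node embeddings

# Toy usage: 100 nodes, 16-dimensional features, zero initial "momentum".
q0, p0 = torch.randn(100, 16), torch.zeros(100, 16)
emb = HamiltonianOrbit(dim=16)(q0, p0)
```

A symplectic integrator such as leapfrog would track the orbit more faithfully than the explicit Euler step used for brevity here, since it better preserves the Hamiltonian's energy over long rollouts.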
Related papers
- You do not have to train Graph Neural Networks at all on text-attributed graphs [25.044734252779975]
We introduce TrainlessGNN, a linear GNN model capitalizing on the observation that text encodings from the same class often cluster together in a linear subspace.
Our experiments reveal that our trainless models can match or even surpass their conventionally trained counterparts.
arXiv Detail & Related papers (2024-04-17T02:52:11Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method on various tasks, including node classification on graphs.
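As a rough illustration of the sampling step only (the kernelization that gives NodeFormer its linear-complexity scalability is deliberately omitted, and every name here is hypothetical), differentiable all-pair structure sampling via Gumbel-Softmax can be sketched as:

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_edges(scores: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Illustrative sketch: differentiable edge sampling with Gumbel-Softmax.
    NodeFormer additionally kernelizes this step to avoid materializing the
    N x N score matrix; that trick is not reproduced here.

    scores: [N, N] pairwise logits between all node pairs.
    Returns a row-stochastic, differentiable "soft adjacency" matrix.
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(scores) + 1e-9) + 1e-9)
    return F.softmax((scores + gumbel) / tau, dim=-1)

# Toy usage: propagate features over a sampled all-pair structure.
N, d = 50, 8
x = torch.randn(N, d)
logits = x @ x.T / d ** 0.5          # hypothetical pairwise scores
adj = gumbel_softmax_edges(logits)   # differentiable structure sample
out = adj @ x                        # one round of all-pair message passing
```

As tau approaches zero, the soft samples approach discrete edge choices while remaining differentiable through the reparameterization.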
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Node Embedding from Hamiltonian Information Propagation in Graph Neural Networks [30.42111062496152]
We propose a novel graph information propagation strategy called Hamiltonian Dynamic GNN (HDG), which uses a Hamiltonian mechanics approach to learn node embeddings in a graph.
We demonstrate the ability of HDG to automatically learn the underlying geometry of graph datasets.
arXiv Detail & Related papers (2023-03-02T07:40:40Z)
- SHGNN: Structure-Aware Heterogeneous Graph Neural Network [77.78459918119536]
This paper proposes a novel Structure-Aware Heterogeneous Graph Neural Network (SHGNN) to address limitations of existing heterogeneous GNNs.
We first utilize a feature propagation module to capture the local structure information of intermediate nodes in the meta-path.
Next, we use a tree-attention aggregator to incorporate the graph structure information into the aggregation module on the meta-path.
Finally, we leverage a meta-path aggregator to fuse the information aggregated from different meta-paths.
arXiv Detail & Related papers (2021-12-12T14:18:18Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as directed or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
- Position-based Hash Embeddings For Scaling Graph Neural Networks [8.87527266373087]
Graph Neural Networks (GNNs) compute node representations by taking into account the topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer to compute node embeddings and use them as input features.
To reduce the memory associated with this embedding layer, hashing-based approaches, commonly used in applications like NLP and recommender systems, can potentially be applied.
We present approaches that take advantage of the nodes' position in the graph to dramatically reduce the memory required.
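To make the memory argument concrete, the sketch below shows the generic hashing trick this summary alludes to: k hash functions map each node ID into a small shared table, so memory no longer grows one row per node. It is an assumption-laden illustration (the affine hash and all names are hypothetical), not the paper's position-based scheme.

```python
import torch
import torch.nn as nn

class HashEmbedding(nn.Module):
    """Hedged sketch of hash embeddings: k hash functions map each node ID
    into a small shared table; the retrieved rows are summed. The paper's
    position-based refinement is not shown here."""

    def __init__(self, num_buckets: int, dim: int, k: int = 2, seed: int = 0):
        super().__init__()
        self.table = nn.Embedding(num_buckets, dim)
        g = torch.Generator().manual_seed(seed)
        # Random affine hash parameters (an illustrative choice).
        self.register_buffer("a", torch.randint(1, 2**31 - 1, (k,), generator=g))
        self.register_buffer("b", torch.randint(0, 2**31 - 1, (k,), generator=g))
        self.num_buckets = num_buckets

    def forward(self, node_ids: torch.Tensor) -> torch.Tensor:
        # k bucket indices per node; sum the retrieved table rows.
        idx = (node_ids[:, None] * self.a + self.b) % self.num_buckets
        return self.table(idx).sum(dim=1)

# One million node IDs served by a 10k-row table instead of a 1M-row layer.
emb = HashEmbedding(num_buckets=10_000, dim=32)
vecs = emb(torch.arange(0, 1_000_000, 100_000))
```

Summing k rows trades exactness for memory: collisions blur individual embeddings, but with k > 1 it is unlikely that two nodes collide under every hash.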
arXiv Detail & Related papers (2021-08-31T22:42:25Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
Experiments on various datasets show that our model effectively improves performance for semi-supervised node classification on graphs.
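A hedged sketch of the modeling idea only (not the EPFGNN inference or training procedure; all names are illustrative): unary potentials come from a GNN backbone, while an explicit pairwise factor scores output-output compatibility along edges.

```python
import torch
import torch.nn as nn

def mrf_energy(unary_logits, pairwise, labels, edges):
    """Pairwise MRF energy sketch.
    unary_logits: [N, C] scores from a GNN backbone (stand-in below);
    pairwise: [C, C] learnable output-output compatibility table;
    labels: [N] class assignments; edges: [E, 2] edge list."""
    unary = -unary_logits[torch.arange(labels.numel()), labels].sum()
    pair = -pairwise[labels[edges[:, 0]], labels[edges[:, 1]]].sum()
    return unary + pair  # lower energy = more compatible assignment

# Toy usage with a random stand-in for the GNN backbone's output.
N, C = 6, 3
edges = torch.tensor([[0, 1], [1, 2], [3, 4], [4, 5]])
unary_logits = torch.randn(N, C)
pairwise = nn.Parameter(torch.zeros(C, C))
labels = unary_logits.argmax(dim=-1)
energy = mrf_energy(unary_logits, pairwise, labels, edges)
```

In such a model, learning amounts to fitting the unary GNN and the pairwise table so that observed labelings receive low energy.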
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Graph Entropy Guided Node Embedding Dimension Selection for Graph Neural Networks [74.26734952400925]
We propose a novel Minimum Graph Entropy (MinGE) algorithm for Node Embedding Dimension Selection (NEDS).
MinGE considers both feature entropy and structure entropy on graphs, each carefully designed around the rich information graphs carry.
Experiments with popular Graph Neural Networks (GNNs) on benchmark datasets demonstrate the effectiveness and generalizability of our proposed MinGE.
arXiv Detail & Related papers (2021-05-07T11:40:29Z)
- Permutation-Invariant Variational Autoencoder for Graph-Level Representation Learning [0.0]
We propose a permutation-invariant variational autoencoder for graph structured data.
Our model indirectly learns to match the node ordering of the input and output graphs, without imposing a particular node ordering.
We demonstrate the effectiveness of our proposed model on various graph reconstruction and generation tasks.
arXiv Detail & Related papers (2021-04-20T09:44:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.