Simplifying Hypergraph Neural Networks
- URL: http://arxiv.org/abs/2402.05569v3
- Date: Wed, 22 May 2024 11:05:40 GMT
- Title: Simplifying Hypergraph Neural Networks
- Authors: Bohan Tang, Zexi Liu, Keyue Jiang, Siheng Chen, Xiaowen Dong
- Abstract summary: Hypergraph neural networks (HNNs) effectively utilise hypergraph structures by message passing to generate node features.
We propose an alternative approach by decoupling the usage of the hypergraph structural information from the model training stage.
The proposed model, simplified hypergraph neural network (SHNN), contains a training-free message-passing block that can be precomputed before the training of SHNN.
- Score: 35.35391968349657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hypergraphs are crucial for modeling higher-order interactions in real-world data. Hypergraph neural networks (HNNs) effectively utilise these structures by message passing to generate informative node features for various downstream tasks like node classification. However, the message passing block in existing HNNs typically requires a computationally intensive training process, which limits their practical use. To tackle this challenge, we propose an alternative approach by decoupling the usage of the hypergraph structural information from the model training stage. The proposed model, simplified hypergraph neural network (SHNN), contains a training-free message-passing block that can be precomputed before the training of SHNN, thereby reducing the computational burden. We theoretically support the efficiency and effectiveness of SHNN by showing that: 1) it is more training-efficient than existing HNNs; 2) it utilises as much information as existing HNNs for node feature generation; and 3) it is robust against the oversmoothing issue when using long-range interactions. Experiments on six real-world hypergraph benchmarks in node classification and hyperlink prediction show that, compared to state-of-the-art HNNs, SHNN achieves both competitive performance and superior training efficiency. Specifically, on Cora-CA, SHNN achieves the highest node classification accuracy with just 2% of the training time of the best baseline.
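The decoupling here follows an SGC-style recipe: run the hypergraph message passing once as a fixed preprocessing step, then train only a simple classifier on the smoothed features. A minimal sketch of this pattern, assuming a symmetrically normalized clique-expansion propagation operator (the exact operator SHNN uses may differ):

```python
import numpy as np

def precompute_features(X, H, num_hops=2):
    """Training-free hypergraph message passing, run once before training.

    X: (n_nodes, d) node feature matrix
    H: (n_nodes, n_edges) binary incidence matrix
    Sketch assumption: propagation P = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2}
    (clique-expansion style); SHNN's exact operator may differ.
    """
    d_v = np.maximum(H.sum(axis=1), 1.0)   # node degrees
    d_e = np.maximum(H.sum(axis=0), 1.0)   # hyperedge degrees
    Hn = H / np.sqrt(d_v)[:, None]         # Dv^{-1/2} H
    P = Hn @ (Hn / d_e[None, :]).T         # Dv^{-1/2} H De^{-1} H^T Dv^{-1/2}
    for _ in range(num_hops):
        X = P @ X                          # fixed smoothing, no learnable weights
    return X
```

Only a simple model (e.g. an MLP) is then trained on the returned features, so the hypergraph never enters the training loop.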
Related papers
- Training Graph Neural Networks Using Non-Robust Samples [2.1937382384136637]
Graph Neural Networks (GNNs) are highly effective neural networks for processing graph-structured data.
GNNs leverage both the graph structure, which represents the relationships between data points, and the feature matrix of the data to optimize their feature representation.
This paper proposes a novel method for selecting noise-sensitive training samples from the original training set to construct a smaller yet more effective training set for model training.
arXiv Detail & Related papers (2024-12-19T11:10:48Z)
- Molecular Hypergraph Neural Networks [1.4559839293730863]
Graph neural networks (GNNs) have demonstrated promising performance across various chemistry-related tasks.
We introduce molecular hypergraphs and propose Molecular Hypergraph Neural Networks (MHNN) to predict the optoelectronic properties of organic semiconductors.
MHNN outperforms all baseline models on most tasks of OPV, OCELOTv1 and PCQM4Mv2 datasets.
arXiv Detail & Related papers (2023-12-20T15:56:40Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- From Hypergraph Energy Functions to Hypergraph Neural Networks [94.88564151540459]
We present an expressive family of parameterized, hypergraph-regularized energy functions.
We then demonstrate how minimizers of these energies effectively serve as node embeddings.
We draw parallels between the proposed bilevel hypergraph optimization and existing GNN architectures in common use.
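To make the bilevel pattern concrete, here is a hedged sketch using a simple quadratic, hypergraph-regularized energy (the paper's family of energies is more expressive); gradient descent on it yields a diffusion-style update, which is why its minimizers behave like message-passing embeddings:

```python
import numpy as np

def embeddings_from_energy(F, L, lam=1.0, lr=0.1, steps=200):
    """Node embeddings as minimizers of an illustrative energy
        E(Y) = ||Y - F||_F^2 + lam * tr(Y^T L Y),
    where F is a base feature map (e.g. an MLP of the inputs) and
    L is a hypergraph Laplacian. Assumption: the paper's parameterized
    energies are more general than this quadratic.
    """
    Y = F.copy()
    for _ in range(steps):
        grad = 2.0 * (Y - F) + 2.0 * lam * (L @ Y)  # dE/dY
        Y -= lr * grad                              # diffusion-like update
    return Y
```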
arXiv Detail & Related papers (2023-06-16T04:40:59Z)
- Tensorized Hypergraph Neural Networks [69.65385474777031]
We propose a novel adjacency-tensor-based Tensorized Hypergraph Neural Network (THNN).
THNN is a faithful hypergraph modeling framework built on high-order outer-product feature message passing.
Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the model's promising performance.
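As an illustration of high-order outer-product message passing on a 3-uniform hyperedge (the weight tensor and aggregation below are assumptions, not THNN's exact parameterization):

```python
import torch

d_in, d_out = 8, 16
W = torch.randn(d_out, d_in, d_in)  # illustrative third-order weight tensor

def outer_product_message(x_j, x_k, W):
    """Message to node i from a 3-uniform hyperedge {i, j, k}:
    contract W with the outer product x_j (x) x_k."""
    return torch.einsum('oab,a,b->o', W, x_j, x_k)

x_j, x_k = torch.randn(d_in), torch.randn(d_in)
m_i = outer_product_message(x_j, x_k, W)  # shape (d_out,)
```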
arXiv Detail & Related papers (2023-06-05T03:26:06Z)
- Decouple Graph Neural Networks: Train Multiple Simple GNNs Simultaneously Instead of One [60.5818387068983]
Graph neural networks (GNNs) suffer from severe inefficiency.
We propose to decouple a multi-layer GNN as multiple simple modules for more efficient training.
We show that the proposed framework is highly efficient with reasonable performance.
arXiv Detail & Related papers (2023-04-20T07:21:32Z)
- Online Cross-Layer Knowledge Distillation on Graph Neural Networks with Deep Supervision [6.8080936803807734]
Graph neural networks (GNNs) have become one of the most popular research topics in both academia and industry.
Large-scale datasets are posing great challenges for deploying GNNs in edge devices with limited resources.
We propose a novel online knowledge distillation framework called Alignahead++ in this paper.
arXiv Detail & Related papers (2022-10-25T03:21:20Z)
- Equivariant Hypergraph Diffusion Neural Operators [81.32770440890303]
Hypergraph neural networks (HNNs), which use neural networks to encode hypergraphs, provide a promising way to model higher-order relations in data.
This work proposes a new HNN architecture named ED-HNN, which provably represents any continuous equivariant hypergraph diffusion operators.
We evaluate ED-HNN for node classification on nine real-world hypergraph datasets.
arXiv Detail & Related papers (2022-07-14T06:17:00Z)
- Strengthening the Training of Convolutional Neural Networks By Using Walsh Matrix [0.0]
We have modified the training and structure of the DNN to increase its classification performance.
A minimum distance network (MDN) following the last layer of the convolutional neural network (CNN) is used as the classifier.
In different areas, it has been observed that higher classification performance was obtained by using the DivFE with a smaller number of nodes.
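A plausible reading of the minimum distance network is a nearest-class-centroid rule applied to the CNN's last-layer features; a sketch under that assumption (the paper's MDN and the DivFE details may differ):

```python
import numpy as np

def fit_centroids(feats, labels, num_classes):
    """Mean last-layer CNN feature vector per class."""
    return np.stack([feats[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def mdn_predict(feats, centroids):
    """Assign each sample to the class whose centroid is closest."""
    dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```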
arXiv Detail & Related papers (2021-03-31T18:06:11Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
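A common ingredient such binarization strategies build on is sign quantization with a straight-through gradient estimator; a minimal sketch of that generic technique (not the paper's specific design):

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign quantization with a straight-through estimator (STE)."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Pass gradients through where |x| <= 1, zero them elsewhere.
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

def binary_linear(x, weight):
    """Linear layer with binarized activations and weights."""
    return BinarizeSTE.apply(x) @ BinarizeSTE.apply(weight).t()
```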
arXiv Detail & Related papers (2020-12-31T18:48:58Z)