Hypergraph Node Representation Learning with One-Stage Message Passing
- URL: http://arxiv.org/abs/2312.00336v1
- Date: Fri, 1 Dec 2023 04:10:00 GMT
- Title: Hypergraph Node Representation Learning with One-Stage Message Passing
- Authors: Shilin Qu, Weiqing Wang, Yuan-Fang Li, Xin Zhou, Fajie Yuan
- Abstract summary: We propose a novel one-stage message passing paradigm to model both global and local information propagation for hypergraphs.
We integrate this paradigm into HGraphormer, a Transformer-based framework for hypergraph node representation learning.
- Score: 28.311325846217574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hypergraphs as an expressive and general structure have attracted
considerable attention from various research domains. Most existing hypergraph
node representation learning techniques are based on graph neural networks, and
thus adopt the two-stage message passing paradigm (i.e. node -> hyperedge ->
node). This paradigm focuses only on local information propagation and does not
effectively account for global information, resulting in suboptimal
representations. Our theoretical analysis of representative two-stage message
passing methods shows that, mathematically, they model different ways of local
message passing through hyperedges, and can be unified into one-stage message
passing (i.e. node -> node). However, they still only model local information.
Motivated by this theoretical analysis, we propose a novel one-stage message
passing paradigm to model both global and local information propagation for
hypergraphs. We integrate this paradigm into HGraphormer, a Transformer-based
framework for hypergraph node representation learning. HGraphormer injects the
hypergraph structure information (local information) into Transformers (global
information) by combining the attention matrix and hypergraph Laplacian.
Extensive experiments demonstrate that HGraphormer outperforms recent
hypergraph learning methods on five representative benchmark datasets on the
semi-supervised hypernode classification task, setting a new state of the art
with accuracy improvements between 2.52% and 6.70%. Our code and
datasets are available.
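To make the paradigms in the abstract concrete, here is a minimal NumPy sketch (the toy incidence matrix, all variable names, and the mixing weight alpha are illustrative assumptions, not the paper's released code) of how two-stage propagation through hyperedges collapses into a single node-to-node operator, and how a dense attention matrix could be blended with that local operator in the spirit of HGraphormer:

```python
import numpy as np

# Toy hypergraph: 4 nodes, 2 hyperedges.
# H[v, e] = 1 iff node v belongs to hyperedge e (incidence matrix).
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 0]], dtype=float)
X = np.random.randn(4, 8)                # node features (4 nodes, 8 dims)

Dv = np.diag(H.sum(axis=1))              # node degree matrix
De = np.diag(H.sum(axis=0))              # hyperedge degree matrix

# Two-stage message passing (node -> hyperedge -> node):
# aggregate member nodes into each hyperedge, then scatter back to nodes.
edge_emb = np.linalg.inv(De) @ H.T @ X           # stage 1: hyperedge states
X_two_stage = np.linalg.inv(Dv) @ H @ edge_emb   # stage 2: back to nodes

# The same computation written as one-stage (node -> node) propagation:
# P is a purely node-to-node operator, so the two stages collapse into one
# local smoothing step -- the unification the abstract describes.
P = np.linalg.inv(Dv) @ H @ np.linalg.inv(De) @ H.T
X_one_stage = P @ X
assert np.allclose(X_two_stage, X_one_stage)

# HGraphormer-style combination (a sketch, not the paper's exact formula):
# blend dense self-attention (global) with the hypergraph operator (local).
def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

A = softmax(X @ X.T / np.sqrt(X.shape[1]))   # toy attention matrix
alpha = 0.5                                  # mixing weight (assumed)
X_out = (alpha * A + (1 - alpha) * P) @ X    # global + local propagation
```

The assert confirms the unification claim numerically: by associativity of matrix multiplication the two formulations are identical, so the substantive difference between paradigms is whether a global (dense) term participates in propagation.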
Related papers
- Hypergraph Transformer for Semi-Supervised Classification [50.92027313775934]
We propose a novel hypergraph learning framework, HyperGraph Transformer (HyperGT).
HyperGT uses a Transformer-based neural network architecture to effectively consider global correlations among all nodes and hyperedges.
It achieves comprehensive hypergraph representation learning by effectively incorporating global interactions while preserving local connectivity patterns.
arXiv Detail & Related papers (2023-12-18T17:50:52Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and 16.8% performance gain on ogbn-products and snap-patents, while we also scale LargeGT on ogbn-100M with a 5.9% performance improvement.
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- From Hypergraph Energy Functions to Hypergraph Neural Networks [94.88564151540459]
We present an expressive family of parameterized, hypergraph-regularized energy functions.
We then demonstrate how minimizers of these energies effectively serve as node embeddings.
We draw parallels between the proposed bilevel hypergraph optimization and existing GNN architectures in common use.
arXiv Detail & Related papers (2023-06-16T04:40:59Z)
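As a concrete reading of the energy-minimization view above, this sketch (the specific quadratic energy, the normalized Laplacian, and all hyperparameters are assumptions for illustration, not the paper's parameterized family) minimizes a hypergraph-regularized energy by gradient descent and checks the result against the closed-form minimizer, which then serves as the node embeddings:

```python
import numpy as np

# Toy incidence matrix and a normalized hypergraph Laplacian
# L = I - Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} (Zhou et al. style).
H = np.array([[1, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
De_inv = np.diag(1.0 / H.sum(axis=0))
Dv_isqrt = np.diag(1.0 / np.sqrt(H.sum(axis=1)))
L = np.eye(4) - Dv_isqrt @ H @ De_inv @ H.T @ Dv_isqrt

X = np.random.randn(4, 8)    # input node features
lam, lr = 1.0, 0.1           # regularization weight and step size (assumed)

# Energy E(Y) = ||Y - X||_F^2 + lam * tr(Y^T L Y): fidelity to the input
# features plus smoothness over the hypergraph. Its minimizer serves as
# the node embeddings.
Y = X.copy()
for _ in range(300):
    grad = 2.0 * (Y - X) + 2.0 * lam * (L @ Y)   # gradient of E w.r.t. Y
    Y -= lr * grad

# The quadratic energy also has a closed-form minimizer for comparison:
# (I + lam * L) Y* = X.
Y_closed = np.linalg.solve(np.eye(4) + lam * L, X)
assert np.allclose(Y, Y_closed, atol=1e-5)
```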
- DualHGNN: A Dual Hypergraph Neural Network for Semi-Supervised Node Classification based on Multi-View Learning and Density Awareness [3.698434507617248]
Graph-based semi-supervised node classification has been shown to be a state-of-the-art approach in many applications of high research value and significance.
This paper proposes the Dual Hypergraph Neural Network (DualHGNN), a new dual connection model integrating both hypergraph structure learning and hypergraph representation learning simultaneously in a unified architecture.
arXiv Detail & Related papers (2023-06-07T07:40:04Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Message Passing Neural Networks for Hypergraphs [6.999112784624749]
We present the first message-passing graph neural network capable of processing hypergraph-structured data.
We show that the proposed model defines a design space for neural network models for hypergraphs, thus generalizing existing models for hypergraphs.
arXiv Detail & Related papers (2022-03-31T12:38:22Z)
- Hypergraph Convolutional Networks via Equivalency between Hypergraphs and Undirected Graphs [59.71134113268709]
We present General Hypergraph Spectral Convolution (GHSC), a general learning framework that can handle EDVW and EIVW hypergraphs.
Experiments from various domains, including social network analysis, visual object classification, and protein learning, demonstrate that the proposed framework achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-03-31T10:46:47Z)
- Adaptive Neural Message Passing for Inductive Learning on Hypergraphs [21.606287447052757]
We present HyperMSG, a novel hypergraph learning framework.
It adapts to the data and task by learning an attention weight associated with each node's degree centrality.
It is robust and outperforms state-of-the-art hypergraph learning methods on a wide range of tasks and datasets.
arXiv Detail & Related papers (2021-09-22T12:24:02Z)
- HyperSAGE: Generalizing Inductive Representation Learning on Hypergraphs [24.737560790401314]
We present HyperSAGE, a novel hypergraph learning framework that uses a two-level neural message passing strategy to accurately and efficiently propagate information through hypergraphs.
We show that HyperSAGE outperforms state-of-the-art hypergraph learning methods on representative benchmark datasets.
arXiv Detail & Related papers (2020-10-09T13:28:06Z)
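For intuition about the two-level message passing strategy described in the HyperSAGE entry above, here is a minimal inductive-style sketch; the power-mean aggregator and all names are illustrative assumptions rather than the paper's exact operators:

```python
import numpy as np

# Toy hypergraph as explicit node-index sets (hypothetical data).
hyperedges = [[0, 1, 3], [1, 2]]
X = np.abs(np.random.randn(4, 8))   # nonnegative features keep the
                                    # power mean below well defined

def power_mean(M, p, axis=0):
    """Generalized (power) mean; p=1 is the arithmetic mean."""
    return np.mean(M ** p, axis=axis) ** (1.0 / p)

p = 2.0  # aggregation exponent (an illustrative choice)

# Level 1: node -> hyperedge. Each hyperedge pools its member nodes.
edge_msgs = np.stack([power_mean(X[e], p) for e in hyperedges])

# Level 2: hyperedge -> node. Each node pools the messages of the
# hyperedges it belongs to, yielding its updated representation.
X_new = np.stack([
    power_mean(
        edge_msgs[[i for i, e in enumerate(hyperedges) if v in e]], p)
    for v in range(4)
])
```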
This list is automatically generated from the titles and abstracts of the papers on this site.