Co-Representation Neural Hypergraph Diffusion for Edge-Dependent Node Classification
- URL: http://arxiv.org/abs/2405.14286v1
- Date: Thu, 23 May 2024 08:01:25 GMT
- Title: Co-Representation Neural Hypergraph Diffusion for Edge-Dependent Node Classification
- Authors: Yijia Zheng, Marcel Worring
- Abstract summary: In ENC, a node can have different labels across different hyperedges, which requires the modeling of node-hyperedge pairs instead of single nodes or hyperedges.
Existing solutions for this task are based on message passing and model within-edge and within-node interactions as multi-input single-output functions.
We develop CoNHD, a new solution based on hypergraph diffusion.
- Score: 14.548140012950187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hypergraphs are widely employed to represent complex higher-order relationships in real-world applications. Most hypergraph learning research focuses on node- or edge-level tasks. A practically relevant but more challenging task, edge-dependent node classification (ENC), is only recently proposed. In ENC, a node can have different labels across different hyperedges, which requires the modeling of node-hyperedge pairs instead of single nodes or hyperedges. Existing solutions for this task are based on message passing and model within-edge and within-node interactions as multi-input single-output functions. This brings three limitations: (1) non-adaptive representation size, (2) node/edge agnostic messages, and (3) insufficient interactions among nodes or hyperedges. To tackle these limitations, we develop CoNHD, a new solution based on hypergraph diffusion. Specifically, we first extend hypergraph diffusion using node-hyperedge co-representations. This extension explicitly models both within-edge and within-node interactions as multi-input multi-output functions using two equivariant diffusion operators. To avoid handcrafted regularization functions, we propose a neural implementation for the co-representation hypergraph diffusion process. Extensive experiments demonstrate the effectiveness and efficiency of the proposed CoNHD model.
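A minimal sketch of the co-representation diffusion idea described in the abstract is given below. It assumes one co-representation per (node, hyperedge) incidence pair and uses simple Deep-Sets-style equivariant updates in PyTorch; the module and function names are placeholders, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of co-representation hypergraph diffusion.
# Assumption: one co-representation per (node, hyperedge) incidence pair, updated by
# Deep-Sets-style equivariant operators for the within-edge and within-node steps.
import torch
import torch.nn as nn


class EquivariantDiffusionOp(nn.Module):
    """Multi-input multi-output update: every member of a group is updated from its
    own state plus a permutation-invariant aggregate over the whole group."""

    def __init__(self, dim):
        super().__init__()
        self.self_map = nn.Linear(dim, dim)
        self.agg_map = nn.Linear(dim, dim)

    def forward(self, x, group_index, num_groups):
        # x: (num_pairs, dim) co-representations; group_index: (num_pairs,) group id per pair
        mean = torch.zeros(num_groups, x.size(1), device=x.device)
        count = torch.zeros(num_groups, 1, device=x.device)
        mean.index_add_(0, group_index, x)
        count.index_add_(0, group_index, torch.ones_like(x[:, :1]))
        mean = mean / count.clamp(min=1.0)
        return torch.relu(self.self_map(x) + self.agg_map(mean[group_index]))


class CoRepDiffusionLayer(nn.Module):
    """One diffusion step: a within-edge update followed by a within-node update."""

    def __init__(self, dim):
        super().__init__()
        self.within_edge = EquivariantDiffusionOp(dim)
        self.within_node = EquivariantDiffusionOp(dim)

    def forward(self, x, node_index, edge_index, num_nodes, num_edges):
        x = self.within_edge(x, edge_index, num_edges)  # interactions inside each hyperedge
        x = self.within_node(x, node_index, num_nodes)  # interactions inside each node's incidences
        return x


# Toy usage: 4 nodes, 2 hyperedges, 5 (node, hyperedge) incidence pairs.
node_index = torch.tensor([0, 1, 2, 2, 3])
edge_index = torch.tensor([0, 0, 0, 1, 1])
x = torch.randn(5, 16)                      # initial co-representations
layer = CoRepDiffusionLayer(16)
out = layer(x, node_index, edge_index, num_nodes=4, num_edges=2)
print(out.shape)  # torch.Size([5, 16]) -> one vector per node-hyperedge pair
```

Because each operator outputs one updated vector per incidence pair, both the within-edge and within-node steps are multi-input multi-output, which is the property the abstract contrasts with message-passing baselines.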
Related papers
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
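For illustration, the sketch below shows dense all-pair propagation with Gumbel-Softmax edge sampling, which is the idea the kernelized operator approximates. It omits the random-feature kernel trick that gives NodeFormer its scalability, so it runs in O(N^2), and all names are placeholders.

```python
# Illustrative sketch (not NodeFormer itself): dense all-pair message passing where
# latent edges are sampled with Gumbel-Softmax. NodeFormer additionally kernelizes
# this softmax with random features to avoid the O(N^2) cost shown here.
import torch
import torch.nn.functional as F

def gumbel_softmax_propagate(x, w_key, w_query, tau=0.5):
    # x: (N, d) node features; w_key / w_query: (d, d) projection matrices
    scores = (x @ w_query) @ (x @ w_key).T / x.size(1) ** 0.5   # (N, N) pair logits
    gumbel = -torch.log(-torch.log(torch.rand_like(scores) + 1e-9) + 1e-9)
    weights = F.softmax((scores + gumbel) / tau, dim=-1)         # soft sampled neighbors
    return weights @ x                                           # propagate over sampled edges

x = torch.randn(6, 8)
w_k, w_q = torch.randn(8, 8), torch.randn(8, 8)
print(gumbel_softmax_propagate(x, w_k, w_q).shape)  # torch.Size([6, 8])
```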
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Classification of Edge-dependent Labels of Nodes in Hypergraphs [17.454063924648896]
We introduce a classification of edge-dependent node labels as a new problem.
This problem can be used as a benchmark task for hypergraph neural networks.
We propose WHATsNet, a novel hypergraph neural network that represents the same node differently depending on the hyperedges it participates in.
arXiv Detail & Related papers (2023-06-05T16:50:34Z) - Tensorized Hypergraph Neural Networks [69.65385474777031]
We propose a novel adjacency-tensor-based Tensorized Hypergraph Neural Network (THNN).
THNN is a faithful hypergraph modeling framework based on high-order outer-product feature message passing.
Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the model's promising performance.
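A rough sketch of outer-product-style message passing for 3-uniform hyperedges (hyperedges of size 3) is shown below; THNN itself operates on the adjacency tensor with a low-rank decomposition, and the single bilinear map here is only a stand-in.

```python
# Rough sketch of high-order outer-product message passing for size-3 hyperedges.
# A bilinear map contracts the outer product of the two partner features; THNN builds
# the analogous operation from the adjacency tensor with a low-rank decomposition.
import torch
import torch.nn as nn

class OuterProductMessage(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, dim)  # contracts the outer product x_j (x) x_k

    def forward(self, x, triplets):
        # x: (N, d) node features; triplets: (E, 3) node indices of each size-3 hyperedge
        out = torch.zeros_like(x)
        for i, j, k in triplets.tolist():
            out[i] += self.bilinear(x[j].unsqueeze(0), x[k].unsqueeze(0)).squeeze(0)
            out[j] += self.bilinear(x[i].unsqueeze(0), x[k].unsqueeze(0)).squeeze(0)
            out[k] += self.bilinear(x[i].unsqueeze(0), x[j].unsqueeze(0)).squeeze(0)
        return out

x = torch.randn(5, 4)
triplets = torch.tensor([[0, 1, 2], [2, 3, 4]])
print(OuterProductMessage(4)(x, triplets).shape)  # torch.Size([5, 4])
```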
arXiv Detail & Related papers (2023-06-05T03:26:06Z) - Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE).
We first introduce an edge-splitting technique that specifies the use of each edge, so that each edge serves solely as either topology or supervision.
To emphasize the differences between node pairs connected by supervision edges and unconnected pairs, we further weight the messages to highlight those that reflect these differences.
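The edge-splitting step can be pictured with the small sketch below; the split ratio is an illustrative placeholder rather than the paper's actual setting, and the message-weighting step is omitted.

```python
# Minimal sketch of the edge-splitting idea: each observed edge is used either as
# topology (for message passing) or as supervision (a positive training pair), never both.
import random

def split_edges(edges, supervision_ratio=0.3, seed=0):
    rng = random.Random(seed)
    shuffled = edges[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * supervision_ratio)
    supervision_edges = shuffled[:cut]   # positives for the edge-prediction loss
    topology_edges = shuffled[cut:]      # used only to build the message-passing graph
    return topology_edges, supervision_edges

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
topo, sup = split_edges(edges)
print(len(topo), len(sup))  # 4 1
```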
arXiv Detail & Related papers (2022-12-25T23:19:56Z) - From Node Interaction to Hop Interaction: New Effective and Scalable Graph Learning Paradigm [25.959580336262004]
We propose a novel hop interaction paradigm to address these limitations simultaneously.
The core idea is to convert the interaction target among nodes to pre-processed multi-hop features inside each node.
We conduct extensive experiments on 12 benchmark datasets in a wide range of domains, scales, and smoothness of graphs.
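A minimal sketch of this idea, assuming SIGN-style preprocessing with a normalized adjacency matrix and a placeholder MLP for the per-node hop interactions:

```python
# Sketch of the hop-interaction idea: multi-hop features are precomputed once (no message
# passing at training time), and a small network models the interaction among each node's
# own hop features. The mixing MLP is a placeholder.
import torch
import torch.nn as nn

def precompute_hop_features(adj, x, num_hops):
    # adj: (N, N) normalized adjacency; x: (N, d) node features
    hops, h = [x], x
    for _ in range(num_hops):
        h = adj @ h          # one more hop of propagation, done as preprocessing
        hops.append(h)
    return torch.stack(hops, dim=1)   # (N, num_hops + 1, d)

class HopMixer(nn.Module):
    def __init__(self, num_hops, dim, num_classes):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear((num_hops + 1) * dim, dim), nn.ReLU(), nn.Linear(dim, num_classes)
        )

    def forward(self, hop_feats):
        # interactions among hops happen inside each node, not across nodes
        return self.mlp(hop_feats.flatten(start_dim=1))

adj = torch.rand(6, 6); adj = adj / adj.sum(dim=1, keepdim=True)
x = torch.randn(6, 8)
feats = precompute_hop_features(adj, x, num_hops=3)
print(HopMixer(3, 8, 4)(feats).shape)  # torch.Size([6, 4])
```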
arXiv Detail & Related papers (2022-11-21T11:29:48Z) - Equivariant Hypergraph Diffusion Neural Operators [81.32770440890303]
Hypergraph neural networks (HNNs), which use neural networks to encode hypergraphs, provide a promising way to model higher-order relations in data.
This work proposes a new HNN architecture named ED-HNN, which can provably represent any continuous equivariant hypergraph diffusion operator.
We evaluate ED-HNN for node classification on nine real-world hypergraph datasets.
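For contrast with the co-representation sketch above, the snippet below shows the underlying node-to-hyperedge-to-node diffusion pattern with one state per node; the mean aggregations and linear maps are placeholders for ED-HNN's learned equivariant operators.

```python
# Minimal sketch of the diffusion pattern behind equivariant HNN operators: node states
# are aggregated into hyperedge states and diffused back, with linear maps standing in
# for the learned equivariant operators.
import torch
import torch.nn as nn

def scatter_mean(src, index, num_groups):
    out = torch.zeros(num_groups, src.size(1))
    cnt = torch.zeros(num_groups, 1)
    out.index_add_(0, index, src)
    cnt.index_add_(0, index, torch.ones_like(src[:, :1]))
    return out / cnt.clamp(min=1.0)

class DiffusionStep(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.node_to_edge = nn.Linear(dim, dim)
        self.edge_to_node = nn.Linear(dim, dim)

    def forward(self, x, node_idx, edge_idx, num_nodes, num_edges):
        # node_idx[i], edge_idx[i] give the i-th (node, hyperedge) incidence
        e = scatter_mean(self.node_to_edge(x[node_idx]), edge_idx, num_edges)  # node -> edge
        m = scatter_mean(self.edge_to_node(e[edge_idx]), node_idx, num_nodes)  # edge -> node
        return torch.relu(x + m)   # residual diffusion update on node states

x = torch.randn(4, 8)
node_idx = torch.tensor([0, 1, 2, 2, 3])
edge_idx = torch.tensor([0, 0, 0, 1, 1])
print(DiffusionStep(8)(x, node_idx, edge_idx, 4, 2).shape)  # torch.Size([4, 8])
```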
arXiv Detail & Related papers (2022-07-14T06:17:00Z) - Multiplex Bipartite Network Embedding using Dual Hypergraph Convolutional Networks [16.62391694987056]
We develop an unsupervised Dual HyperGraph Convolutional Network (DualHGCN) model that scalably transforms the multiplex bipartite network into two sets of homogeneous hypergraphs.
We benchmark DualHGCN using four real-world datasets on link prediction and node classification tasks.
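The bipartite-to-hypergraph transformation can be sketched as below, assuming each node on one side induces a hyperedge over its neighbors on the other side within every layer of the multiplex network; the layer names in the example are made up.

```python
# Sketch of the bipartite-to-hypergraph transformation: in each layer of the multiplex
# bipartite network, every node on one side induces a hyperedge over its neighbors on the
# other side, yielding two homogeneous hypergraphs.
from collections import defaultdict

def bipartite_to_hypergraphs(edges_per_layer):
    # edges_per_layer: {layer_name: [(u, v), ...]} with u in U, v in V
    u_hyperedges, v_hyperedges = {}, {}
    for layer, edges in edges_per_layer.items():
        by_u, by_v = defaultdict(set), defaultdict(set)
        for u, v in edges:
            by_u[u].add(v)   # u induces a hyperedge over V
            by_v[v].add(u)   # v induces a hyperedge over U
        v_hyperedges[layer] = [sorted(members) for members in by_u.values()]
        u_hyperedges[layer] = [sorted(members) for members in by_v.values()]
    return u_hyperedges, v_hyperedges   # hypergraph over U, hypergraph over V

layers = {"click": [("u1", "v1"), ("u1", "v2"), ("u2", "v2")],
          "buy":   [("u2", "v1")]}
hg_u, hg_v = bipartite_to_hypergraphs(layers)
print(hg_v["click"])  # [['v1', 'v2'], ['v2']]
```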
arXiv Detail & Related papers (2021-02-12T07:20:36Z) - NCGNN: Node-level Capsule Graph Neural Network [45.23653314235767]
Node-level Capsule Graph Neural Network (NCGNN) represents nodes as groups of capsules.
A novel dynamic routing procedure is developed to adaptively select appropriate capsules for aggregation.
NCGNN can well address the over-smoothing issue and outperforms the state of the art by producing better node embeddings for classification.
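A generic routing-by-agreement sketch (in the style of Sabour et al.) is given below as a stand-in for the routing step; NCGNN's capsule construction and graph-specific details are not reproduced.

```python
# Illustrative dynamic-routing sketch (routing-by-agreement), standing in for NCGNN's
# adaptive capsule selection; the per-node capsule construction is omitted.
import torch
import torch.nn.functional as F

def squash(v, dim=-1, eps=1e-8):
    norm = v.norm(dim=dim, keepdim=True)
    return (norm ** 2 / (1 + norm ** 2)) * v / (norm + eps)

def dynamic_routing(child_caps, num_iters=3):
    # child_caps: (num_children, dim) candidate capsules from a node's neighborhood
    logits = torch.zeros(child_caps.size(0))        # routing logits b_i
    for _ in range(num_iters):
        coupling = F.softmax(logits, dim=0)         # how much each child contributes
        parent = squash((coupling.unsqueeze(1) * child_caps).sum(dim=0))
        logits = logits + child_caps @ parent       # raise logits of agreeing children
    return parent

caps = torch.randn(5, 16)
print(dynamic_routing(caps).shape)  # torch.Size([16])
```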
arXiv Detail & Related papers (2020-12-07T06:46:17Z) - Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs as a combined learning process.
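The per-node aggregation choice can be pictured with the sketch below, where an untrained placeholder policy picks how many hops of mean aggregation each node receives; the reinforcement-learning loop that Policy-GNN actually uses to train this policy is omitted.

```python
# Rough sketch of a per-node aggregation policy: a small (untrained) policy network picks
# how many hops of aggregation each node receives. The RL training loop is omitted.
import torch
import torch.nn as nn

class PerNodeHopSelector(nn.Module):
    def __init__(self, dim, max_hops=3):
        super().__init__()
        self.policy = nn.Linear(dim, max_hops)  # scores for choosing 1..max_hops hops
        self.max_hops = max_hops

    def forward(self, adj, x):
        # Precompute representations after 1..max_hops rounds of mean aggregation.
        reps, h = [], x
        for _ in range(self.max_hops):
            h = adj @ h
            reps.append(h)
        reps = torch.stack(reps, dim=1)                 # (N, max_hops, d)
        choice = self.policy(x).argmax(dim=1)           # per-node hop index
        return reps[torch.arange(x.size(0)), choice]    # pick each node's chosen representation

adj = torch.rand(5, 5); adj = adj / adj.sum(dim=1, keepdim=True)
x = torch.randn(5, 8)
print(PerNodeHopSelector(8)(adj, x).shape)  # torch.Size([5, 8])
```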
arXiv Detail & Related papers (2020-06-26T17:03:06Z) - Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
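A sketch of the group-wise normalization idea, assuming soft group assignments from a linear layer and per-group standardization; scaling, bias, and residual details may differ from DGN's exact formulation.

```python
# Sketch of group-wise normalization of node embeddings: nodes are softly assigned to
# groups, each group is normalized separately, and the results are recombined with the
# original embeddings. Details of scaling and residuals are placeholders.
import torch
import torch.nn as nn

class GroupNormalize(nn.Module):
    def __init__(self, dim, num_groups, eps=1e-5):
        super().__init__()
        self.assign = nn.Linear(dim, num_groups)
        self.eps = eps

    def forward(self, x):
        s = torch.softmax(self.assign(x), dim=1)            # (N, G) soft group assignment
        out = torch.zeros_like(x)
        for g in range(s.size(1)):
            w = s[:, g:g + 1]                                # membership weights of group g
            xg = w * x
            mean = xg.mean(dim=0, keepdim=True)
            std = xg.std(dim=0, keepdim=True)
            out = out + w * (xg - mean) / (std + self.eps)   # normalize within the group
        return x + out                                        # keep the original embedding

x = torch.randn(10, 16)
print(GroupNormalize(16, num_groups=4)(x).shape)  # torch.Size([10, 16])
```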
arXiv Detail & Related papers (2020-06-12T07:18:02Z)