Hypergraph Neural Networks through the Lens of Message Passing: A Common
Perspective to Homophily and Architecture Design
- URL: http://arxiv.org/abs/2310.07684v2
- Date: Mon, 5 Feb 2024 12:45:15 GMT
- Title: Hypergraph Neural Networks through the Lens of Message Passing: A Common
Perspective to Homophily and Architecture Design
- Authors: Lev Telyatnikov, Maria Sofia Bucarelli, Guillermo Bernardez, Olga
Zaghen, Simone Scardapane, Pietro Lio
- Abstract summary: We introduce a novel conceptualization of homophily in higher-order networks based on a Message Passing scheme.
We investigate some natural, yet mostly unexplored, strategies for processing higher-order structures within HNNs.
- Score: 7.410655263764799
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most current hypergraph learning methodologies and benchmarking
datasets are obtained by lifting procedures from their graph analogs, which
overshadows the specific characteristics of hypergraphs. This paper confronts
some pending questions in that regard: Q1: Can the concept of homophily play a
crucial role in Hypergraph Neural Networks (HNNs)? Q2: Is there room for
improving current HNN architectures by carefully addressing the specific
characteristics of higher-order networks? Q3: Do existing datasets provide a
meaningful benchmark for HNNs? To address these questions, we first introduce
a novel conceptualization of homophily in higher-order networks based on a
Message Passing (MP) scheme, unifying both the analytical examination and the
modeling of higher-order networks. Further, we investigate some natural, yet
mostly unexplored, strategies for processing higher-order structures within
HNNs, such as keeping hyperedge-dependent node representations or performing
node/hyperedge stochastic sampling. This leads us to the most general MP
formulation to date, MultiSet, as well as to an original architecture design,
MultiSetMixer. Finally, we conduct an extensive set of experiments that
contextualize our proposals and provide insights into our inquiries.
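To ground these ideas, below is a minimal sketch, not the paper's actual MultiSet or MultiSetMixer implementation, of (i) a simple pairwise homophily measure on hyperedges and (ii) one two-stage message-passing step that keeps hyperedge-dependent node representations. All names, the mean aggregators, the toy update rule, and the toy data are illustrative assumptions.

```python
import numpy as np

# Toy hypergraph: a list of hyperedges, each a tuple of node indices.
hyperedges = [(0, 1, 2), (1, 2, 3), (3, 4)]
labels = np.array([0, 0, 0, 1, 1])                 # node class labels
X = np.random.default_rng(0).normal(size=(5, 8))   # node features

def hyperedge_homophily(edge, labels):
    """Fraction of node pairs inside a hyperedge sharing a label.

    A simple pairwise instantiation of higher-order homophily; the
    paper's MP-based conceptualization is more general than this.
    """
    pairs = [(u, v) for i, u in enumerate(edge) for v in edge[i + 1:]]
    if not pairs:
        return 1.0
    return float(np.mean([labels[u] == labels[v] for u, v in pairs]))

def multiset_mp_step(X, hyperedges):
    """One two-stage MP step with hyperedge-dependent node states.

    Stage 1 (nodes -> hyperedge): each hyperedge aggregates its members.
    Stage 2 (hyperedge -> nodes): each node keeps a separate state per
    incident hyperedge instead of collapsing them into a single vector.
    """
    # Stage 1: hyperedge message = mean of its member node features.
    edge_msgs = {e: X[list(e)].mean(axis=0) for e in hyperedges}

    # Stage 2: hyperedge-dependent node representations x_{v,e}.
    node_states = {}
    for e, msg in edge_msgs.items():
        for v in e:
            node_states[(v, e)] = 0.5 * (X[v] + msg)   # toy update rule
    return node_states

for e in hyperedges:
    print(e, "homophily:", hyperedge_homophily(e, labels))
states = multiset_mp_step(X, hyperedges)
print("node 1 inside hyperedge (0, 1, 2):", states[(1, (0, 1, 2))][:3])
```

Keeping per-incidence states x_{v,e}, rather than averaging the incoming hyperedge messages back into a single node vector, is the ingredient that distinguishes a MultiSet-style formulation from standard two-stage schemes.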
Related papers
- Scalable Weibull Graph Attention Autoencoder for Modeling Document Networks [50.42343781348247]
We develop a graph Poisson factor analysis (GPFA), which provides analytic conditional posteriors to improve inference accuracy.
We also extend GPFA to a multi-stochastic-layer version named graph Poisson gamma belief network (GPGBN) to capture the hierarchical document relationships at multiple semantic levels.
Our models can extract high-quality hierarchical latent document representations and achieve promising performance on various graph analytic tasks.
arXiv Detail & Related papers (2024-10-13T02:22:14Z) - Hypergraph Transformer for Semi-Supervised Classification [50.92027313775934]
We propose a novel hypergraph learning framework, HyperGraph Transformer (HyperGT).
HyperGT uses a Transformer-based neural network architecture to effectively consider global correlations among all nodes and hyperedges.
It achieves comprehensive hypergraph representation learning by effectively incorporating global interactions while preserving local connectivity patterns.
arXiv Detail & Related papers (2023-12-18T17:50:52Z) - Self-Supervised Pretraining for Heterogeneous Hypergraph Neural Networks [9.987252149421982]
We present a novel self-supervised pretraining framework for heterogeneous HyperGNNs.
Our method effectively captures higher-order relations among entities in the data in a self-supervised manner.
Our experiments show that our proposed framework consistently outperforms state-of-the-art baselines in various downstream tasks.
arXiv Detail & Related papers (2023-11-19T16:34:56Z) - Topology-guided Hypergraph Transformer Network: Unveiling Structural Insights for Improved Representation [1.1606619391009658]
We propose a Topology-guided Hypergraph Transformer Network (THTN).
In this model, we first formulate a hypergraph from a graph while retaining its structural essence to learn higher-order relations within the graph.
We present a structure-aware self-attention mechanism that discovers the important nodes and hyperedges from both semantic and structural viewpoints.
arXiv Detail & Related papers (2023-10-14T20:08:54Z) - From Hypergraph Energy Functions to Hypergraph Neural Networks [94.88564151540459]
We present an expressive family of parameterized, hypergraph-regularized energy functions.
We then demonstrate how minimizers of these energies effectively serve as node embeddings.
We draw parallels between the proposed bilevel hypergraph optimization, and existing GNN architectures in common use.
arXiv Detail & Related papers (2023-06-16T04:40:59Z) - Tensorized Hypergraph Neural Networks [69.65385474777031]
We propose a novel adjacency-tensor-based Tensorized Hypergraph Neural Network (THNN).
THNN is a faithful hypergraph modeling framework built on high-order outer-product feature message passing.
Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the model's promising performance.
arXiv Detail & Related papers (2023-06-05T03:26:06Z) - Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z) - Supervised Hypergraph Reconstruction [3.69853388955692]
Many real-world systems involving high-order interactions are best encoded by hypergraphs.
However, their datasets often end up being published or studied only in the form of their graph projections.
We propose supervised hypergraph reconstruction.
Our approach outperforms all baselines by an order of magnitude in accuracy on hard datasets.
arXiv Detail & Related papers (2022-11-23T23:15:03Z) - Equivariant Hypergraph Diffusion Neural Operators [81.32770440890303]
Hypergraph neural networks (HNNs), which use neural networks to encode hypergraphs, provide a promising way to model higher-order relations in data; a minimal diffusion-style sketch follows this list.
This work proposes a new HNN architecture named ED-HNN, which provably represents any continuous equivariant hypergraph diffusion operators.
We evaluate ED-HNN for node classification on nine real-world hypergraph datasets.
arXiv Detail & Related papers (2022-07-14T06:17:00Z) - Residual Enhanced Multi-Hypergraph Neural Network [26.42547421121713]
The HyperGraph Neural Network (HGNN) is the de facto method for hypergraph representation learning.
We propose the Residual enhanced Multi-Hypergraph Neural Network, which can fuse multi-modal information from each hypergraph effectively.
arXiv Detail & Related papers (2021-05-02T14:53:32Z)
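To make the energy/diffusion viewpoint shared by the hypergraph-energy and ED-HNN entries above concrete, here is a minimal sketch of gradient descent on a toy hypergraph-regularized energy. It is not either paper's actual operator; the quadratic clique-expansion energy, the step size, and all names are illustrative assumptions.

```python
import numpy as np

def hypergraph_energy_descent(X0, hyperedges, lam=1.0, eta=0.05, steps=100):
    """Gradient descent on a toy hypergraph-regularized energy.

    E(X) = lam * ||X - X0||^2 + sum_e sum_{u<v in e} ||x_u - x_v||^2
    Minimizers smooth features along hyperedges while staying close to
    the inputs X0; each gradient step acts like one diffusion iteration.
    """
    X = X0.copy()
    for _ in range(steps):
        grad = 2.0 * lam * (X - X0)
        for e in hyperedges:
            idx = list(e)
            sub = X[idx]
            # d/dx_v of the pairwise term: 2 * (|e| * x_v - sum_{u in e} x_u)
            grad[idx] += 2.0 * (len(idx) * sub - sub.sum(axis=0, keepdims=True))
        X -= eta * grad
    return X

X0 = np.random.default_rng(1).normal(size=(5, 4))
hyperedges = [(0, 1, 2), (1, 2, 3), (3, 4)]
Z = hypergraph_energy_descent(X0, hyperedges)
print("embedding drift per node:", np.linalg.norm(Z - X0, axis=1).round(3))
```

ED-HNN's contribution is to parameterize such diffusion operators with equivariant networks so they can be learned; the sketch above only runs the fixed quadratic case.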
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.