Equivariant Hypergraph Neural Networks
- URL: http://arxiv.org/abs/2208.10428v1
- Date: Mon, 22 Aug 2022 16:28:38 GMT
- Title: Equivariant Hypergraph Neural Networks
- Authors: Jinwoo Kim, Saeyoon Oh, Sungjun Cho, Seunghoon Hong
- Abstract summary: Recent approaches for hypergraph learning extend graph neural networks based on message passing, which is simple yet fundamentally limited in expressive power and in modeling long-range dependencies.
We present Equivariant Hypergraph Neural Network (EHNN), the first attempt to realize maximally expressive equivariant layers for general hypergraph learning.
We demonstrate their capability in a range of hypergraph learning problems, including synthetic k-edge identification, semi-supervised classification, and visual keypoint matching, and report improved performances over strong message passing baselines.
- Score: 15.096429849049953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many problems in computer vision and machine learning can be cast as learning
on hypergraphs that represent higher-order relations. Recent approaches for
hypergraph learning extend graph neural networks based on message passing,
which is simple yet fundamentally limited in expressive power and in modeling
long-range dependencies. On the other hand, tensor-based equivariant neural
networks enjoy maximal expressiveness, but their application has been limited
in hypergraphs due to heavy computation and strict assumptions on fixed-order
hyperedges. We resolve these problems and present Equivariant Hypergraph Neural
Network (EHNN), the first attempt to realize maximally expressive equivariant
layers for general hypergraph learning. We also present two practical
realizations of our framework based on hypernetworks (EHNN-MLP) and
self-attention (EHNN-Transformer), which are easy to implement and
theoretically more expressive than most message passing approaches. We
demonstrate their capability in a range of hypergraph learning problems,
including synthetic k-edge identification, semi-supervised classification, and
visual keypoint matching, and report improved performances over strong message
passing baselines. Our implementation is available at
https://github.com/jw9730/ehnn.
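As a rough illustration of the hypernetwork-based realization (EHNN-MLP), the sketch below shares one layer across hyperedges of every order by generating the transform weights from the hyperedge order k. The sum-pooling aggregation, the toy order embedding, and all names here are our simplifications, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

def order_conditioned_weight(k, basis):
    # Hypernetwork: produce the layer weight matrix from the hyperedge
    # order k, so one parameter set serves hyperedges of any size.
    # Toy version: a linear combination of two shared basis matrices.
    coeff = np.array([1.0, float(k)])          # crude order embedding
    return np.tensordot(coeff, basis, axes=1)  # -> (d, d)

def ehnn_mlp_layer(X, hyperedges, basis):
    # X: (n, d) node features; hyperedges: iterable of node-index tuples.
    out = np.zeros_like(X)
    for e in hyperedges:
        k = len(e)
        W = order_conditioned_weight(k, basis)
        msg = X[list(e)].sum(axis=0) @ W   # nodes -> edge, transform
        out[list(e)] += msg / k            # edge -> nodes, normalized
    return out

d = 8
basis = 0.1 * rng.normal(size=(2, d, d))   # shared across all orders
X = rng.normal(size=(5, d))
hyperedges = [(0, 1, 2), (2, 3), (1, 3, 4)]
print(ehnn_mlp_layer(X, hyperedges, basis).shape)  # (5, 8)
```

The point of generating the weights from k is that a single parameter set covers hyperedges of arbitrary, even unseen, orders instead of fixing one weight tensor per order.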
Related papers
- Hypergraph-MLP: Learning on Hypergraphs without Message Passing [41.43504601820411]
Many hypergraph neural networks leverage message passing over hypergraph structures to enhance node representation learning.
We propose an alternative approach that integrates information about the hypergraph structure into the training supervision, without explicit message passing.
Specifically, we introduce Hypergraph-MLP, a novel learning framework for hypergraph-structured data.
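A minimal sketch of the structure-in-the-loss idea, in our own reading rather than the paper's exact objective: the forward model can be a plain MLP, while a regularizer pulls embeddings of nodes that share a hyperedge together.

```python
import numpy as np

def hyperedge_smoothness_loss(Z, hyperedges):
    # Push embeddings of nodes sharing a hyperedge toward their mean,
    # injecting structure through the loss instead of the forward pass.
    loss = 0.0
    for e in hyperedges:
        z = Z[list(e)]
        loss += ((z - z.mean(axis=0)) ** 2).sum()
    return loss / max(len(hyperedges), 1)

# Usage idea: total = task_loss + lam * hyperedge_smoothness_loss(Z, edges),
# where Z comes from a structure-agnostic MLP over node features.
```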
arXiv Detail & Related papers (2023-12-15T13:30:04Z)
- From Hypergraph Energy Functions to Hypergraph Neural Networks [94.88564151540459]
We present an expressive family of parameterized, hypergraph-regularized energy functions.
We then demonstrate how minimizers of these energies effectively serve as node embeddings.
We draw parallels between the proposed bilevel hypergraph optimization, and existing GNN architectures in common use.
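The sketch below illustrates the embeddings-as-energy-minimizers idea under a simple quadratic energy of our own choosing (fit the input features, stay smooth within hyperedges); the paper's energy family and solver are more general.

```python
import numpy as np

def embed_by_energy_minimization(X, hyperedges, lam=1.0, steps=50, lr=0.05):
    # Treat node embeddings as approximate minimizers of a quadratic
    # energy: stay close to the input features, stay smooth within
    # hyperedges. Plain gradient descent stands in for a real solver.
    Z = X.copy()
    for _ in range(steps):
        grad = 2.0 * (Z - X)
        for e in hyperedges:
            idx = list(e)
            # d/dz_j of sum_i ||z_i - mean||^2 is exactly 2 (z_j - mean),
            # because the deviations from the mean sum to zero.
            grad[idx] += 2.0 * lam * (Z[idx] - Z[idx].mean(axis=0))
        Z -= lr * grad
    return Z
```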
arXiv Detail & Related papers (2023-06-16T04:40:59Z)
- Tensorized Hypergraph Neural Networks [69.65385474777031]
We propose a novel adjacency-tensor-based Tensorized Hypergraph Neural Network (THNN).
THNN is a faithful hypergraph modeling framework built on high-order outer-product feature message passing.
Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the model's promising performance.
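To make "high-order outer-product message passing" concrete, here is a toy pass of our own on a 3-uniform hypergraph, where each node receives a multiplicative interaction of its two co-members; the elementwise product is a cheap stand-in for the full contraction of the adjacency tensor with outer products of features.

```python
import numpy as np

def thnn_style_pass(X, triples):
    # Message passing on a 3-uniform hypergraph: each node in a triple
    # receives a multiplicative interaction of the other two members.
    out = np.zeros_like(X)
    for i, j, k in triples:
        out[i] += X[j] * X[k]
        out[j] += X[i] * X[k]
        out[k] += X[i] * X[j]
    return out
```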
arXiv Detail & Related papers (2023-06-05T03:26:06Z)
- On the Expressiveness and Generalization of Hypergraph Neural Networks [77.65788763444877]
This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs).
Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes.
arXiv Detail & Related papers (2023-03-09T18:42:18Z)
- Equivariant Hypergraph Diffusion Neural Operators [81.32770440890303]
Hypergraph neural networks (HNNs), which use neural networks to encode hypergraphs, provide a promising way to model higher-order relations in data.
This work proposes a new HNN architecture named ED-HNN, which provably represents any continuous equivariant hypergraph diffusion operators.
We evaluate ED-HNN for node classification on nine real-world hypergraph datasets.
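For intuition about what a "hypergraph diffusion operator" does, here is one explicit, non-learned diffusion step of our own construction; ED-HNN learns neural operators expressive enough to represent such diffusions and more general equivariant ones.

```python
import numpy as np

def hypergraph_diffusion_step(X, hyperedges, step=0.5):
    # One explicit diffusion step: pull every node toward the means of
    # the hyperedges containing it, averaged over its memberships.
    drift = np.zeros_like(X)
    deg = np.zeros(len(X))
    for e in hyperedges:
        idx = list(e)
        drift[idx] += X[idx].mean(axis=0) - X[idx]
        deg[idx] += 1
    return X + step * drift / np.maximum(deg, 1)[:, None]
```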
arXiv Detail & Related papers (2022-07-14T06:17:00Z)
- Hypergraph Convolutional Networks via Equivalency between Hypergraphs and Undirected Graphs [59.71134113268709]
We present General Hypergraph Spectral Convolution (GHSC), a general learning framework that can handle both edge-dependent vertex weight (EDVW) and edge-independent vertex weight (EIVW) hypergraphs.
Experiments across various domains, including social network analysis, visual object classification, and protein learning, demonstrate that the proposed framework achieves state-of-the-art performance.
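For reference, the sketch below implements the classical Zhou-style normalized hypergraph convolution that spectral frameworks such as GHSC generalize; it is not GHSC itself, and the EDVW handling is omitted.

```python
import numpy as np

def hypergraph_spectral_conv(X, H, edge_w, W):
    # Zhou-style normalized hypergraph convolution:
    #   X' = Dv^{-1/2} H We De^{-1} H^T Dv^{-1/2} X W
    # H: (n, m) incidence matrix, edge_w: (m,) hyperedge weights.
    Dv = H @ edge_w                              # vertex degrees
    De = H.sum(axis=0)                           # hyperedge degrees
    dv = 1.0 / np.sqrt(np.maximum(Dv, 1e-12))
    A = (dv[:, None] * H) * (edge_w / np.maximum(De, 1e-12))
    A = A @ (H.T * dv[None, :])
    return A @ X @ W
```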
arXiv Detail & Related papers (2022-03-31T10:46:47Z)
- HyperSAGE: Generalizing Inductive Representation Learning on Hypergraphs [24.737560790401314]
We present HyperSAGE, a novel hypergraph learning framework that uses a two-level neural message passing strategy to accurately and efficiently propagate information through hypergraphs.
We show that HyperSAGE outperforms state-of-the-art hypergraph learning methods on representative benchmark datasets.
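A stripped-down sketch of two-level message passing, with our simplification of plain means in place of HyperSAGE's generalized power-mean aggregators: features are first pooled within each hyperedge, then each node pools the messages of its incident hyperedges.

```python
import numpy as np

def hypersage_style_pass(X, hyperedges):
    # Level 1: pool member-node features within each hyperedge.
    # Level 2: each node pools the messages of its incident hyperedges.
    agg = np.zeros_like(X)
    cnt = np.zeros(len(X))
    for e in hyperedges:
        idx = list(e)
        agg[idx] += X[idx].mean(axis=0)      # intra-edge aggregation
        cnt[idx] += 1
    agg /= np.maximum(cnt, 1)[:, None]       # inter-edge aggregation
    return np.concatenate([X, agg], axis=1)  # SAGE-style concat
```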
arXiv Detail & Related papers (2020-10-09T13:28:06Z)
- Molecule Property Prediction and Classification with Graph Hypernetworks [113.38181979662288]
We show that replacing the underlying networks with hypernetworks leads to a boost in performance.
A major difficulty in the application of hypernetworks is their lack of stability.
A recent work has tackled the training instability of hypernetworks in the context of error correcting codes.
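To pin down the term: a hypernetwork generates the weights of a primary network from a conditioning input instead of storing them as free parameters. A minimal sketch, with names and shapes of our own choosing:

```python
import numpy as np

def hypernetwork_linear(z, X, G, b):
    # The primary linear layer's weights are *generated* from a
    # conditioning vector z by a (linear + tanh) hypernetwork G, rather
    # than stored as free parameters. The tanh merely keeps generated
    # weights bounded; it is not the stabilization scheme of the cited work.
    W = np.tanh(G @ z).reshape(X.shape[1], -1)
    return X @ W + b

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 6))          # primary-network input
z = rng.normal(size=(3,))            # conditioning vector
G = rng.normal(size=(6 * 5, 3))      # hypernetwork weights
b = np.zeros(5)
print(hypernetwork_linear(z, X, G, b).shape)  # (4, 5)
```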
arXiv Detail & Related papers (2020-02-01T16:44:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.