Hypergraph Neural Sheaf Diffusion: A Symmetric Simplicial Set Framework for Higher-Order Learning
- URL: http://arxiv.org/abs/2505.05702v3
- Date: Wed, 30 Jul 2025 18:24:32 GMT
- Title: Hypergraph Neural Sheaf Diffusion: A Symmetric Simplicial Set Framework for Higher-Order Learning
- Authors: Seongjin Choi, Gahee Kim, Yong-Geun Oh
- Abstract summary: We introduce Hypergraph Neural Sheaf Diffusion (HNSD), the first principled extension of neural sheaf diffusion to hypergraphs. HNSD operates via a normalized degree zero sheaf Laplacian over a symmetric simplicial lifting, resolving the orientation ambiguity and adjacency sparsity inherent to hypergraph learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The absence of intrinsic adjacency relations and orientation systems in hypergraphs creates fundamental challenges for constructing sheaf Laplacians of arbitrary degrees. We resolve these limitations through symmetric simplicial sets derived directly from hypergraphs, called the symmetric simplicial lifting, which encodes all possible oriented subrelations within each hyperedge as ordered tuples. This construction canonically defines adjacency via facet maps while inherently preserving hyperedge provenance. We establish that the normalized degree zero sheaf Laplacian on our symmetric simplicial lifting reduces exactly to the traditional normalized sheaf Laplacian on graphs, validating its mathematical consistency with prior graph-based sheaf theory. Furthermore, the induced structure preserves all structural information from the original hypergraph, ensuring that every multi-way relational detail is faithfully retained. Leveraging this framework, we introduce Hypergraph Neural Sheaf Diffusion (HNSD), the first principled extension of neural sheaf diffusion to hypergraphs. HNSD operates via the normalized degree zero sheaf Laplacian over the symmetric simplicial lifting, resolving the orientation ambiguity and adjacency sparsity inherent to hypergraph learning. Experimental evaluations demonstrate HNSD's competitive performance across established benchmarks.
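The operator is easiest to see in the ordinary graph case, to which the paper proves its degree zero construction reduces. Below is a minimal NumPy sketch of a graph sheaf Laplacian assembled from restriction maps, followed by explicit-Euler diffusion steps; the random restriction maps, the diagonal-only normalization, and the step size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sheaf_laplacian(edges, R, n_nodes, d):
    """Degree zero sheaf Laplacian for a graph sheaf with d-dimensional stalks.
    R[(u, v)] is the restriction map from the stalk at node u to the stalk on
    edge {u, v}; here the maps are arbitrary placeholders (e.g., learned)."""
    L = np.zeros((n_nodes * d, n_nodes * d))
    for (u, v) in edges:
        Ru, Rv = R[(u, v)], R[(v, u)]
        # Diagonal blocks accumulate R^T R; off-diagonal blocks are -R_u^T R_v.
        L[u*d:(u+1)*d, u*d:(u+1)*d] += Ru.T @ Ru
        L[v*d:(v+1)*d, v*d:(v+1)*d] += Rv.T @ Rv
        L[u*d:(u+1)*d, v*d:(v+1)*d] -= Ru.T @ Rv
        L[v*d:(v+1)*d, u*d:(u+1)*d] -= Rv.T @ Ru
    return L

def sheaf_diffusion(X, L, steps=20, alpha=0.05):
    """Explicit Euler steps of dX/dt = -L_norm X with a symmetric
    degree normalization (diagonal-only, for simplicity)."""
    d_inv_sqrt = 1.0 / np.sqrt(np.diag(L) + 1e-8)
    L_norm = d_inv_sqrt[:, None] * L * d_inv_sqrt[None, :]
    for _ in range(steps):
        X = X - alpha * (L_norm @ X)
    return X

# Toy usage: triangle graph, 2-dimensional stalks, random restriction maps.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (0, 2)]
R = {(u, v): rng.standard_normal((2, 2)) for e in edges for (u, v) in (e, e[::-1])}
L = sheaf_laplacian(edges, R, n_nodes=3, d=2)
X = rng.standard_normal((3 * 2, 4))          # stacked stalk features, 4 channels
X_out = sheaf_diffusion(X, L)
```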
Related papers
- Lower Ricci Curvature for Hypergraphs [3.9965784551765697]
We introduce hypergraph lower Ricci curvature (HLRC), a novel curvature metric defined in closed form that achieves a principled balance between interpretability and efficiency.
HLRC consistently reveals meaningful higher-order organization, distinguishing intra-community hyperedges, uncovering latent semantic labels, tracking temporal dynamics, and supporting robust clustering of hypergraphs based on global structure.
arXiv Detail & Related papers (2025-06-04T13:32:09Z)
- Scalable Hypergraph Structure Learning with Diverse Smoothness Priors [7.559720049837459]
We propose a novel hypergraph learning method that recovers a hypergraph from time-series signals based on a smoothness prior.
Our approach, designated Hypergraph Structure Learning with Smoothness (HSLS), addresses key limitations of prior works and scales to increasingly complex network structures (a toy smoothness score is sketched after this entry).
arXiv Detail & Related papers (2025-04-04T16:47:30Z)
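HSLS itself is not specified in the summary above; as a rough illustration of what a smoothness prior buys, the sketch below scores candidate hyperedges by the total pairwise variation of their nodes' time series and greedily keeps the smoothest ones. The scoring rule, candidate set, and greedy selection are all assumptions.

```python
import numpy as np
from itertools import combinations

def hyperedge_smoothness(X, edge):
    """Total pairwise signal variation inside a candidate hyperedge:
    smaller means the time series of its nodes move together."""
    return sum(np.sum((X[u] - X[v]) ** 2) for u, v in combinations(edge, 2))

def greedy_hypergraph(X, candidates, k):
    """Keep the k candidate hyperedges whose node signals are smoothest."""
    return sorted(candidates, key=lambda e: hyperedge_smoothness(X, e))[:k]

# Toy usage: 4 nodes observed over 100 time steps; nodes 0-2 co-vary.
rng = np.random.default_rng(1)
base = rng.standard_normal(100)
X = np.stack([base + 0.1 * rng.standard_normal(100) for _ in range(3)]
             + [rng.standard_normal(100)])
edges = greedy_hypergraph(X, [(0, 1, 2), (0, 3), (1, 3), (2, 3)], k=1)
# edges -> [(0, 1, 2)], the co-varying triple
```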
- A Signed Graph Approach to Understanding and Mitigating Oversmoothing in GNNs [54.62268052283014]
We present a unified theoretical perspective based on the framework of signed graphs.
We show that many existing strategies implicitly introduce negative edges that alter message passing to resist oversmoothing.
We propose Structural Balanced Propagation (SBP), a plug-and-play method that assigns signed edges based on either labels or feature similarity (a minimal sign-assignment rule is sketched after this entry).
arXiv Detail & Related papers (2025-02-17T03:25:36Z)
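The summary does not give SBP's exact rule, so the following is a minimal sign-assignment sketch consistent with the stated idea: a shared label yields a positive edge, differing labels a negative one, with thresholded cosine similarity as a label-free fallback. The threshold and the propagation update in the closing comment are assumptions.

```python
import numpy as np

def assign_signs(edges, labels=None, feats=None, tau=0.0):
    """Label rule: +1 if endpoints share a label, else -1.
    Fallback: sign of cosine similarity relative to threshold tau."""
    signs = {}
    for (u, v) in edges:
        if labels is not None:
            signs[(u, v)] = 1.0 if labels[u] == labels[v] else -1.0
        else:
            cos = feats[u] @ feats[v] / (
                np.linalg.norm(feats[u]) * np.linalg.norm(feats[v]) + 1e-8)
            signs[(u, v)] = 1.0 if cos > tau else -1.0
    return signs

# Negative edges then enter message passing with a repulsive contribution,
# e.g. x_u <- x_u + alpha * sum_v signs[(u, v)] * x_v.
signs = assign_signs([(0, 1), (1, 2)], labels={0: "a", 1: "a", 2: "b"})
# signs -> {(0, 1): 1.0, (1, 2): -1.0}
```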
- Revealing Decurve Flows for Generalized Graph Propagation [108.80758541147418]
This study addresses the limitations of the traditional analysis of message passing, central to graph learning, by defining generalized propagation with directed and weighted graphs.
We include a preliminary exploration of learned propagation patterns in datasets, a first in the field.
arXiv Detail & Related papers (2024-02-13T14:13:17Z)
- From Hypergraph Energy Functions to Hypergraph Neural Networks [94.88564151540459]
We present an expressive family of parameterized, hypergraph-regularized energy functions.
We then demonstrate how minimizers of these energies effectively serve as node embeddings (a toy energy minimization is sketched after this entry).
We draw parallels between the proposed bilevel hypergraph optimization and existing GNN architectures in common use.
arXiv Detail & Related papers (2023-06-16T04:40:59Z)
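The paper's parameterized energy family is not reproduced in the summary; as a generic instance of the idea, the sketch below minimizes a quadratic hypergraph-regularized energy E(Y) = ||Y - f(X)||^2 + lam * tr(Y^T L Y) by gradient descent, with a clique-expansion Laplacian standing in for the paper's regularizer. The minimizer Y then plays the role of node embeddings.

```python
import numpy as np
from itertools import combinations

def clique_laplacian(hyperedges, n):
    """Laplacian of the clique expansion: each hyperedge contributes
    unit-weight edges between all of its node pairs."""
    L = np.zeros((n, n))
    for e in hyperedges:
        for u, v in combinations(e, 2):
            L[u, u] += 1; L[v, v] += 1
            L[u, v] -= 1; L[v, u] -= 1
    return L

def embed(FX, L, lam=1.0, lr=0.05, steps=200):
    """Gradient descent on E(Y) = ||Y - FX||^2 + lam * tr(Y^T L Y)."""
    Y = FX.copy()
    for _ in range(steps):
        Y -= lr * (2 * (Y - FX) + 2 * lam * (L @ Y))
    return Y

rng = np.random.default_rng(2)
L = clique_laplacian([(0, 1, 2), (2, 3)], n=4)
Y = embed(rng.standard_normal((4, 8)), L)   # smoothed node embeddings
```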
- Tensorized Hypergraph Neural Networks [69.65385474777031]
We propose a novel adjacency-tensor-based Tensorized Hypergraph Neural Network (THNN).
THNN is a faithful hypergraph modeling framework built on high-order outer-product feature message passing (a toy layer is sketched after this entry).
Results from experiments on two widely used hypergraph datasets for 3-D visual object classification show the model's promising performance.
arXiv Detail & Related papers (2023-06-05T03:26:06Z)
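A guess at what high-order outer-product message passing can look like on a 3-uniform hypergraph: the message to node i aggregates outer products x_j ⊗ x_k over hyperedges {i, j, k} stored in an adjacency tensor. The tensor encoding, nonlinearity, and weight shapes are assumptions, not THNN's actual layer.

```python
import numpy as np

def thnn_style_layer(A, X, W):
    """Toy outer-product message passing on a 3-uniform hypergraph.
    A: (n, n, n) adjacency tensor with A[i, j, k] = 1 iff {i, j, k} is a
    hyperedge; X: (n, d) features; W: (d * d, d_out) mixing weights."""
    # M[i] = sum over j, k of A[i, j, k] * (x_j outer x_k)  -> shape (n, d, d)
    M = np.einsum('ijk,jd,ke->ide', A, X, X)
    return np.tanh(M.reshape(X.shape[0], -1) @ W)

rng = np.random.default_rng(3)
n, d = 5, 4
A = np.zeros((n, n, n))
for i, j, k in [(0, 1, 2), (1, 3, 4)]:
    # Symmetrize each hyperedge over all orderings of its three nodes.
    for p in {(i, j, k), (i, k, j), (j, i, k), (j, k, i), (k, i, j), (k, j, i)}:
        A[p] = 1.0
H = thnn_style_layer(A, rng.standard_normal((n, d)), rng.standard_normal((d * d, 8)))
```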
- Self-organization Preserved Graph Structure Learning with Principle of Relevant Information [72.83485174169027]
PRI-GSL is a graph structure learning framework for identifying self-organization and revealing hidden structure.
PRI-GSL learns a structure that contains the most relevant yet least redundant information, quantified by von Neumann entropy and quantum Jensen-Shannon divergence (a minimal entropy computation is sketched after this entry).
arXiv Detail & Related papers (2022-12-30T16:02:02Z)
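The von Neumann entropy of a graph is conventionally computed from the trace-normalized Laplacian, and the quantum Jensen-Shannon divergence follows from the same entropy applied to an averaged density matrix. The sketch below shows these standard definitions; PRI-GSL's exact estimators may differ.

```python
import numpy as np

def entropy(rho):
    """S(rho) = -tr(rho log rho), via the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                      # convention: 0 * log 0 = 0
    return float(-(w * np.log(w)).sum())

def density(L):
    """Trace-normalized Laplacian, the usual graph 'density matrix'."""
    return L / np.trace(L)

def qjsd(L1, L2):
    """Quantum Jensen-Shannon divergence between two graphs."""
    r1, r2 = density(L1), density(L2)
    return entropy((r1 + r2) / 2) - 0.5 * (entropy(r1) + entropy(r2))

# Toy usage: path graph P3 vs. triangle K3 (Laplacians written directly).
P3 = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
K3 = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])
print(entropy(density(P3)), qjsd(P3, K3))
```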
- Hypergraph Convolutional Networks via Equivalency between Hypergraphs and Undirected Graphs [59.71134113268709]
We present General Hypergraph Spectral Convolution (GHSC), a general learning framework that can handle both EDVW and EIVW hypergraphs.
Experiments from various domains, including social network analysis, visual object classification, and protein learning, demonstrate that the proposed framework achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-03-31T10:46:47Z)
- Learnable Hypergraph Laplacian for Hypergraph Learning [34.28748027233654]
HyperGraph Convolutional Neural Networks (HGCNNs) have demonstrated their potential in modeling high-order relations preserved in graph-structured data.
We propose the first learning-based method tailored for constructing an adaptive hypergraph structure, termed HypERgrAph Laplacian aDaptor (HERALD).
HERALD adaptively optimizes the adjacency relationship between hypernodes and hyperedges in an end-to-end manner, so that a task-aware hypergraph is learned (a toy soft-incidence module is sketched after this entry).
arXiv Detail & Related papers (2021-06-12T02:07:07Z)
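A toy stand-in for an adaptive incidence structure: node and hyperedge embeddings produce a differentiable soft incidence matrix that a task loss can train end-to-end. The parameterization is hypothetical, not HERALD's actual module.

```python
import numpy as np

def soft_incidence(node_emb, edge_emb):
    """Differentiable 'soft' incidence H in (0, 1)^{n x m} from node and
    hyperedge embeddings; both would be parameters in a real model."""
    logits = node_emb @ edge_emb.T
    return 1.0 / (1.0 + np.exp(-logits))     # elementwise sigmoid

rng = np.random.default_rng(4)
# n = 4 hypernodes, m = 2 hyperedges, 8-dimensional embeddings.
H = soft_incidence(rng.standard_normal((4, 8)), rng.standard_normal((2, 8)))
# A hypergraph Laplacian built from H then drives convolution, and gradients
# flowing back into the embeddings yield a task-aware hypergraph.
```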
- Learning over Families of Sets -- Hypergraph Representation Learning for Higher Order Tasks [12.28143554382742]
We develop a hypergraph neural network to learn provably expressive representations of variable-sized hyperedges (a permutation-invariant set encoder is sketched after this entry).
We evaluate performance on multiple real-world hypergraph datasets and demonstrate consistent, significant improvements in accuracy over state-of-the-art models.
arXiv Detail & Related papers (2021-01-19T18:37:50Z)
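The architecture is not specified in the summary; a DeepSets-style sum aggregation is the standard way to obtain permutation-invariant representations of variable-sized hyperedges, sketched here with hypothetical weights.

```python
import numpy as np

def encode_hyperedge(X, edge, W_phi, W_rho):
    """Permutation-invariant encoder for a variable-sized hyperedge:
    rho(sum_v phi(x_v)), a DeepSets-style aggregation."""
    phi = np.maximum(X[list(edge)] @ W_phi, 0.0)       # per-node map + ReLU
    return np.maximum(phi.sum(axis=0) @ W_rho, 0.0)    # aggregate, then map

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 4))                        # 6 nodes, 4-dim features
W_phi, W_rho = rng.standard_normal((4, 16)), rng.standard_normal((16, 8))
z2 = encode_hyperedge(X, (0, 3), W_phi, W_rho)         # size-2 hyperedge
z4 = encode_hyperedge(X, (1, 2, 4, 5), W_phi, W_rho)   # size-4 hyperedge
```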
- Semi-supervised Hypergraph Node Classification on Hypergraph Line Expansion [7.933465724913661]
We propose a new hypergraph formulation named the line expansion (LE) for hypergraph learning (the construction is sketched after this entry).
The proposed line expansion makes existing graph learning algorithms compatible with the higher-order structure.
We evaluate the proposed line expansion on five hypergraph datasets; the results show that our method beats SOTA baselines by a significant margin.
arXiv Detail & Related papers (2020-05-11T03:02:21Z)
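A minimal sketch of the line expansion as usually described: every incident (vertex, hyperedge) pair becomes an LE-vertex, and two LE-vertices are adjacent when they share either the vertex or the hyperedge. Edge weighting and feature projection are omitted here.

```python
from itertools import combinations

def line_expansion(hyperedges):
    """Build the line expansion (LE) graph of a hypergraph: LE-vertices are
    incident (vertex, hyperedge-index) pairs; LE-edges connect pairs that
    share the vertex or the hyperedge."""
    le_vertices = [(v, i) for i, e in enumerate(hyperedges) for v in e]
    index = {p: k for k, p in enumerate(le_vertices)}
    le_edges = set()
    for p, q in combinations(le_vertices, 2):
        if p[0] == q[0] or p[1] == q[1]:   # shared vertex or shared hyperedge
            le_edges.add((index[p], index[q]))
    return le_vertices, sorted(le_edges)

# Toy hypergraph: two overlapping hyperedges {0, 1, 2} and {2, 3}.
verts, edges = line_expansion([{0, 1, 2}, {2, 3}])
# 5 LE-vertices; node 2's two copies are linked through the shared vertex.
```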