On the Expressiveness and Generalization of Hypergraph Neural Networks
- URL: http://arxiv.org/abs/2303.05490v1
- Date: Thu, 9 Mar 2023 18:42:18 GMT
- Title: On the Expressiveness and Generalization of Hypergraph Neural Networks
- Authors: Zhezheng Luo, Jiayuan Mao, Joshua B. Tenenbaum, Leslie Pack Kaelbling
- Abstract summary: This extended abstract describes a framework for analyzing the expressiveness, learning, and (structural) generalization of hypergraph neural networks (HyperGNNs).
Specifically, we focus on how HyperGNNs can learn from finite datasets and generalize structurally to graph reasoning problems of arbitrary input sizes.
- Score: 77.65788763444877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This extended abstract describes a framework for analyzing the
expressiveness, learning, and (structural) generalization of hypergraph neural
networks (HyperGNNs). Specifically, we focus on how HyperGNNs can learn from
finite datasets and generalize structurally to graph reasoning problems of
arbitrary input sizes. Our first contribution is a fine-grained analysis of the
expressiveness of HyperGNNs, that is, the set of functions that they can
realize. Our result is a hierarchy of problems they can solve, defined in terms
of hyperparameters such as depth and edge arity. Next, we analyze
the learning properties of these neural networks, especially focusing on how
they can be trained on a finite set of small graphs and generalize to larger
graphs, which we term structural generalization. Our theoretical results are
further supported by empirical results.
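As a concrete reference point, below is a minimal sketch of one HyperGNN message-passing round in the common two-stage node-edge-node formulation. The function name, the max-pooling aggregator, and the NumPy encoding are illustrative assumptions, not the authors' implementation; the paper's hierarchy varies exactly the two knobs visible here, the edge arity (tuple length) and the number of rounds (depth).

```python
import numpy as np

def hypergnn_layer(X, hyperedges, W_e, W_v):
    """One round of two-stage message passing on a hypergraph (a sketch).

    X          : (num_nodes, d) node features.
    hyperedges : list of node-index tuples; the tuple length is the edge arity.
    W_e, W_v   : (d, d) weight matrices for the edge and node updates.
    """
    # Stage 1: each hyperedge max-pools the features of its member nodes.
    E = np.tanh(np.stack([np.maximum.reduce([X[v] for v in e])
                          for e in hyperedges]) @ W_e)
    # Stage 2: each node max-pools the features of its incident hyperedges.
    # (Nodes with no incident hyperedge keep a zero message.)
    out = np.zeros_like(X)
    for j, e in enumerate(hyperedges):
        for v in set(e):
            out[v] = np.maximum(out[v], E[j])
    return np.tanh(out @ W_v)
```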
Related papers
- Towards Bridging Generalization and Expressivity of Graph Neural Networks [11.560730203511111]
We study the relationship between expressivity and generalization in graph neural networks (GNNs).
We introduce a novel framework that connects GNN generalization to the variance in graph structures they can capture.
We uncover a trade-off between intra-class concentration and inter-class separation, both of which are crucial for effective generalization.
arXiv Detail & Related papers (2024-10-14T00:31:16Z)
- Foundations and Frontiers of Graph Learning Theory [81.39078977407719]
Recent advancements in graph learning have revolutionized the way to understand and analyze data with complex structures.
Graph Neural Networks (GNNs), i.e. neural network architectures designed for learning graph representations, have become a popular paradigm.
This article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models.
arXiv Detail & Related papers (2024-07-03T14:07:41Z)
- What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding [0.0]
We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power.
Recently, several works showed that, on very general random graph models, GNNs converge to certain functions as the number of nodes grows.
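One widely used positional encoding, sketched below as background (the paper analyzes such encodings abstractly and does not prescribe this particular construction), appends low-frequency eigenvectors of the normalized graph Laplacian to the node features:

```python
import numpy as np

def laplacian_positional_encoding(A, k=4):
    """First k non-trivial eigenvectors of the normalized Laplacian (sketch).

    A: (n, n) symmetric adjacency matrix of an undirected graph.
    Returns an (n, k) matrix to concatenate onto the node features.
    """
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(L)   # columns sorted by ascending eigenvalue
    return eigvecs[:, 1:k + 1]       # skip the trivial constant eigenvector
```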
arXiv Detail & Related papers (2023-05-24T07:09:53Z)
- PGX: A Multi-level GNN Explanation Framework Based on Separate Knowledge Distillation Processes [0.2005299372367689]
We propose a multi-level GNN explanation framework based on the observation that a GNN is a multimodal learning process over multiple components of graph data.
The complexity of the original problem is reduced by breaking it into multiple sub-parts represented as a hierarchical structure.
We also aim for personalized explanations as the framework can generate different results based on user preferences.
arXiv Detail & Related papers (2022-08-05T10:14:48Z)
- Hyperbolic Graph Neural Networks: A Review of Methods and Applications [55.5502008501764]
Graph neural networks generalize conventional neural networks to graph-structured data.
The performance of Euclidean models in graph-related learning remains limited by the representational capacity of Euclidean geometry.
Recently, hyperbolic space has gained increasing popularity in processing graph data with tree-like structure and power-law distribution.
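To make the geometric intuition concrete: in the Poincaré ball model, distances blow up toward the boundary, which matches the exponential volume growth of trees. A minimal sketch of the standard Poincaré distance (a textbook formula, not code from the reviewed methods):

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between points u, v inside the unit Poincare ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / denom)
```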
arXiv Detail & Related papers (2022-02-28T15:08:48Z)
- Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks [13.518582483147325]
We provide a rigorous analysis of the performance of neural networks in the context of transductive inference.
We show that transductive Rademacher complexity can explain the generalisation properties of graph convolutional networks for block models.
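For background, the transductive Rademacher complexity in question is usually stated as follows (this is the standard definition of El-Yaniv and Pechyony, quoted from the learning-theory literature rather than from the paper itself):

```latex
\mathfrak{R}_{m+u}(\mathcal{V})
  = \left(\frac{1}{m} + \frac{1}{u}\right)
    \mathbb{E}_{\boldsymbol{\sigma}}
    \left[ \sup_{\mathbf{v} \in \mathcal{V}} \boldsymbol{\sigma}^{\top}\mathbf{v} \right],
\qquad
\sigma_i =
  \begin{cases}
    +1 & \text{with probability } p,\\
    -1 & \text{with probability } p,\\
    \phantom{+}0 & \text{with probability } 1 - 2p,
  \end{cases}
```

where m and u count the labeled and unlabeled examples and V is the set of output vectors the model can realize on all m+u points.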
arXiv Detail & Related papers (2021-12-07T20:06:23Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to addressing the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, performance tends to deteriorate as more layers are stacked; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
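A minimal sketch of that adaptive mechanism (transform once, propagate over increasingly large receptive fields, then learn per-hop retainment scores); the gating vector and the omitted normalization details are simplifying assumptions, not the authors' code:

```python
import numpy as np

def dagnn_forward(A_hat, X, mlp, gate, K=10):
    """Adaptive aggregation over receptive fields of growing size (sketch).

    A_hat : (n, n) normalized adjacency matrix with self-loops.
    X     : (n, d_in) node features.
    mlp   : callable (n, d_in) -> (n, d) feature transformation.
    gate  : (d,) learned vector scoring each hop's representation.
    """
    Z = mlp(X)                     # transform once, then only propagate
    hops = [Z]
    for _ in range(K):
        hops.append(A_hat @ hops[-1])        # k-hop smoothed representations
    H = np.stack(hops, axis=1)               # (n, K+1, d)
    s = 1.0 / (1.0 + np.exp(-(H @ gate)))    # (n, K+1) retainment scores
    return (s[..., None] * H).sum(axis=1)    # adaptively weighted combination
```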
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
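Since the Weisfeiler-Leman test is the yardstick for the expressiveness claim above, here is a compact sketch of classical 1-WL color refinement (the palette-relabeling step is one standard implementation choice). Two graphs whose color histograms differ are certainly non-isomorphic; matching histograms mean only "possibly isomorphic".

```python
from collections import Counter

def wl_refinement(adj, rounds=3):
    """1-Weisfeiler-Leman color refinement on an adjacency-list dict."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        # New color = own color plus the multiset of neighbor colors.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return Counter(colors.values())  # histogram to compare across graphs
```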
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.