Sheaf Neural Networks with Connection Laplacians
- URL: http://arxiv.org/abs/2206.08702v1
- Date: Fri, 17 Jun 2022 11:39:52 GMT
- Title: Sheaf Neural Networks with Connection Laplacians
- Authors: Federico Barbero, Cristian Bodnar, Haitz Sáez de Ocáriz Borde,
Michael Bronstein, Petar Veličković, Pietro Liò
- Abstract summary: Sheaf Neural Network (SNN) is a type of Graph Neural Network (GNN) that operates on a sheaf, an object that equips a graph with vector spaces over its nodes and edges and linear maps between these spaces.
Previous works proposed two diametrically opposed approaches: manually constructing the sheaf based on domain knowledge and learning the sheaf end-to-end using gradient-based methods.
In this work, we propose a novel way of computing sheaves drawing inspiration from Riemannian geometry.
We show that this approach achieves promising results with less computational overhead when compared to previous SNN models.
- Score: 3.3414557160889076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Sheaf Neural Network (SNN) is a type of Graph Neural Network (GNN) that
operates on a sheaf, an object that equips a graph with vector spaces over its
nodes and edges and linear maps between these spaces. SNNs have been shown to
have useful theoretical properties that help tackle issues arising from
heterophily and over-smoothing. One complication intrinsic to these models is
finding a good sheaf for the task to be solved. Previous works proposed two
diametrically opposed approaches: manually constructing the sheaf based on
domain knowledge and learning the sheaf end-to-end using gradient-based
methods. However, domain knowledge is often insufficient, while learning a
sheaf could lead to overfitting and significant computational overhead. In this
work, we propose a novel way of computing sheaves drawing inspiration from
Riemannian geometry: we leverage the manifold assumption to compute
manifold-and-graph-aware orthogonal maps, which optimally align the tangent
spaces of neighbouring data points. We show that this approach achieves
promising results with less computational overhead when compared to previous
SNN models. Overall, this work provides an interesting connection between
algebraic topology and differential geometry, and we hope that it will spark
future research in this direction.
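The tangent-space alignment described in the abstract can be sketched in a few lines of NumPy: local tangent bases are estimated by PCA over each point's neighbourhood, and the orthogonal restriction map between two neighbouring points solves an orthogonal Procrustes problem via SVD. The function names, neighbourhood choices, and dimensions below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tangent_basis(nbrs, d):
    """Estimate a local tangent space from a neighbourhood of points:
    the top-d principal directions of the centred neighbourhood
    (a standard manifold-learning estimate)."""
    centered = nbrs - nbrs.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:d].T  # shape (ambient_dim, d)

def procrustes_align(Ti, Tj):
    """Orthogonal d x d map O minimising ||Ti @ O - Tj||_F:
    the optimal alignment of two tangent bases (Procrustes)."""
    U, _, Vt = np.linalg.svd(Ti.T @ Tj)
    return U @ Vt

# Toy example: 6 points in R^3, tangent dimension d = 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
Ti = tangent_basis(X[[0, 1, 2, 3]], d=2)   # neighbourhood of point 0
Tj = tangent_basis(X[[1, 0, 4, 5]], d=2)   # neighbourhood of point 1
O_ij = procrustes_align(Ti, Tj)            # edge restriction map
assert np.allclose(O_ij @ O_ij.T, np.eye(2), atol=1e-8)  # orthogonal
```

Stacking such maps O_ij as the off-diagonal blocks of a block matrix yields a connection Laplacian, which can stand in for a learned sheaf Laplacian in an SNN layer.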
Related papers
- Joint Diffusion Processes as an Inductive Bias in Sheaf Neural Networks [14.224234978509026]
Sheaf Neural Networks (SNNs) naturally extend Graph Neural Networks (GNNs).
We propose two novel sheaf learning approaches that provide a more intuitive understanding of the involved structure maps.
In our evaluation, we show the limitations of the real-world benchmarks used so far on SNNs.
arXiv Detail & Related papers (2024-07-30T07:17:46Z)
- Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
To sum up, we focus on the manifold model as the limit of large graphs, construct MNNs, and recover graph neural networks by discretizing the MNNs.
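The graph-to-manifold correspondence summarised above can be illustrated with a minimal sketch: on a sampled manifold, a manifold convolutional filter reduces to a polynomial in the graph Laplacian. The filter taps and the 4-cycle graph below are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical discretization: a manifold convolutional filter h(L)
# becomes a polynomial in the graph Laplacian when the manifold is
# sampled at finitely many points (the "graphs and back" view).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # 4-cycle adjacency
L = np.diag(A.sum(axis=1)) - A               # combinatorial Laplacian
h = [1.0, -0.5, 0.1]                         # illustrative filter taps

def manifold_filter(L, h, x):
    """Apply h(L) x = sum_k h_k L^k x, the graph discretization of a
    manifold convolutional filter."""
    out = np.zeros_like(x)
    Lkx = x.copy()
    for hk in h:
        out += hk * Lkx
        Lkx = L @ Lkx
    return out

x = np.array([1.0, 0.0, 0.0, 0.0])
y = manifold_filter(L, h, x)   # -> [0.6, 0.1, 0.2, 0.1]
```

Because the Laplacian has zero row sums, the filter preserves the total mass of the signal up to the zeroth tap h_0.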
arXiv Detail & Related papers (2022-10-01T21:17:39Z)
- Rewiring Networks for Graph Neural Network Training Using Discrete Geometry [0.0]
Information over-squashing is a problem that significantly impacts the training of graph neural networks (GNNs).
In this paper, we investigate the use of discrete analogues of classical geometric notions of curvature to model information flow on networks and rewire them.
We show that rewiring based on these classical notions achieves state-of-the-art GNN training accuracy on a variety of real-world network datasets.
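A minimal sketch of curvature-guided rewiring, assuming the simplest discrete notion: Forman-Ricci curvature of an edge on an unweighted graph, F(u, v) = 4 - deg(u) - deg(v). The rewiring heuristic below is a hypothetical stand-in for the schemes studied in the paper.

```python
def forman_curvature(adj, u, v):
    """Simplified Forman-Ricci curvature of edge (u, v) on an
    unweighted graph: 4 - deg(u) - deg(v). More negative values
    flag potential bottleneck edges (over-squashing)."""
    return 4 - len(adj[u]) - len(adj[v])

def rewire_most_negative(adj):
    """Toy rewiring step: locate the most negatively curved edge and
    add a supporting edge between neighbours of its endpoints."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    u, v = min(edges, key=lambda e: forman_curvature(adj, *e))
    for a in adj[u] - {v}:
        for b in adj[v] - {u}:
            if a != b and b not in adj[a]:
                adj[a].add(b)
                adj[b].add(a)
                return (a, b)
    return None

# Toy graph: two triangles joined by the bridge edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
assert forman_curvature(adj, 2, 3) == -2   # the bridge is most negative
new_edge = rewire_most_negative(adj)       # bypasses the bottleneck
```

The added edge shortcuts the bottleneck that the negatively curved bridge creates, which is the intuition behind curvature-based rewiring.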
arXiv Detail & Related papers (2022-07-16T21:50:39Z)
- Tuning the Geometry of Graph Neural Networks [0.7614628596146599]
Spatial graph convolution operators have been heralded as key to the success of Graph Neural Networks (GNNs).
We show that this aggregation operator is in fact tunable, and identify explicit regimes in which certain choices of operators -- and therefore, embedding geometries -- may be more appropriate.
arXiv Detail & Related papers (2022-07-12T23:28:03Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural network maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as is or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
- Fusing the Old with the New: Learning Relative Camera Pose with Geometry-Guided Uncertainty [91.0564497403256]
We present a novel framework that involves probabilistic fusion between the two families of predictions during network training.
Our network features a self-attention graph neural network, which drives the learning by enforcing strong interactions between different correspondences.
We propose motion parameterizations suitable for learning and show that our method achieves state-of-the-art performance on the challenging DeMoN and ScanNet datasets.
arXiv Detail & Related papers (2021-04-16T17:59:06Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to the state of the art solvers.
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
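The kernel-integral construction summarised above can be sketched in NumPy: each layer combines a pointwise linear term with a Monte-Carlo estimate of an integral operator whose matrix-valued kernel depends on pairs of node coordinates. The Gaussian-weighted random kernel below is an illustrative assumption standing in for the learned kernel network.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                        # graph nodes, channel width
coords = rng.normal(size=(n, 2))   # spatial positions of nodes
v = rng.normal(size=(n, d))        # input node features

# Hypothetical kernel: a matrix-valued function of the pair (x_i, x_j),
# here a fixed random matrix weighted by a Gaussian of the distance
# (a small learned network in the actual graph kernel network).
K = rng.normal(size=(d, d)) / np.sqrt(d)
W = rng.normal(size=(d, d)) / np.sqrt(d)

def kernel(xi, xj):
    return np.exp(-np.sum((xi - xj) ** 2)) * K

def integral_layer(coords, v):
    """One kernel-integral layer: a local linear term plus a
    Monte-Carlo estimate of the integral operator over the nodes,
    followed by a pointwise ReLU nonlinearity."""
    out = np.empty_like(v)
    for i in range(len(v)):
        agg = np.mean([kernel(coords[i], coords[j]) @ v[j]
                       for j in range(len(v))], axis=0)
        out[i] = np.maximum(0.0, W @ v[i] + agg)
    return out

v1 = integral_layer(coords, v)
```

Because the kernel depends only on coordinates, the same layer applies unchanged to graphs with any number of nodes, which is the discretization-invariance the neural operator framework targets.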
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.