DiffWire: Inductive Graph Rewiring via the Lovász Bound
- URL: http://arxiv.org/abs/2206.07369v1
- Date: Wed, 15 Jun 2022 08:22:07 GMT
- Title: DiffWire: Inductive Graph Rewiring via the Lovász Bound
- Authors: Adrián Arnaiz-Rodríguez, Ahmed Begga, Francisco Escolano, Nuria Oliver
- Abstract summary: Graph Neural Networks (GNNs) have been shown to achieve competitive results on graph-related tasks.
MPNNs have been reported to suffer from over-smoothing, over-squashing and under-reaching.
We propose DiffWire, a novel framework for graph rewiring in MPNNs that is principled, fully differentiable and parameter-free.
- Score: 1.0323063834827415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have been shown to achieve competitive
results on graph-related tasks, such as node and graph classification, link
prediction and node and graph clustering in a variety of domains. Most GNNs use
a message passing framework and hence are called MPNNs. Despite their promising
results, MPNNs have been reported to suffer from over-smoothing, over-squashing
and under-reaching. Graph rewiring and graph pooling have been proposed in the
literature as solutions to address these limitations. However, most
state-of-the-art graph rewiring methods fail to preserve the global topology of
the graph, are not differentiable (inductive) and require the tuning of
hyper-parameters. In this paper, we propose DiffWire, a novel framework for
graph rewiring in MPNNs that is principled, fully differentiable and
parameter-free by leveraging the Lovász bound. Our approach provides a
unified theory for graph rewiring by proposing two new, complementary layers in
MPNNs: first, CTLayer, a layer that learns the commute times and uses them as a
relevance function for edge re-weighting; second, GAPLayer, a layer to optimize
the spectral gap, depending on the nature of the network and the task at hand.
We empirically validate the value of our proposed approach and each of these
layers separately with benchmark datasets for graph classification. DiffWire
connects the learnability of commute times to related definitions of
curvature, opening the door to the development of more expressive MPNNs.
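For context, the Lovász bound that DiffWire builds on ties commute times to the spectral gap. One standard statement (following Lovász, 1993; notation may differ slightly from the paper's) is:

```latex
% One standard statement of the Lov\'asz bound (Lov\'asz, 1993):
\[
\left| \frac{CT_{uv}}{\mathrm{vol}(G)}
       - \left( \frac{1}{d_u} + \frac{1}{d_v} \right) \right|
\le \frac{1}{\lambda_2'} \, \frac{2}{d_{\min}}
\]
% vol(G): sum of degrees; d_u: degree of node u; d_min: minimum degree;
% lambda_2': the spectral gap. When the gap is small, the bound is loose
% and commute times carry information beyond the degrees alone.
```

Below is a minimal NumPy sketch of commute-time-based edge re-weighting using the classical closed form via the Laplacian pseudoinverse. This is illustrative only: the paper's CTLayer learns these quantities differentiably, and all function names here are made up.

```python
# Minimal NumPy sketch (illustrative, not the paper's CTLayer, which
# learns commute times differentiably): classical closed form for commute
# times via the Laplacian pseudoinverse, used to re-weight edges.
import numpy as np

def commute_times(A):
    """CT[u, v] = vol(G) * (Lp[u, u] + Lp[v, v] - 2 * Lp[u, v])."""
    d = A.sum(axis=1)
    vol = d.sum()                        # vol(G) = sum of degrees
    Lp = np.linalg.pinv(np.diag(d) - A)  # pseudoinverse of the Laplacian
    diag = np.diag(Lp)
    return vol * (diag[:, None] + diag[None, :] - 2 * Lp)

def rewire_by_commute_times(A):
    """Use (normalized) commute times as an edge relevance function."""
    CT = commute_times(A)
    return np.where(A > 0, CT / CT.max(), 0.0)

# Toy graph: two triangles joined by a bridge; the bridge is a bottleneck.
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
W = rewire_by_commute_times(A)
print(W[2, 3] > W[0, 1])  # True: the bridge edge gets the largest weight
```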
Related papers
- Improving Graph Neural Networks by Learning Continuous Edge Directions [0.0]
Graph Neural Networks (GNNs) traditionally employ a message-passing mechanism that resembles diffusion over undirected graphs.
Our key insight is to assign fuzzy edge directions to the edges of a graph so that features can preferentially flow in one direction between nodes.
We propose a general framework, called Continuous Edge Direction (CoED) GNN, for learning on graphs with fuzzy edges.
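As a rough illustration of the fuzzy-direction idea (this is a guess at a plausible parameterization, not the actual CoED architecture), each undirected edge can carry an antisymmetric score whose sigmoid splits its weight between the two directions:

```python
# Hedged sketch: message passing over "fuzzy" edge directions. The score
# matrix theta is what a CoED-style model would learn; everything here
# is illustrative.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuzzy_direction_step(A, theta, X):
    """Edge (u, v) sends a fraction sigmoid(theta[u, v]) of its weight in
    direction u -> v; with theta antisymmetric the two fractions sum to 1."""
    A_dir = A * sigmoid(theta)           # asymmetric, direction-weighted adjacency
    deg = A_dir.sum(axis=0) + 1e-9       # in-degree normalization
    return (A_dir.T @ X) / deg[:, None]  # aggregate incoming messages

# Toy usage: 3-node path graph, random antisymmetric direction scores.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
S = rng.normal(size=(3, 3))
theta = S - S.T                          # antisymmetry => fractions sum to 1
X = rng.normal(size=(3, 4))              # node features
print(fuzzy_direction_step(A, theta, X).shape)  # (3, 4)
```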
arXiv Detail & Related papers (2024-10-18T01:34:35Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
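A hedged sketch of the selection idea (a farthest-point heuristic standing in for SGGC's actual greedy criterion, which the abstract does not specify): embed nodes with Laplacian eigenvectors and pick ego-graph centers that cover the embedding space.

```python
# Hedged sketch, not the SGGC algorithm itself: select ego-graph centers
# greedily so their spectral (Laplacian eigenvector) embeddings cover
# the graph.
import numpy as np

def spectral_embedding(A, k=4):
    d = A.sum(axis=1)
    vals, vecs = np.linalg.eigh(np.diag(d) - A)
    return vecs[:, 1:k + 1]            # skip the constant eigenvector

def greedy_coreset(A, m, k=4):
    Z = spectral_embedding(A, k)
    chosen = [0]                       # seed with an arbitrary node
    dist = np.linalg.norm(Z - Z[0], axis=1)
    while len(chosen) < m:
        nxt = int(dist.argmax())       # farthest node in embedding space
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(Z - Z[nxt], axis=1))
    return chosen                      # centers of the selected ego-graphs
```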
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Transferability of Graph Neural Networks using Graphon and Sampling Theories [0.0]
Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains.
A desirable property of GNNs is transferability: a network trained on one graph can be applied to a different graph without retraining and retain its accuracy.
We contribute to the application of graphons to GNNs by presenting an explicit two-layer graphon neural network (WNN) architecture.
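A minimal sketch of what a discretized two-layer WNN can look like (the cited paper's exact architecture may differ): each layer applies the integral operator of a graphon W, (T_W f)(x) = \int_0^1 W(x, y) f(y) dy, on a grid, followed by a pointwise nonlinearity.

```python
# Hedged sketch of a two-layer graphon neural network (WNN), discretized
# on a regular grid over [0, 1].
import numpy as np

def wnn_two_layer(W_grid, X, Theta1, Theta2):
    n = W_grid.shape[0]
    H = np.tanh((W_grid @ X) / n @ Theta1)   # layer 1: T_W, then mix features
    return (W_grid @ H) / n @ Theta2         # layer 2: linear readout

# Toy usage with a stochastic-block-model graphon W(x, y).
n, f = 200, 3
grid = (np.arange(n) + 0.5) / n
W_grid = np.where((grid[:, None] < 0.5) == (grid[None, :] < 0.5), 0.8, 0.2)
X = np.random.default_rng(1).normal(size=(n, 1))
out = wnn_two_layer(W_grid, X, np.ones((1, f)), np.ones((f, 1)))
print(out.shape)  # (200, 1)
```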
arXiv Detail & Related papers (2023-07-25T02:11:41Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
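For intuition, a brute-force version of spectral rewiring by edge addition is sketched below: greedily add the non-edge that most increases the spectral gap (the second-smallest Laplacian eigenvalue). FoSR itself avoids this O(n^2) eigenvalue recomputation with a first-order approximation, so treat this only as a reference implementation of the objective.

```python
# Hedged sketch: greedy spectral-gap-maximizing edge addition.
import numpy as np
from itertools import combinations

def spectral_gap(A):
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.eigvalsh(L)[1]    # lambda_2 of the Laplacian

def add_gap_maximizing_edges(A, n_edges):
    A = A.copy()
    n = A.shape[0]
    for _ in range(n_edges):
        best, best_gap = None, spectral_gap(A)
        for u, v in combinations(range(n), 2):
            if A[u, v] == 0:
                A[u, v] = A[v, u] = 1.0      # tentatively add the edge
                g = spectral_gap(A)
                if g > best_gap:
                    best, best_gap = (u, v), g
                A[u, v] = A[v, u] = 0.0      # undo
        if best is None:
            break
        u, v = best
        A[u, v] = A[v, u] = 1.0              # commit the best edge
    return A
```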
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that equips homogeneous GNNs with the ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
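A hedged sketch of the "one parameter per relation" idea (the exact RE-GNN formulation may differ): a homogeneous GCN-style step where each relation contributes through a single learnable scalar, plus one scalar for self-loops.

```python
# Hedged sketch: per-relation scalar weights in an otherwise homogeneous
# mean-aggregation step.
import numpy as np

def re_gnn_step(adjs, w, w_self, X):
    """adjs: list of (n, n) adjacency matrices, one per relation.
    w: (num_relations,) scalars; w_self: scalar for self-loops."""
    out = w_self * X                         # self-loop contribution
    for A_r, w_r in zip(adjs, w):
        deg = A_r.sum(axis=1, keepdims=True) + 1e-9
        out = out + w_r * (A_r @ X) / deg    # relation-weighted mean aggregation
    return out
```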
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- KGNN: Harnessing Kernel-based Networks for Semi-supervised Graph Classification [13.419578861488226]
We propose a Kernel-based Graph Neural Network (KGNN) for semi-supervised graph classification.
We show that KGNN outperforms competitive baselines.
arXiv Detail & Related papers (2022-05-21T10:03:46Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
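Both constructions are standard; a minimal sketch of each (illustrative parameter choices):

```python
# KNN graph: keep each point's K closest neighbors.
# FC graph: link every pair, typically with distance-decaying weights.
import numpy as np

def knn_graph(P, k):
    D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    A = np.zeros_like(D)
    idx = np.argsort(D, axis=1)[:, :k]       # k nearest per node
    rows = np.repeat(np.arange(len(P)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)                # symmetrize

def fc_graph(P, sigma=1.0):
    D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
    A = np.exp(-D**2 / (2 * sigma**2))       # Gaussian edge weights
    np.fill_diagonal(A, 0.0)
    return A
```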
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Scalable Graph Neural Networks for Heterogeneous Graphs [12.44278942365518]
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data.
Recent work has argued that GNNs primarily use the graph for feature smoothing, and has shown competitive results on benchmark tasks.
In this work, we ask whether these results can be extended to heterogeneous graphs, which encode multiple types of relationship between different entities.
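To make the "graph as feature smoother" view concrete, here is a minimal sketch in the spirit of SGC-style simplifications (not this paper's method): precompute k rounds of normalized neighbor averaging and fit any plain classifier on the smoothed features.

```python
# Hedged sketch: k hops of precomputed feature smoothing, with no learned
# message passing involved.
import numpy as np

def smooth_features(A, X, k=2):
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / d[:, None]              # row-normalized adjacency
    for _ in range(k):
        X = A_norm @ X                       # one hop of smoothing
    return X
```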
arXiv Detail & Related papers (2020-11-19T06:03:35Z)
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
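A bank of graph convolutional filters has a compact standard form: each filter is a polynomial in a graph shift operator S (e.g., the adjacency or Laplacian), y = sum_k h[k] S^k x. A minimal sketch:

```python
# Standard polynomial graph filter and a bank of such filters; the layer
# details of the surveyed architectures vary, this is only the common core.
import numpy as np

def graph_filter(S, h, x):
    y, Skx = np.zeros_like(x), x.copy()
    for hk in h:
        y = y + hk * Skx                     # accumulate h[k] * S^k x
        Skx = S @ Skx                        # advance to the next power
    return y

def filter_bank(S, H, x):
    """H: (num_filters, K+1) tap matrix; returns one output per filter."""
    return np.stack([graph_filter(S, h, x) for h in H], axis=-1)
```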
arXiv Detail & Related papers (2020-08-04T18:57:36Z)