On the Ability of Graph Neural Networks to Model Interactions Between
Vertices
- URL: http://arxiv.org/abs/2211.16494v5
- Date: Mon, 23 Oct 2023 12:02:34 GMT
- Title: On the Ability of Graph Neural Networks to Model Interactions Between
Vertices
- Authors: Noam Razin, Tom Verbin, Nadav Cohen
- Abstract summary: Graph neural networks (GNNs) are widely used for modeling complex interactions between entities represented as vertices of a graph.
Despite recent efforts to theoretically analyze the expressive power of GNNs, a formal characterization of their ability to model interactions is lacking.
- Score: 14.909298522361306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are widely used for modeling complex
interactions between entities represented as vertices of a graph. Despite
recent efforts to theoretically analyze the expressive power of GNNs, a formal
characterization of their ability to model interactions is lacking. The current
paper aims to address this gap. Formalizing strength of interactions through an
established measure known as separation rank, we quantify the ability of
certain GNNs to model interaction between a given subset of vertices and its
complement, i.e. between the sides of a given partition of input vertices. Our
results reveal that the ability to model interaction is primarily determined by
the partition's walk index -- a graph-theoretical characteristic defined by the
number of walks originating from the boundary of the partition. Experiments
with common GNN architectures corroborate this finding. As a practical
application of our theory, we design an edge sparsification algorithm named
Walk Index Sparsification (WIS), which preserves the ability of a GNN to model
interactions when input edges are removed. WIS is simple, computationally
efficient, and in our experiments has markedly outperformed alternative methods
in terms of induced prediction accuracy. More broadly, it showcases the
potential of improving GNNs by theoretically analyzing the interactions they
can model.
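The walk index described above can be illustrated concretely. The following is a minimal sketch, assuming `networkx` is available: it counts the walks of a given length that originate from a partition's boundary (vertices incident to an edge crossing between the subset and its complement), using powers of the adjacency matrix. The helper name `walk_index` and the fixed walk length are illustrative choices, not the paper's exact definition, which ties walk length to GNN depth.

```python
import numpy as np
import networkx as nx

def walk_index(graph, subset, walk_length):
    """Count walks of length `walk_length` originating from the boundary
    of the partition (subset, complement) -- an illustrative version of
    the partition's walk index."""
    subset = set(subset)
    # Boundary: vertices touching at least one edge crossing the partition.
    boundary = set()
    for u, v in graph.edges():
        if (u in subset) != (v in subset):
            boundary.update((u, v))
    nodes = list(graph.nodes())
    pos = {v: i for i, v in enumerate(nodes)}
    A = nx.to_numpy_array(graph, nodelist=nodes)
    # (A^L @ 1)[v] counts the length-L walks starting at vertex v.
    walks_from = np.linalg.matrix_power(A, walk_length) @ np.ones(len(nodes))
    return float(sum(walks_from[pos[v]] for v in boundary))
```

Under this sketch, an edge sparsifier in the spirit of WIS would greedily remove the edge whose removal least reduces walk indices across the partitions of interest; the exact selection rule in the paper differs in detail.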
Related papers
- GraphGI: A GNN Explanation Method using Game Interaction [5.149896909638598]
Graph Neural Networks (GNNs) have garnered significant attention and have been extensively utilized across various domains.
Current graph explanation techniques focus on identifying key nodes or edges, attributing the critical data features that drive model predictions.
We propose a novel explanatory method GraphGI, which identifies the coalition with the highest interaction strength and presents it as an explanatory subgraph.
arXiv Detail & Related papers (2024-09-24T03:24:31Z)
- Learning Coarse-Grained Dynamics on Graph [4.692217705215042]
We consider a Graph Neural Network (GNN) non-Markovian modeling framework to identify coarse-grained dynamical systems on graphs.
Our main idea is to systematically determine the GNN architecture by inspecting how the leading term of the Mori-Zwanzig memory term depends on the coarse-grained interaction coefficients that encode the graph topology.
arXiv Detail & Related papers (2024-05-15T13:25:34Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that equips homogeneous GNNs with the ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Edge-Level Explanations for Graph Neural Networks by Extending Explainability Methods for Convolutional Neural Networks [33.20913249848369]
Graph Neural Networks (GNNs) are deep learning models that take graph data as inputs, and they are applied to various tasks such as traffic prediction and molecular property prediction.
We extend explainability methods for CNNs, such as Local Interpretable Model-Agnostic Explanations (LIME), Gradient-Based Saliency Maps, and Gradient-Weighted Class Activation Mapping (Grad-CAM) to GNNs.
The experimental results indicate that the LIME-based approach is the most efficient explainability method for multiple tasks in real-world settings, outperforming even state-of-the-art methods.
arXiv Detail & Related papers (2021-11-01T06:27:29Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Permutation-equivariant and Proximity-aware Graph Neural Networks with Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment the existing GNNs with node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.