Improving Expressivity of Graph Neural Networks
- URL: http://arxiv.org/abs/2004.05994v1
- Date: Wed, 8 Apr 2020 17:24:58 GMT
- Title: Improving Expressivity of Graph Neural Networks
- Authors: Stanisław Purgał
- Abstract summary: We propose a Graph Neural Network with greater expressive power than commonly used GNNs.
We use a graph attention network with an expanding attention window that aggregates information from nodes exponentially far away.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a Graph Neural Network with greater expressive power than commonly
used GNNs, one not constrained to differentiating only between graphs that the
Weisfeiler-Lehman test recognizes as non-isomorphic. We use a graph
attention network with an expanding attention window that aggregates information
from nodes exponentially far away. We also use partially random initial
embeddings, allowing differentiation between nodes that would otherwise look
the same. This could cause problems with a traditional dropout mechanism,
so we instead use a "head dropout", randomly ignoring some attention heads
rather than some dimensions of the embedding.
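Two of the mechanisms above are concrete enough to sketch. Below is a minimal, hypothetical PyTorch sketch of partially random initial embeddings and head dropout; the class names, tensor shapes, and per-(node, head) drop granularity are all assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class PartiallyRandomEmbedding(nn.Module):
    """Concatenate a fresh random vector to each node's features so that
    otherwise-identical nodes become distinguishable. (Assumption: this is
    one plausible reading of "partially random initial embeddings".)"""

    def __init__(self, rand_dim: int):
        super().__init__()
        self.rand_dim = rand_dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        noise = torch.randn(x.size(0), self.rand_dim, device=x.device)
        return torch.cat([x, noise], dim=-1)


class HeadDropout(nn.Module):
    """Randomly zero out whole attention heads rather than individual
    embedding dimensions. Expects input of shape
    (num_nodes, num_heads, head_dim)."""

    def __init__(self, p: float = 0.1):
        super().__init__()
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.p == 0.0:
            return x
        # One keep/drop decision per (node, head); surviving heads are
        # rescaled so the expected magnitude matches evaluation mode.
        keep = (torch.rand(x.size(0), x.size(1), 1, device=x.device) > self.p).float()
        return x * keep / (1.0 - self.p)
```

Dropping whole heads rather than single dimensions keeps each surviving head's embedding intact, which matters here because individual dimensions may carry the random-initialization signal that makes nodes distinguishable.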
Related papers
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods (a minimal sketch of the idea appears after this list).
arXiv Detail & Related papers (2023-12-16T14:09:23Z) - Uplifting the Expressive Power of Graph Neural Networks through Graph Partitioning [3.236774847052122]
We study the expressive power of graph neural networks through the lens of graph partitioning.
We introduce a novel GNN architecture, namely Graph Partitioning Neural Networks (GPNNs).
arXiv Detail & Related papers (2023-12-14T06:08:35Z) - Self-attention Dual Embedding for Graphs with Heterophily [6.803108335002346]
A number of real-world graphs are heterophilic, and this leads to much lower classification accuracy using standard GNNs.
We design a novel GNN which is effective for both heterophilic and homophilic graphs.
We evaluate our algorithm on real-world graphs containing thousands to millions of nodes and show that we achieve state-of-the-art results.
arXiv Detail & Related papers (2023-05-28T09:38:28Z) - Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z) - DiP-GNN: Discriminative Pre-Training of Graph Neural Networks [49.19824331568713]
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs.
One popular pre-training method is to mask out a proportion of the edges, and a GNN is trained to recover them.
In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges (a sketch of the mask-and-recover objective appears after this list).
arXiv Detail & Related papers (2022-09-15T17:41:50Z) - Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z) - Graph Attention Retrospective [14.52271219759284]
Graph-based learning is a rapidly growing sub-field of machine learning with applications in social networks, citation networks, and bioinformatics.
In this paper, we theoretically study the behaviour of graph attention networks.
We show that in an "easy" regime, where the distance between the means of the Gaussians generating the node features is large enough, graph attention is able to distinguish inter-class from intra-class edges.
In the "hard" regime, we show that every attention mechanism fails to distinguish intra-class from inter-class edges.
arXiv Detail & Related papers (2022-02-26T04:58:36Z) - Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as is or simply make them undirected greatly affects the performance of the GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z) - Graph Decoupling Attention Markov Networks for Semi-supervised Graph Node Classification [38.52231889960877]
Graph neural networks (GNNs) have become ubiquitous in graph learning tasks such as node classification.
In this paper, we consider the label dependency of graph nodes and propose a decoupling attention mechanism to learn both hard and soft attention.
arXiv Detail & Related papers (2021-04-28T11:44:13Z) - Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of techniques for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Natural Graph Networks [80.77570956520482]
We show that the more general concept of naturality is sufficient for a graph network to be well-defined.
We define global and local natural graph networks, the latter of which are as scalable as conventional message passing graph neural networks.
arXiv Detail & Related papers (2020-07-16T14:19:06Z)
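As referenced above, the degree-based stratification entry describes learning a separate weight matrix for the nodes in each degree group. A minimal, hypothetical PyTorch sketch of that idea follows; the bucket boundaries, the linear-layer form, and all names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn


class DegreeStratifiedLinear(nn.Module):
    """Sketch of degree-based stratification: nodes are bucketed by degree
    and each bucket gets its own weight matrix. Boundaries are assumptions."""

    def __init__(self, in_dim: int, out_dim: int, degree_bounds=(2, 8, 32)):
        super().__init__()
        self.register_buffer("bounds", torch.tensor(degree_bounds))
        self.linears = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(len(degree_bounds) + 1)
        )

    def forward(self, x: torch.Tensor, degree: torch.Tensor) -> torch.Tensor:
        # Bucket 0: degree < 2, bucket 1: degree < 8, ..., last: degree >= 32.
        bucket = torch.bucketize(degree, self.bounds)
        out = x.new_zeros(x.size(0), self.linears[0].out_features)
        for b, lin in enumerate(self.linears):
            mask = bucket == b
            if mask.any():
                out[mask] = lin(x[mask])
        return out
```

Sharing nothing across buckets is the simplest reading of "learned separately"; a real implementation might instead share a base matrix plus per-group offsets.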
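Likewise, the DiP-GNN entry describes masking a proportion of edges and training a GNN to recover them. The sketch below shows only a generic mask-and-recover pre-training step under an assumed `encoder(x, edge_index)` interface; it is not DiP-GNN's actual generator/discriminator training loop.

```python
import torch
import torch.nn.functional as F


def edge_mask_pretrain_step(encoder, x, edge_index, mask_ratio=0.15):
    """One mask-and-recover step: hide a proportion of edges, encode the
    visible graph, and score the hidden edges against random non-edges.
    edge_index has shape (2, num_edges); the decoder is an assumption."""
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges)
    n_masked = int(mask_ratio * num_edges)
    masked = edge_index[:, perm[:n_masked]]   # edges the model must recover
    visible = edge_index[:, perm[n_masked:]]  # edges the encoder may see

    h = encoder(x, visible)  # embeddings from the partially observed graph

    # Dot-product scores for positive (masked) and random negative pairs.
    neg = torch.randint(0, x.size(0), masked.shape)
    pos_score = (h[masked[0]] * h[masked[1]]).sum(-1)
    neg_score = (h[neg[0]] * h[neg[1]]).sum(-1)
    scores = torch.cat([pos_score, neg_score])
    labels = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
    return F.binary_cross_entropy_with_logits(scores, labels)
```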
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.