Degree-based stratification of nodes in Graph Neural Networks
- URL: http://arxiv.org/abs/2312.10458v1
- Date: Sat, 16 Dec 2023 14:09:23 GMT
- Title: Degree-based stratification of nodes in Graph Neural Networks
- Authors: Ameen Ali, Hakan Cevikalp, Lior Wolf
- Abstract summary: We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
- Score: 66.17149106033126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite much research, Graph Neural Networks (GNNs) still do not display the
favorable scaling properties of other deep neural networks such as
Convolutional Neural Networks and Transformers. Previous work has identified
issues such as oversmoothing of the latent representation and has suggested
solutions such as skip connections and sophisticated normalization schemes.
Here, we propose a different approach that is based on a stratification of the
graph nodes. We provide motivation that the nodes in a graph can be stratified
into those with a low degree and those with a high degree and that the two
groups are likely to behave differently. Based on this motivation, we modify
the Graph Neural Network (GNN) architecture so that the weight matrices are
learned, separately, for the nodes in each group. This simple-to-implement
modification seems to improve performance across datasets and GNN methods. To
verify that this increase in performance is not only due to the added capacity,
we also perform the same modification for random splits of the nodes, which
does not lead to any improvement.
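As a rough illustration of the proposed modification, here is a minimal sketch of a degree-stratified GCN layer in PyTorch Geometric. The class name, the fixed degree threshold, and the use of two `GCNConv` modules are illustrative assumptions, not the authors' reference implementation:

```python
import torch
from torch_geometric.nn import GCNConv
from torch_geometric.utils import degree


class StratifiedGCNLayer(torch.nn.Module):
    """Sketch: separate weight matrices for low- and high-degree nodes."""

    def __init__(self, in_dim, out_dim, degree_threshold=5):
        super().__init__()
        self.conv_low = GCNConv(in_dim, out_dim)   # weights for the low-degree group
        self.conv_high = GCNConv(in_dim, out_dim)  # weights for the high-degree group
        self.degree_threshold = degree_threshold   # assumed stratification rule

    def forward(self, x, edge_index):
        # Stratify nodes into two groups by their degree.
        deg = degree(edge_index[1], num_nodes=x.size(0))
        high_mask = (deg >= self.degree_threshold).unsqueeze(-1)

        # Apply both parameterizations, then select each node's output
        # according to the group it belongs to.
        out_low = self.conv_low(x, edge_index)
        out_high = self.conv_high(x, edge_index)
        return torch.where(high_mask, out_high, out_low)
```

Computing both convolutions densely and masking keeps the sketch short; an efficiency-minded implementation might instead apply each weight matrix only to the nodes of its stratum.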
Related papers
- Transfer Entropy in Graph Convolutional Neural Networks [0.0]
Graph Convolutional Networks (GCN) are Graph Neural Networks where the convolutions are applied over a graph.
In this study, we address two important challenges related to GCNs, the first of which is oversmoothing: the degradation of the discriminative capacity of nodes as a result of repeated aggregations.
We propose a new strategy for addressing these challenges in GCNs based on Transfer Entropy (TE), which measures the amount of directed transfer of information between two time-varying nodes.
arXiv Detail & Related papers (2024-06-08T20:09:17Z)
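For context, the standard (Schreiber) form of transfer entropy from a source series X to a target series Y is shown below; the paper adapts this quantity to pairs of time-varying graph nodes, and its exact conditioning may differ from this textbook form.

```latex
T_{X \to Y} \;=\; \sum p\left(y_{t+1}, y_t, x_t\right)\,
\log \frac{p\left(y_{t+1} \mid y_t, x_t\right)}{p\left(y_{t+1} \mid y_t\right)}
```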
- Collaborative Graph Neural Networks for Attributed Network Embedding [63.39495932900291]
Graph neural networks (GNNs) have shown prominent performance on attributed network embedding.
We propose COllaborative graph Neural Networks--CONN, a tailored GNN architecture for network embedding.
arXiv Detail & Related papers (2023-07-22T04:52:27Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as directed or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
- SEA: Graph Shell Attention in Graph Neural Networks [8.565134944225491]
A common issue in Graph Neural Networks (GNNs) is known as over-smoothing.
In our work, we relax the GNN architecture by introducing a routing mechanism: nodes' representations are routed to dedicated experts.
We call this procedure Graph Shell Attention (SEA), where experts process different subgraphs in a transformer-motivated fashion.
arXiv Detail & Related papers (2021-10-20T17:32:08Z)
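The routing step described above can be sketched as a simple top-1 mixture-of-experts over node representations; the router, expert count, and linear experts here are illustrative assumptions and not the paper's exact Graph Shell Attention design.

```python
import torch


class Top1NodeRouting(torch.nn.Module):
    """Sketch: route each node's representation to one dedicated expert."""

    def __init__(self, dim, num_experts=4):
        super().__init__()
        self.router = torch.nn.Linear(dim, num_experts)  # one score per expert
        self.experts = torch.nn.ModuleList(
            [torch.nn.Linear(dim, dim) for _ in range(num_experts)]
        )

    def forward(self, h):
        # Pick the highest-scoring expert for every node.
        expert_idx = self.router(h).argmax(dim=-1)
        out = torch.zeros_like(h)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            out[mask] = expert(h[mask])
        return out
```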
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
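A minimal sketch of injecting node positional encodings at the input layer, Transformer-style, is given below; using Laplacian eigenvectors as the initial PE is one common choice, and the projection layers and dimension names are assumptions for illustration.

```python
import torch


class PEInputLayer(torch.nn.Module):
    """Sketch: embed node features and positional encodings, then sum."""

    def __init__(self, feat_dim, pe_dim, hidden_dim):
        super().__init__()
        self.feat_proj = torch.nn.Linear(feat_dim, hidden_dim)
        self.pe_proj = torch.nn.Linear(pe_dim, hidden_dim)  # makes the PE learnable

    def forward(self, x, pe):
        # `pe` could hold, e.g., the k smallest non-trivial Laplacian
        # eigenvectors of the graph, one row per node.
        return self.feat_proj(x) + self.pe_proj(pe)
```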
- Position-based Hash Embeddings For Scaling Graph Neural Networks [8.87527266373087]
Graph Neural Networks (GNNs) compute node representations by taking into account the topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer to compute node embeddings and use them as input features.
To reduce the memory associated with this embedding layer, hashing-based approaches, commonly used in applications like NLP and recommender systems, can potentially be used.
We present approaches that take advantage of the nodes' position in the graph to dramatically reduce the memory required.
arXiv Detail & Related papers (2021-08-31T22:42:25Z)
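The basic hashing trick alluded to above can be sketched as follows; the position-based refinement that is the paper's actual contribution is not shown, and the bucket count and modulo hashing are illustrative assumptions.

```python
import torch


class HashEmbedding(torch.nn.Module):
    """Sketch: map many node IDs onto a small shared embedding table."""

    def __init__(self, num_buckets, dim):
        super().__init__()
        # Far fewer rows than nodes -- this is where the memory saving comes from.
        self.table = torch.nn.Embedding(num_buckets, dim)
        self.num_buckets = num_buckets

    def forward(self, node_ids):
        # Collisions are accepted: several nodes may share one row.
        return self.table(node_ids % self.num_buckets)
```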
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
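The unifying view rests on a graph signal denoising objective of roughly the following form, where X is the noisy input signal, L the graph Laplacian, and c a smoothness weight; the notation here is the standard graph-signal-processing one and may differ in detail from the paper's.

```latex
\min_{F} \;\; \lVert F - X \rVert_F^2 \;+\; c\,\operatorname{tr}\left(F^{\top} L F\right)
```

A single gradient step on this objective yields a neighborhood-averaging update of the kind performed by common GNN aggregation schemes.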
- Graph Neural Networks: Architectures, Stability and Transferability [176.3960927323358]
Graph Neural Networks (GNNs) are information processing architectures for signals supported on graphs.
They are generalizations of convolutional neural networks (CNNs) in which individual layers contain banks of graph convolutional filters.
arXiv Detail & Related papers (2020-08-04T18:57:36Z)
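In this line of work, a graph convolutional filter is a polynomial in a graph shift operator S (for example, the adjacency matrix or the Laplacian) applied to a graph signal x, and a layer stacks banks of such filters followed by a pointwise nonlinearity. A standard form, which may differ in details from the paper's notation:

```latex
z \;=\; \sum_{k=0}^{K} h_k\, S^{k} x
```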
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z)