The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive Field
- URL: http://arxiv.org/abs/2308.10051v1
- Date: Sat, 19 Aug 2023 15:21:12 GMT
- Title: The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive Field
- Authors: Kun Wang, Guohao Li, Shilong Wang, Guibin Zhang, Kai Wang, Yang You,
Xiaojiang Peng, Yuxuan Liang, Yang Wang
- Abstract summary: We introduce the Snowflake Hypothesis -- a novel paradigm underpinning the concept of "one node, one receptive field".
We employ two simple signals, the gradient and the node-level cosine distance, as guiding principles to regulate the aggregation depth for each node.
The observational results demonstrate that our hypothesis can serve as a universal operator for a range of tasks.
- Score: 39.679151680622375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite Graph Neural Networks demonstrating considerable promise in graph
representation learning tasks, GNNs predominantly face significant issues with
over-fitting and over-smoothing as they go deeper, much like deep models in the
computer vision realm. In this work, we conduct a systematic study of deeper GNN research
trajectories. Our findings indicate that the current success of deep GNNs
primarily stems from (I) the adoption of innovations from CNNs, such as
residual/skip connections, or (II) tailor-made aggregation algorithms like
DropEdge. However, these algorithms often lack intrinsic interpretability and
indiscriminately treat all nodes within a given layer in a similar manner,
thereby failing to capture the nuanced differences among various nodes. To this
end, we introduce the Snowflake Hypothesis -- a novel paradigm underpinning the
concept of "one node, one receptive field". The hypothesis draws inspiration
from the unique and individualistic patterns of each snowflake, proposing a
corresponding uniqueness in the receptive fields of nodes in the GNNs.
We employ two simple signals, the gradient and the node-level cosine distance, as
guiding principles to regulate the aggregation depth for each node, and conduct
comprehensive experiments covering: (1) different training schemes; (2) various
shallow and deep GNN backbones; (3) various numbers of layers (8, 16, 32, 64) on
multiple benchmarks (six graphs, including dense graphs with millions of nodes);
and (4) comparisons with different aggregation strategies. The observational
results demonstrate that our hypothesis can serve as a universal operator for a
range of tasks and displays tremendous potential on deep GNNs. It can be applied
to various GNN frameworks, enhancing their effectiveness at depth and guiding the
selection of the optimal network depth in an explainable and generalizable way.
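A minimal PyTorch sketch of the cosine-distance criterion described above: a node is frozen once the cosine distance between its consecutive layer representations falls below a threshold, so deeper layers stop enlarging its receptive field. The class name `SnowflakeGCN` and the threshold `tau` are illustrative assumptions, and the paper's complementary gradient-based signal is not modeled here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SnowflakeGCN(nn.Module):
    """Sketch: freeze each node once the cosine distance between its
    consecutive layer representations falls below tau."""

    def __init__(self, in_dim, hid_dim, num_layers=32, tau=0.02):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, hid_dim)
        self.layers = nn.ModuleList(
            [nn.Linear(hid_dim, hid_dim) for _ in range(num_layers)]
        )
        self.tau = tau  # assumed per-node stopping threshold

    def forward(self, x, adj_norm):
        # adj_norm: normalized adjacency of shape (N, N); sparse also works.
        h = F.relu(self.input_proj(x))
        active = torch.ones(h.size(0), dtype=torch.bool, device=h.device)
        for layer in self.layers:
            h_new = F.relu(layer(adj_norm @ h))
            # Per-node cosine distance between consecutive representations.
            dist = 1.0 - F.cosine_similarity(h, h_new, dim=-1)
            # Once a node stabilizes, it stops aggregating for good.
            active = active & (dist > self.tau)
            h = torch.where(active.unsqueeze(-1), h_new, h)
        return h
```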
Related papers
- Learning Personalized Scoping for Graph Neural Networks under Heterophily [3.475704621679017]
Heterophilous graphs, where dissimilar nodes tend to connect, pose a challenge for graph neural networks (GNNs).
We formalize personalized scoping as a separate scope classification problem that overcomes GNN overfitting in node classification.
We propose Adaptive Scope (AS), a lightweight approach that operates only during GNN inference.
arXiv Detail & Related papers (2024-09-11T04:13:39Z)
- The Heterophilic Snowflake Hypothesis: Training and Empowering GNNs for Heterophilic Graphs [59.03660013787925]
We introduce the Heterophily Snowflake Hypothesis and provide an effective solution to guide and facilitate research on heterophilic graphs.
Our observations show that our framework acts as a versatile operator for diverse tasks.
It can be integrated into various GNN frameworks, boosting performance in-depth and offering an explainable approach to choosing the optimal network depth.
arXiv Detail & Related papers (2024-06-18T12:16:00Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
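A hypothetical sketch of the stratified design just described: nodes are bucketed by degree and each bucket gets its own weight matrix within a layer. The layer name and bucket boundaries are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DegreeStratifiedLayer(nn.Module):
    """One GNN layer with a separate weight matrix per degree bucket."""

    def __init__(self, in_dim, out_dim, boundaries=(2, 8, 32)):
        super().__init__()
        # boundaries split nodes into len(boundaries) + 1 degree groups.
        self.register_buffer("boundaries", torch.tensor(boundaries))
        self.weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(len(boundaries) + 1)]
        )

    def forward(self, h, adj_norm, degrees):
        # degrees: LongTensor of node degrees, shape (N,).
        agg = adj_norm @ h  # shared neighbor aggregation
        group = torch.bucketize(degrees, self.boundaries)
        out = agg.new_zeros(h.size(0), self.weights[0].out_features)
        for g, lin in enumerate(self.weights):
            mask = group == g  # nodes falling in this degree bucket
            if mask.any():
                out[mask] = lin(agg[mask])
        return torch.relu(out)
```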
- How Expressive are Graph Neural Networks in Recommendation? [17.31401354442106]
Graph Neural Networks (GNNs) have demonstrated superior performance on various graph learning tasks, including recommendation.
Recent research has explored the expressiveness of GNNs in general, demonstrating that message passing GNNs are at most as powerful as the Weisfeiler-Lehman test.
We propose the topological closeness metric to evaluate GNNs' ability to capture the structural distance between nodes.
arXiv Detail & Related papers (2023-08-22T02:17:34Z)
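The summary does not define the metric, so the following is only a generic proxy in its spirit, not the paper's formula: rank-correlate embedding distances against shortest-path distances over sampled node pairs.

```python
# Generic proxy (not the paper's exact metric): does embedding distance
# preserve shortest-path distance over randomly sampled node pairs?
import random
import networkx as nx
import numpy as np
from scipy.stats import spearmanr

def topological_closeness_proxy(G, emb, num_pairs=1000, seed=0):
    """G: networkx graph; emb: dict node -> np.ndarray embedding.
    Returns the Spearman correlation between graph distance and
    embedding distance over randomly sampled reachable pairs."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    graph_d, emb_d = [], []
    for _ in range(20 * num_pairs):  # bounded sampling attempts
        if len(graph_d) >= num_pairs:
            break
        u, v = rng.sample(nodes, 2)
        if nx.has_path(G, u, v):
            graph_d.append(nx.shortest_path_length(G, u, v))
            emb_d.append(float(np.linalg.norm(emb[u] - emb[v])))
    corr, _ = spearmanr(graph_d, emb_d)
    return corr
```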
- On Generalized Degree Fairness in Graph Neural Networks [18.110053023118294]
We propose a novel GNN framework called Generalized Degree Fairness-centric Graph Neural Network (Deg-FairGNN).
Specifically, in each GNN layer, we employ a learnable debiasing function to generate debiasing contexts.
Extensive experiments on three benchmark datasets demonstrate the effectiveness of our model on both accuracy and fairness metrics.
arXiv Detail & Related papers (2023-02-08T05:00:37Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
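A minimal illustration of the DE idea: augment each node's input features with its shortest-path distance to a target node set, computed here with a plain multi-source BFS. The helper name and distance cap are assumptions; the paper also covers random-walk-based distance measures.

```python
# Sketch of Distance Encoding (DE): append each node's shortest-path
# distance to a target node set as an extra structural feature.
from collections import deque

def distance_encoding(adj, target_set, max_dist=10):
    """adj: dict node -> iterable of neighbors. Returns dict node ->
    shortest-path distance to the nearest target node, capped at max_dist."""
    dist = {u: max_dist for u in adj}
    queue = deque()
    for t in target_set:  # multi-source BFS from the target set
        dist[t] = 0
        queue.append(t)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] > dist[u] + 1:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Usage: concatenate dist[u] (or its one-hot encoding) onto node u's
# input features before running the GNN.
```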
- Policy-GNN: Aggregation Optimization for Graph Neural Networks [60.50932472042379]
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs into a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
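A toy rendering of the per-node depth idea, heavily simplified (Policy-GNN itself trains its meta-policy with reinforcement learning): a small scorer picks a depth for each node, and every node keeps the representation from its assigned depth.

```python
import torch
import torch.nn as nn

class DepthPolicy(nn.Module):
    """Scores candidate depths 1..max_depth from raw node features."""

    def __init__(self, in_dim, max_depth=4):
        super().__init__()
        self.scorer = nn.Linear(in_dim, max_depth)

    def forward(self, x):
        # Greedy choice here; Policy-GNN learns this with a DQN instead.
        return self.scorer(x).argmax(dim=-1) + 1  # (N,) depths in 1..K

def forward_with_depths(x, adj_norm, layers, depths):
    """layers[0]: nn.Linear(in_dim, hid); layers[1:]: nn.Linear(hid, hid).
    Each node keeps the representation from its assigned depth."""
    h = torch.relu(layers[0](adj_norm @ x))
    out = h.clone()  # depth-1 representation by default
    for k in range(1, len(layers)):
        h = torch.relu(layers[k](adj_norm @ h))
        take = depths == (k + 1)  # nodes whose chosen depth is k + 1
        out[take] = h[take]
    return out
```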
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
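A hedged sketch of the DGN idea as described: nodes are softly assigned to groups, each group's embeddings are normalized separately, and the normalized signal is added back to combat over-smoothing. The group count and scale `lam` are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffGroupNorm(nn.Module):
    """Sketch of differentiable group normalization: soft group
    assignment, per-group normalization, residual combination."""

    def __init__(self, dim, num_groups=8, lam=0.01):
        super().__init__()
        self.assign = nn.Linear(dim, num_groups)  # soft group assignment
        self.norms = nn.ModuleList(
            [nn.BatchNorm1d(dim) for _ in range(num_groups)]
        )
        self.lam = lam

    def forward(self, h):
        s = F.softmax(self.assign(h), dim=-1)  # (N, G) soft memberships
        out = h
        for g, bn in enumerate(self.norms):
            # Normalize the group-weighted embeddings, then add back.
            out = out + self.lam * s[:, g:g + 1] * bn(s[:, g:g + 1] * h)
        return out
```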
This list is automatically generated from the titles and abstracts of the papers on this site.