Feature Overcorrelation in Deep Graph Neural Networks: A New Perspective
- URL: http://arxiv.org/abs/2206.07743v1
- Date: Wed, 15 Jun 2022 18:13:52 GMT
- Title: Feature Overcorrelation in Deep Graph Neural Networks: A New Perspective
- Authors: Wei Jin, Xiaorui Liu, Yao Ma, Charu Aggarwal, Jiliang Tang
- Abstract summary: Oversmoothing has been identified as one of the key issues which limit the performance of deep GNNs.
We propose a new perspective to look at the performance degradation of deep GNNs, i.e., feature overcorrelation.
To reduce the feature correlation, we propose a general framework DeCorr which can encourage GNNs to encode less redundant information.
- Score: 44.96635754139024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed remarkable success achieved by graph neural
networks (GNNs) in many real-world applications such as recommendation and drug
discovery. Despite the success, oversmoothing has been identified as one of the
key issues which limit the performance of deep GNNs. It indicates that the
learned node representations are highly indistinguishable due to the stacked
aggregators. In this paper, we propose a new perspective to look at the
performance degradation of deep GNNs, i.e., feature overcorrelation. Through
empirical and theoretical study on this matter, we demonstrate the existence of
feature overcorrelation in deeper GNNs and reveal potential reasons leading to
this issue. To reduce the feature correlation, we propose a general framework
DeCorr which can encourage GNNs to encode less redundant information. Extensive
experiments have demonstrated that DeCorr can help enable deeper GNNs and is
complementary to existing techniques tackling the oversmoothing issue.
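To make the core idea concrete, below is a minimal sketch of one possible feature-decorrelation regularizer, assuming a correlation-matrix penalty on hidden node features combined additively with the task loss; the function name, the standardization step, and the weighting are illustrative assumptions, not the actual DeCorr objective, which the summary above does not spell out.

```python
import torch

def feature_correlation_penalty(h: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Hypothetical decorrelation regularizer: penalize off-diagonal entries of
    the correlation matrix of hidden node features h (num_nodes x dim).

    This is an illustrative stand-in for the kind of loss a decorrelation
    framework could use to discourage redundant (highly correlated) feature
    dimensions; it is not taken from the DeCorr paper.
    """
    n, d = h.shape
    # Standardize each feature dimension (zero mean, unit variance).
    h = h - h.mean(dim=0, keepdim=True)
    h = h / (h.std(dim=0, keepdim=True) + eps)
    # Correlation matrix between feature dimensions (d x d).
    corr = (h.T @ h) / n
    # Average squared off-diagonal correlation.
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum() / (d * (d - 1))

# Assumed usage inside a training loop: add the penalty to the task loss
# with a tunable weight, e.g.
#   loss = task_loss + decorr_weight * feature_correlation_penalty(hidden_feats)
```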
Related papers
- When Graph Neural Network Meets Causality: Opportunities, Methodologies and An Outlook [23.45046265345568]
Graph Neural Networks (GNNs) have emerged as powerful representation learning tools for capturing complex dependencies within diverse graph-structured data.
However, GNNs have raised serious concerns regarding their trustworthiness, including susceptibility to distribution shift, biases towards certain populations, and a lack of explainability.
Integrating causal learning techniques into GNNs has sparked numerous ground-breaking studies, since many of these trustworthiness issues can be alleviated.
arXiv Detail & Related papers (2023-12-19T13:26:14Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Bregman Graph Neural Network [27.64062763929748]
In node classification tasks, the smoothing effect induced by GNNs tends to assimilate representations and over-homogenize labels of connected nodes.
We propose a novel bilevel optimization framework for GNNs inspired by the notion of Bregman distance.
arXiv Detail & Related papers (2023-09-12T23:54:24Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- Towards Deep Attention in Graph Neural Networks: Problems and Remedies [15.36416000750147]
Graph neural networks (GNNs) learn the representation of graph-structured data, and their expressiveness can be enhanced by inferring node relations for propagation.
We investigate some problematic phenomena related to deep graph attention, including vulnerability to over-smoothed features and smooth cumulative attention.
Motivated by our findings, we propose AEROGNN, a novel GNN architecture designed for deep graph attention.
arXiv Detail & Related papers (2023-06-04T15:19:44Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Addressing Over-Smoothing in Graph Neural Networks via Deep Supervision [13.180922099929765]
Deep graph neural networks (GNNs) suffer from over-smoothing when the number of layers increases.
We propose DSGNNs enhanced with deep supervision, where representations learned at all layers are used for training (a rough sketch of this layer-wise supervision idea appears after this list).
We show that DSGNNs are resilient to over-smoothing and can outperform competitive benchmarks on node and graph property prediction problems.
arXiv Detail & Related papers (2022-02-25T06:05:55Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deeper.
We propose a novel Attentive GNN to tackle these challenges by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
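For the deep-supervision entry above (DSGNNs), the generic recipe of attaching an auxiliary prediction head and loss to every layer's representation might look like the following sketch; the simple dense-adjacency propagation, the head design, and the equal loss weighting are assumptions made for illustration, not details taken from that paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedGNN(nn.Module):
    """Illustrative deeply supervised GNN: every layer gets its own
    classification head, and all per-layer losses are summed during training.
    The dense-adjacency propagation below is a placeholder, not the
    architecture from the DSGNN paper."""

    def __init__(self, in_dim: int, hidden_dim: int, num_classes: int, num_layers: int = 4):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers))
        # One auxiliary head per layer (deep supervision).
        self.heads = nn.ModuleList(nn.Linear(hidden_dim, num_classes) for _ in range(num_layers))

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor):
        # adj_norm: normalized dense adjacency matrix (num_nodes x num_nodes).
        logits_per_layer = []
        h = x
        for layer, head in zip(self.layers, self.heads):
            h = F.relu(layer(adj_norm @ h))   # propagate, then transform
            logits_per_layer.append(head(h))  # auxiliary prediction at this depth
        return logits_per_layer

def deep_supervision_loss(logits_per_layer, labels, mask):
    # Equally weighted cross-entropy over all layers' predictions on labeled nodes.
    return sum(F.cross_entropy(logits[mask], labels[mask]) for logits in logits_per_layer)
```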
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.