Revisiting Robustness in Graph Machine Learning
- URL: http://arxiv.org/abs/2305.00851v2
- Date: Tue, 2 May 2023 08:12:34 GMT
- Title: Revisiting Robustness in Graph Machine Learning
- Authors: Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann
- Abstract summary: Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure.
We introduce a more principled notion of an adversarial graph, which is aware of semantic content change.
We find that including the label-structure of the training graph into the inference process of GNNs significantly reduces over-robustness.
- Score: 1.5293427903448025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many works show that node-level predictions of Graph Neural Networks (GNNs)
are not robust to small, often termed adversarial, changes to the graph
structure. However, because manual inspection of a graph is difficult, it is
unclear if the studied perturbations always preserve a core assumption of
adversarial examples: that of unchanged semantic content. To address this
problem, we introduce a more principled notion of an adversarial graph, which
is aware of semantic content change. Using Contextual Stochastic Block Models
(CSBMs) and real-world graphs, our results uncover: i) for a majority of
nodes, the prevalent perturbation models include a large fraction of perturbed
graphs violating the unchanged semantics assumption; ii) surprisingly, all
assessed GNNs show over-robustness, that is, robustness beyond the point of
semantic change. We find this to be a phenomenon complementary to adversarial
examples and show that including the label-structure of the training graph into
the inference process of GNNs significantly reduces over-robustness, while
having a positive effect on test accuracy and adversarial robustness.
Theoretically, leveraging our new semantics-aware notion of robustness, we
prove that there is no robustness-accuracy tradeoff for inductively classifying
a newly added node.
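Since the experiments above hinge on CSBMs, a minimal sketch of sampling from a two-class CSBM may help: same-class pairs connect with probability p, different-class pairs with probability q, and features are class-conditional Gaussians. All names and defaults below are illustrative, not taken from the paper.

```python
import numpy as np

def sample_csbm(n=200, p=0.1, q=0.02, mu=1.0, sigma=1.0, d=16, seed=0):
    """Sample a two-class Contextual Stochastic Block Model (CSBM).

    Same-class node pairs connect with probability p, different-class
    pairs with probability q; node features are Gaussian with a
    class-dependent mean. Parameter defaults are illustrative only.
    """
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)                  # binary class labels
    same = y[:, None] == y[None, :]                 # same-class indicator
    A = (rng.random((n, n)) < np.where(same, p, q)).astype(int)
    A = np.triu(A, 1)                               # keep upper triangle
    A = A + A.T                                     # undirected, no self-loops
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                          # shared mean direction
    X = (2 * y[:, None] - 1) * mu * u + sigma * rng.standard_normal((n, d))
    return A, X, y

A, X, y = sample_csbm()
```

Because the generative labels are known in such a model, one can check directly whether a structure perturbation changes the likely class of a node, which is what makes a semantics-aware notion of robustness testable in this setting.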
Related papers
- Oversmoothing as Loss of Sign: Towards Structural Balance in Graph Neural Networks [54.62268052283014]
Oversmoothing is a common issue in graph neural networks (GNNs).
Three major classes of anti-oversmoothing techniques can be mathematically interpreted as message passing over signed graphs.
Negative edges can repel nodes to a certain extent, providing deeper insights into how these methods mitigate oversmoothing (a toy sketch of signed message passing follows below).
arXiv Detail & Related papers (2025-02-17T03:25:36Z)
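A toy step of message passing over a signed graph makes the repulsion mechanism visible; the residual update rule here is illustrative only, not the paper's formulation.

```python
import numpy as np

def signed_mp_step(A_signed, H, alpha=0.5):
    """One residual message-passing step with a signed adjacency matrix.

    Positive entries pull a node towards its neighbours; negative
    entries are subtracted, so the endpoints of a negative edge drift
    apart instead of being averaged together.
    """
    deg = np.abs(A_signed).sum(axis=1, keepdims=True).clip(min=1)
    msg = (A_signed @ H) / deg       # signed, degree-normalised aggregation
    return H + alpha * msg           # residual update keeps repulsion visible

# toy example: two nodes joined by a single negative edge
A = np.array([[0.0, -1.0], [-1.0, 0.0]])
H = np.array([[1.0], [0.5]])
for _ in range(3):
    H = signed_mp_step(A, H)
    print(H.ravel())                 # the gap between the two rows grows
```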
- Robustness of Graph Classification: failure modes, causes, and noise-resistant loss in Graph Neural Networks [18.556227061863904]
Graph Neural Networks (GNNs) are powerful at solving graph classification tasks, yet applied problems often contain noisy labels.
We study GNN robustness to label noise and demonstrate GNN failure modes in which models struggle to generalise on low-order graphs.
arXiv Detail & Related papers (2024-12-11T14:35:37Z)
- Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization under various types of distribution shifts and yields up to 27.4% accuracy improvement over the state of the art on graph OOD generalization benchmarks (a generic mixture-of-experts sketch follows below).
arXiv Detail & Related papers (2024-02-18T07:49:22Z)
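As a rough illustration of the mixture-of-experts predictor named above (the paper's causal objective and environment estimator are not reproduced here), a generic gated combination of expert heads over node embeddings could look like this:

```python
import numpy as np

def moe_predict(H, expert_ws, gate_w):
    """Generic mixture-of-experts readout over node embeddings H.

    Each expert is a linear head; a softmax gate mixes their outputs
    per node. This illustrates the architecture class only, not the
    paper's training objective.
    """
    gate_logits = H @ gate_w                     # (n, k): one logit per expert
    gates = np.exp(gate_logits - gate_logits.max(axis=1, keepdims=True))
    gates /= gates.sum(axis=1, keepdims=True)    # softmax over experts
    outs = np.stack([H @ w for w in expert_ws])  # (k, n, c) expert predictions
    return np.einsum('nk,knc->nc', gates, outs)  # gate-weighted combination

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))                  # 5 nodes, 8-dim embeddings
experts = [rng.standard_normal((8, 3)) for _ in range(4)]
gate = rng.standard_normal((8, 4))
print(moe_predict(H, experts, gate).shape)       # (5, 3)
```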
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods (a linear-case toy of the decomposition idea follows below).
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
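The decomposition idea is easiest to see in a linear special case: a two-layer graph convolution without nonlinearities splits each prediction exactly into additive contributions from source nodes. DEGREE itself handles nonlinear layers; this toy covers only the linear case.

```python
import numpy as np

def linear_gcn_contributions(A_hat, X, W1, W2):
    """Per-source-node contributions for a two-layer *linear* GCN.

    With no nonlinearity, Y = A_hat @ A_hat @ X @ W1 @ W2, so node j's
    contribution to node i's prediction is (A_hat^2)[i, j] * x_j W1 W2,
    and the contributions sum exactly to the prediction.
    """
    A2 = A_hat @ A_hat
    per_node = X @ W1 @ W2                            # (n, c) raw node signals
    return A2[:, :, None] * per_node[None, :, :]      # (target, source, class)

rng = np.random.default_rng(0)
n, d, h, c = 4, 5, 6, 3
A_hat = rng.random((n, n)); A_hat /= A_hat.sum(1, keepdims=True)
X = rng.standard_normal((n, d))
W1, W2 = rng.standard_normal((d, h)), rng.standard_normal((h, c))
contribs = linear_gcn_contributions(A_hat, X, W1, W2)
Y = A_hat @ A_hat @ X @ W1 @ W2
assert np.allclose(contribs.sum(axis=1), Y)           # decomposition is exact
```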
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a wide range of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z)
- Training Stable Graph Neural Networks Through Constrained Learning [116.03137405192356]
Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data.
GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters.
We propose a novel constrained learning approach by imposing a constraint on the stability condition of the GNN within a perturbation of choice (a penalised-objective sketch follows below).
arXiv Detail & Related papers (2021-10-07T15:54:42Z)
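In penalised form, such a constrained objective can be sketched as the task loss plus a multiplier on how far predictions drift under a sampled graph perturbation; the loss, model, and perturbation below are placeholders, not the paper's formulation.

```python
import numpy as np

def stability_penalised_loss(predict, A, X, y, lam=1.0, eps=0.05, seed=0):
    """Task loss plus a penalty on output drift under a perturbed graph.

    `predict(A, X)` is any GNN forward pass returning class scores; the
    penalty stands in for the stability constraint by measuring how far
    predictions move when the adjacency is randomly perturbed by eps.
    """
    rng = np.random.default_rng(seed)
    scores = predict(A, X)
    task_loss = np.mean((scores[np.arange(len(y)), y] - 1.0) ** 2)  # toy loss
    A_pert = A + eps * rng.standard_normal(A.shape)   # sampled perturbation
    drift = np.linalg.norm(predict(A_pert, X) - scores)
    return task_loss + lam * drift    # Lagrangian-style penalised objective

# toy linear "GNN" so the sketch runs end to end
predict = lambda A, X: A @ X
A, X, y = np.eye(3), np.eye(3), np.array([0, 1, 2])
print(stability_penalised_loss(predict, A, X, y))
```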
- Implicit Graph Neural Networks [46.0589136729616]
We propose a graph learning framework called Implicit Graph Neural Networks (IGNN).
IGNNs consistently capture long-range dependencies and outperform state-of-the-art GNN models (a fixed-point iteration sketch follows below).
arXiv Detail & Related papers (2020-09-14T06:04:55Z)
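IGNN defines node states implicitly as the equilibrium of an equation in the spirit of Z = relu(W Z A + B X) rather than through a fixed layer stack; a minimal fixed-point iteration, with constants chosen here only so the map is contractive, could look like this:

```python
import numpy as np

def ignn_fixed_point(A, X, W, B, tol=1e-6, max_iter=500):
    """Solve Z = relu(W @ Z @ A + B @ X) by fixed-point iteration.

    Node states are the equilibrium of this equation instead of the
    output of a fixed number of layers, which is how implicit GNNs
    capture long-range dependencies. Convergence needs a contraction,
    e.g. a small enough spectral norm of W (assumed here).
    """
    Z = np.zeros((W.shape[0], A.shape[0]))
    for _ in range(max_iter):
        Z_new = np.maximum(0.0, W @ Z @ A + B @ X)
        if np.linalg.norm(Z_new - Z) < tol:
            break
        Z = Z_new
    return Z

rng = np.random.default_rng(0)
n, d, h = 6, 4, 8
A = rng.random((n, n)); A /= A.sum(0, keepdims=True)   # column-normalised
X = rng.standard_normal((d, n))                        # features as columns
W = 0.1 * rng.standard_normal((h, h))                  # small norm: contraction
B = rng.standard_normal((h, d))
print(ignn_fixed_point(A, X, W, B).shape)              # (8, 6) equilibrium states
```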
- Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction [69.1473775184952]
We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
arXiv Detail & Related papers (2020-06-11T17:42:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.