Robust Graph Neural Networks via Unbiased Aggregation
- URL: http://arxiv.org/abs/2311.14934v2
- Date: Sat, 09 Nov 2024 08:07:42 GMT
- Title: Robust Graph Neural Networks via Unbiased Aggregation
- Authors: Zhichao Hou, Ruiqi Feng, Tyler Derr, Xiaorui Liu
- Abstract summary: The adversarial robustness of Graph Neural Networks (GNNs) has been questioned due to the false sense of security uncovered by strong adaptive attacks.
We provide a unified robust estimation point of view to understand their robustness and limitations.
- Score: 18.681451049083407
- Abstract: The adversarial robustness of Graph Neural Networks (GNNs) has been questioned due to the false sense of security uncovered by strong adaptive attacks despite the existence of numerous defenses. In this work, we delve into the robustness analysis of representative robust GNNs and provide a unified robust estimation point of view to understand their robustness and limitations. Our novel analysis of estimation bias motivates the design of a robust and unbiased graph signal estimator. We then develop an efficient Quasi-Newton Iterative Reweighted Least Squares algorithm to solve the estimation problem, which is unfolded as robust unbiased aggregation layers in GNNs with theoretical guarantees. Our comprehensive experiments confirm the strong robustness of our proposed model under various scenarios, and the ablation study provides a deep understanding of its advantages. Our code is available at https://github.com/chris-hzc/RUNG.
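The central mechanism, unfolding an iteratively reweighted least squares (IRLS) solver into GNN aggregation layers, can be sketched in a few lines. The snippet below is a minimal illustration assuming an MCP-like clipped penalty on edge differences; the penalty, its parameters, and the update rule are expository assumptions, not the authors' RUNG implementation (see the linked repository for that).

```python
import torch

def irls_edge_weight(dist, gamma=1.0, eps=1e-8):
    # IRLS weight w(d) = rho'(d) / d for an assumed MCP-like clipped penalty:
    # rho'(d) = max(0, 1 - d / gamma). Edges with large feature differences
    # (e.g., adversarially inserted edges) get weight 0, removing the
    # estimation bias that an l1-style penalty would retain.
    return torch.clamp(1.0 - dist / gamma, min=0.0) / (dist + eps)

def robust_unbiased_aggregation(X, edge_index, num_layers=10, lam=1.0):
    """Minimal IRLS-unfolding sketch (illustrative, not the official RUNG code).

    X:          (n, d) input node features, treated as the noisy graph signal.
    edge_index: (2, m) edge list containing both directions of every edge.
    lam:        weight on the fidelity term ||F - X||^2.
    """
    src, dst = edge_index
    F = X.clone()
    for _ in range(num_layers):
        # Step 1: reweight edges from the current pairwise residuals.
        dist = (F[src] - F[dst]).norm(dim=1)
        w = irls_edge_weight(dist)
        # Step 2: one diagonally preconditioned (quasi-Newton-style) step of
        # weighted smoothing anchored to the input:
        #   F_i <- (lam * X_i + sum_j w_ij * F_j) / (lam + sum_j w_ij)
        deg = torch.zeros(F.size(0), dtype=F.dtype, device=F.device).index_add_(0, src, w)
        agg = torch.zeros_like(F).index_add_(0, src, w.unsqueeze(1) * F[dst])
        F = (lam * X + agg) / (lam + deg).unsqueeze(1)
    return F
```

In the paper these unfolded iterations act as aggregation layers inside a trainable GNN; the loop is kept purely functional here for clarity.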
Related papers
- Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations [80.86128012438834]
We show for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete.
We propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees.
arXiv Detail & Related papers (2024-07-10T09:13:11Z)
- Explainable AI Security: Exploring Robustness of Graph Neural Networks to Adversarial Attacks [14.89001880258583]
Graph neural networks (GNNs) have achieved tremendous success, but recent studies have shown that GNNs are vulnerable to adversarial attacks.
We investigate the adversarial robustness of GNNs by considering graph data patterns, model-specific factors, and the transferability of adversarial examples.
This work illuminates the vulnerabilities of GNNs and opens many promising avenues for designing robust GNNs.
arXiv Detail & Related papers (2024-06-20T01:24:18Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- A Simple and Yet Fairly Effective Defense for Graph Neural Networks [18.140756786259615]
Graph Neural Networks (GNNs) have emerged as the dominant approach for machine learning on graph-structured data.
Existing defense methods against small adversarial perturbations suffer from high time complexity.
This paper introduces NoisyGNNs, a novel defense method that incorporates noise into the underlying model's architecture (see the sketch below).
arXiv Detail & Related papers (2024-02-21T18:16:48Z)
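As a rough illustration of the noise-injection idea (the layer type, noise distribution, and placement below are assumptions for exposition, not the paper's exact design), one can perturb hidden representations during message passing:

```python
import torch
import torch.nn as nn

class NoisyGCNLayer(nn.Module):
    """Hypothetical sketch of a GCN-style layer with Gaussian noise
    injected into the hidden representations during training."""

    def __init__(self, in_dim, out_dim, sigma=0.1):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.sigma = sigma  # noise scale (assumed hyperparameter)

    def forward(self, X, A_hat):
        # A_hat: (n, n) normalized adjacency matrix (dense for simplicity).
        H = A_hat @ self.lin(X)
        if self.training:
            # Injected noise perturbs the loss surface seen by gradient-based
            # attacks at essentially no extra computational cost.
            H = H + self.sigma * torch.randn_like(H)
        return torch.relu(H)
```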
- Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach [27.99849885813841]
Graph neural networks (GNNs) are vulnerable to adversarial perturbations.
This paper investigates GNNs derived from diverse neural flows.
We argue that Lyapunov stability, despite its common use, does not necessarily ensure adversarial robustness.
arXiv Detail & Related papers (2023-10-10T07:59:23Z)
- Can Directed Graph Neural Networks be Adversarially Robust? [26.376780541893154]
This study aims to harness the profound trust implications offered by directed graphs to bolster the robustness and resilience of Graph Neural Networks (GNNs).
We introduce a new and realistic directed graph attack setting and propose an innovative, universal, and efficient message-passing framework as a plug-in layer.
This framework achieves outstanding clean accuracy and state-of-the-art robust performance, offering superior defense against both transfer and adaptive attacks.
arXiv Detail & Related papers (2023-06-03T04:56:04Z)
- Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We design a reward function that intentionally introduces some bias in order to reduce variance and avoid unstable, possibly unbounded payouts (see the sketch below).
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
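A toy version of the bandit view of neighbor sampling is sketched below; the epsilon-greedy rule and the reward interface are simplified assumptions, whereas the paper designs a specific variance-reducing reward with near-optimal regret guarantees.

```python
import numpy as np

def bandit_neighbor_ranking(reward_fn, num_neighbors, rounds=1000, eps=0.1):
    """Epsilon-greedy stand-in for bandit-based neighbor selection.

    reward_fn(j) returns the observed reward for sampling neighbor j,
    e.g., how much it reduces aggregation variance (assumed interface).
    """
    counts = np.zeros(num_neighbors)
    values = np.zeros(num_neighbors)  # running mean reward per neighbor (arm)
    for _ in range(rounds):
        if np.random.rand() < eps:
            j = np.random.randint(num_neighbors)  # explore a random arm
        else:
            j = int(np.argmax(values))            # exploit the best arm so far
        r = reward_fn(j)
        counts[j] += 1
        values[j] += (r - values[j]) / counts[j]  # incremental mean update
    return np.argsort(-values)  # neighbors ranked by estimated reward
```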
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights (sketched below).
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
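In symbols, the joint-perturbation objective behind non-singular robustness can be sketched as follows; the norms, budgets, and notation here are assumptions for this sketch rather than the paper's exact formulation.

```latex
% Worst-case loss under joint perturbations of the input x and the weights theta
\min_{\theta}\; \mathbb{E}_{(x,y)}
\Big[ \max_{\|\delta_x\| \le \epsilon_x,\; \|\delta_\theta\| \le \epsilon_\theta}
      \mathcal{L}\big(f_{\theta+\delta_\theta}(x+\delta_x),\, y\big) \Big]
```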
- Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks [15.217367754000913]
It is increasingly important to obtain models that are robust to adversarial examples.
We give preliminary definitions of adversarial attacks and robustness.
We study frequently used benchmarks and theoretically proven bounds for adversarial robustness.
arXiv Detail & Related papers (2020-11-03T07:42:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.