Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach
- URL: http://arxiv.org/abs/2310.06396v1
- Date: Tue, 10 Oct 2023 07:59:23 GMT
- Title: Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach
- Authors: Kai Zhao, Qiyu Kang, Yang Song, Rui She, Sijie Wang, Wee Peng Tay
- Abstract summary: Graph neural networks (GNNs) are vulnerable to adversarial perturbations.
This paper investigates GNNs derived from diverse neural flows.
We argue that Lyapunov stability, despite its common use, does not necessarily ensure adversarial robustness.
- Score: 27.99849885813841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are vulnerable to adversarial perturbations,
including those that affect both node features and graph topology. This paper
investigates GNNs derived from diverse neural flows, concentrating on their
connection to various stability notions such as BIBO stability, Lyapunov
stability, structural stability, and conservative stability. We argue that
Lyapunov stability, despite its common use, does not necessarily ensure
adversarial robustness. Inspired by physics principles, we advocate for the use
of conservative Hamiltonian neural flows to construct GNNs that are robust to
adversarial attacks. The adversarial robustness of different neural flow GNNs
is empirically compared on several benchmark datasets under a variety of
adversarial attacks. Extensive numerical experiments demonstrate that GNNs
leveraging conservative Hamiltonian flows with Lyapunov stability substantially
improve robustness against adversarial perturbations. The implementation code
of experiments is available at
https://github.com/zknus/NeurIPS-2023-HANG-Robustness.
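The abstract's central idea, evolving node features along a conservative Hamiltonian flow, can be illustrated with a minimal sketch. The layer below is not the authors' HANG implementation (see the linked repository for that); the class and parameter names (HamiltonianGNNLayer, n_steps, step_size) are illustrative and a dense adjacency matrix is assumed for simplicity. Node features are split into position and momentum halves, a learned graph-coupled energy H(q, p) is defined, and the canonical equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q are integrated, which conserve H along the exact flow.

```python
import torch
import torch.nn as nn


class HamiltonianGNNLayer(nn.Module):
    """Evolves node features under a learned, graph-coupled Hamiltonian flow."""

    def __init__(self, dim: int, n_steps: int = 4, step_size: float = 0.1):
        super().__init__()
        self.n_steps = n_steps
        self.step_size = step_size
        # Scalar energy network applied to graph-aggregated (q, p) features per node.
        self.energy = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.Tanh(), nn.Linear(dim, 1)
        )

    def hamiltonian(self, q, p, adj_norm):
        # Couple nodes through the normalized adjacency, then sum per-node energies
        # into a single scalar H(q, p) for the whole graph.
        z = torch.cat([adj_norm @ q, adj_norm @ p], dim=-1)
        return self.energy(z).sum()

    def forward(self, x, adj):
        # x: (num_nodes, 2 * dim); the two halves act as position q and momentum p.
        # adj: (num_nodes, num_nodes) dense adjacency matrix.
        q, p = x.chunk(2, dim=-1)
        a = adj + torch.eye(adj.size(0), device=adj.device)   # add self-loops
        d = a.sum(-1).clamp(min=1e-6).pow(-0.5)
        adj_norm = d[:, None] * a * d[None, :]                 # symmetric normalization
        for _ in range(self.n_steps):
            with torch.enable_grad():
                q_in = q if q.requires_grad else q.detach().requires_grad_(True)
                p_in = p if p.requires_grad else p.detach().requires_grad_(True)
                h = self.hamiltonian(q_in, p_in, adj_norm)
                # Canonical equations: dq/dt = dH/dp, dp/dt = -dH/dq.
                dq, dp = torch.autograd.grad(h, (p_in, q_in), create_graph=self.training)
            # One explicit Euler step of the conservative dynamics.
            q = q_in + self.step_size * dq
            p = p_in - self.step_size * dp
        return torch.cat([q, p], dim=-1)


# Example usage on a random 5-node graph with 2 * 8 = 16 features per node.
layer = HamiltonianGNNLayer(dim=8)
adj = (torch.rand(5, 5) > 0.5).float()
out = layer(torch.randn(5, 16), adj)
```

Because the exact flow conserves H, bounded energy limits how far perturbed node states can drift, which is the intuition the abstract connects to adversarial robustness; a symplectic integrator (e.g. leapfrog) would track this conservation more faithfully than the explicit Euler steps used in this sketch.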
Related papers
- Robust Stable Spiking Neural Networks [45.84535743722043]
Spiking neural networks (SNNs) are gaining popularity in deep learning due to their low energy budget on neuromorphic hardware.
Many studies have been conducted to defend SNNs from the threat of adversarial attacks.
This paper aims to uncover the robustness of SNNs through the lens of the stability of nonlinear systems.
arXiv Detail & Related papers (2024-05-31T08:40:02Z)
- Robust Graph Neural Networks via Unbiased Aggregation [20.40814320483077]
The adversarial robustness of Graph Neural Networks (GNNs) has been questioned due to a false sense of security uncovered by strong adaptive attacks.
We provide a unified robust estimation point of view to understand their robustness and limitations.
arXiv Detail & Related papers (2023-11-25T05:34:36Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks [32.88499015927756]
We propose a stable neural ODE with Lyapunov-stable equilibrium points for defending against adversarial attacks (SODEF); a generic sketch of a Lyapunov-style stability regularizer appears after this list.
We provide theoretical results that give insights into the stability of SODEF as well as the choice of regularizers to ensure its stability.
arXiv Detail & Related papers (2021-10-25T14:09:45Z)
- Stability of Neural Networks on Manifolds to Relative Perturbations [118.84154142918214]
Graph Neural Networks (GNNs) show impressive performance in many practical scenarios.
Although GNNs scale well to large graphs, existing stability bounds grow with the number of nodes, which is at odds with this scalability.
arXiv Detail & Related papers (2021-10-10T04:37:19Z)
- Training Stable Graph Neural Networks Through Constrained Learning [116.03137405192356]
Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data.
GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters.
We propose a novel constrained learning approach by imposing a constraint on the stability condition of the GNN within a perturbation of choice.
arXiv Detail & Related papers (2021-10-07T15:54:42Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilated architecture is expected to preserve the standard accuracy of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters [11.665517294899724]
This paper explores the security enhancement of Spiking Neural Networks (SNNs) through internal structural parameters.
To the best of our knowledge, this is the first work that investigates the impact of structural parameters on the robustness of SNNs to adversarial attacks.
arXiv Detail & Related papers (2020-12-09T21:09:03Z)
- Batch Normalization Increases Adversarial Vulnerability and Decreases Adversarial Transferability: A Non-Robust Feature Perspective [91.5105021619887]
Batch normalization (BN) has been widely used in modern deep neural networks (DNNs).
BN is observed to increase model accuracy at the cost of adversarial robustness.
It remains unclear whether BN mainly favors learning robust features (RFs) or non-robust features (NRFs).
arXiv Detail & Related papers (2020-10-07T10:24:33Z)
- Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Networks (SNNs) are a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)
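Several of the related works above (e.g., SODEF) enforce Lyapunov stability of the underlying dynamics through regularization, which is exactly the property the main paper argues does not by itself ensure adversarial robustness. The snippet below is a generic, illustrative sketch of such a regularizer, not the formulation used in any of the listed papers; the names vector_field and stability_penalty are invented for this example. It penalizes positive eigenvalues of the symmetrized Jacobian of a neural vector field: if the symmetric part is negative definite, every eigenvalue of the Jacobian has negative real part, so the linearized dynamics at that state are Lyapunov stable.

```python
import torch
import torch.nn as nn

# Toy neural vector field f: R^8 -> R^8 defining the dynamics dx/dt = f(x).
vector_field = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 8))


def stability_penalty(f: nn.Module, states: torch.Tensor) -> torch.Tensor:
    """Penalize positive eigenvalues of 0.5 * (J + J^T) at each state in the batch."""
    penalty = states.new_zeros(())
    for x in states:  # states: (batch, dim)
        # Jacobian of the vector field at x, kept differentiable so the penalty
        # can be backpropagated to the network parameters.
        jac = torch.autograd.functional.jacobian(f, x, create_graph=True)  # (dim, dim)
        sym = 0.5 * (jac + jac.T)
        penalty = penalty + torch.relu(torch.linalg.eigvalsh(sym)).sum()
    return penalty / states.shape[0]


# Typical use: add the penalty to the task loss during training, e.g.
#   loss = task_loss + 1e-2 * stability_penalty(vector_field, hidden_states)
print(stability_penalty(vector_field, torch.randn(4, 8)))
```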
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.