Node-aware Bi-smoothing: Certified Robustness against Graph Injection Attacks
- URL: http://arxiv.org/abs/2312.03979v1
- Date: Thu, 7 Dec 2023 01:24:48 GMT
- Title: Node-aware Bi-smoothing: Certified Robustness against Graph Injection Attacks
- Authors: Yuni Lai, Yulin Zhu, Bailin Pan, Kai Zhou
- Abstract summary: Deep Graph Learning (DGL) has emerged as a crucial technique across various domains.
Recent studies have exposed vulnerabilities in DGL models, such as susceptibility to evasion and poisoning attacks.
We introduce the node-aware bi-smoothing framework, which is the first certifiably robust approach for general node classification tasks against GIAs.
- Score: 5.660584039688214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Graph Learning (DGL) has emerged as a crucial technique across various
domains. However, recent studies have exposed vulnerabilities in DGL models,
such as susceptibility to evasion and poisoning attacks. While empirical and
provable robustness techniques have been developed to defend against graph
modification attacks (GMAs), the problem of certified robustness against graph
injection attacks (GIAs) remains largely unexplored. To bridge this gap, we
introduce the node-aware bi-smoothing framework, which is the first certifiably
robust approach for general node classification tasks against GIAs. Notably,
the proposed node-aware bi-smoothing scheme is model-agnostic and is applicable
for both evasion and poisoning attacks. Through rigorous theoretical analysis,
we establish the certifiable conditions of our smoothing scheme. We also
explore the practical implications of our node-aware bi-smoothing schemes in
two contexts: as an empirical defense approach against real-world GIAs and in
the context of recommendation systems. Furthermore, we extend two
state-of-the-art certified robustness frameworks to address node injection
attacks and compare our approach against them. Extensive evaluations
demonstrate the effectiveness of our proposed certificates.
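The abstract describes a randomized-smoothing-style certification scheme for node classification. As a rough, hypothetical sketch of the general idea (not the paper's exact method), a smoothed classifier can be built by randomly deleting nodes and edges and taking a majority vote over the base model's predictions; the parameter names (p_node, p_edge) and the base_classifier wrapper below are assumptions for illustration, and the actual node-aware bi-smoothing distribution and its certified conditions are derived in the paper.

```python
import numpy as np
import networkx as nx


def smoothed_predict(base_classifier, graph, target_node,
                     p_node=0.3, p_edge=0.3, num_samples=1000, seed=0):
    """Majority-vote prediction of a randomly smoothed node classifier.

    Hypothetical illustration only: each Monte Carlo sample drops nodes
    (other than the target) with probability p_node and edges with
    probability p_edge, then queries the base classifier on the perturbed
    graph. The paper's actual smoothing distribution and certificate
    conditions are given in the full text.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(num_samples):
        g = graph.copy()
        # Randomly delete nodes, always keeping the node being classified.
        drop_nodes = [v for v in g.nodes
                      if v != target_node and rng.random() < p_node]
        g.remove_nodes_from(drop_nodes)
        # Randomly delete edges among the surviving nodes.
        drop_edges = [e for e in list(g.edges) if rng.random() < p_edge]
        g.remove_edges_from(drop_edges)
        label = base_classifier(g, target_node)  # user-supplied model wrapper
        votes[label] = votes.get(label, 0) + 1
    # The smoothed classifier returns the most frequent class over samples.
    return max(votes, key=votes.get)
```

Because the prediction depends only on querying the base classifier on perturbed graphs, a scheme of this kind is model-agnostic, which matches the abstract's claim about the proposed framework.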
Related papers
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph.
Our framework provides fundamental insights into the role of graph structure and its connectivity on the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z)
- Homophily-Driven Sanitation View for Robust Graph Contrastive Learning [28.978770069310276]
We investigate adversarial robustness of unsupervised Graph Contrastive Learning (GCL) against structural attacks.
We present a robust GCL framework that integrates a homophily-driven sanitation view, which can be learned jointly with contrastive learning.
We conduct extensive experiments to evaluate the performance of our proposed model, GCHS, against two state-of-the-art structural attacks on GCL.
arXiv Detail & Related papers (2023-07-24T06:41:59Z)
- Let Graph be the Go Board: Gradient-free Node Injection Attack for Graph Neural Networks via Reinforcement Learning [37.4570186471298]
We study the problem of black-box node injection attack, without training a potentially misleading surrogate model.
By directly querying the victim model, G2A2C learns to inject highly malicious nodes with extremely limited attacking budgets.
We demonstrate the superior performance of our proposed G2A2C over the existing state-of-the-art attackers.
arXiv Detail & Related papers (2022-11-19T19:37:22Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation via injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image tasks such as classification and recognition, they fail to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- SoK: Certified Robustness for Deep Neural Networks [13.10665264010575]
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks.
In this paper, we systematize certifiably robust approaches and related practical and theoretical implications.
We also provide the first comprehensive benchmark on existing robustness verification and training approaches on different datasets.
arXiv Detail & Related papers (2020-09-09T07:00:55Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)