Randomized Message-Interception Smoothing: Gray-box Certificates for
Graph Neural Networks
- URL: http://arxiv.org/abs/2301.02039v1
- Date: Thu, 5 Jan 2023 12:21:48 GMT
- Title: Randomized Message-Interception Smoothing: Gray-box Certificates for
Graph Neural Networks
- Authors: Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski,
Stephan Günnemann
- Abstract summary: We propose novel gray-box certificates for Graph Neural Networks (GNNs).
We randomly intercept messages and analyze the probability that messages from adversarially controlled nodes reach their target nodes.
Our certificates provide stronger guarantees for attacks at larger distances.
- Score: 68.4543263023324
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Randomized smoothing is one of the most promising frameworks for certifying
the adversarial robustness of machine learning models, including Graph Neural
Networks (GNNs). Yet, existing randomized smoothing certificates for GNNs are
overly pessimistic since they treat the model as a black box, ignoring the
underlying architecture. To remedy this, we propose novel gray-box certificates
that exploit the message-passing principle of GNNs: We randomly intercept
messages and carefully analyze the probability that messages from adversarially
controlled nodes reach their target nodes. Compared to existing certificates,
we certify robustness to much stronger adversaries that control entire nodes in
the graph and can arbitrarily manipulate node features. Our certificates
provide stronger guarantees for attacks at larger distances, as messages from
farther-away nodes are more likely to get intercepted. We demonstrate the
effectiveness of our method on various models and datasets. Since our gray-box
certificates consider the underlying graph structure, we can significantly
improve certifiable robustness by applying graph sparsification.
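To make the mechanism concrete, here is a minimal Monte Carlo sketch under our own simplifying assumptions (independent edge-level interception with probability `p_drop`; the paper's interception distribution and certificate computation differ in detail):

```python
# Minimal Monte Carlo sketch of message-interception smoothing; `classify` is
# any node classifier taking (adjacency, features) and returning one label per
# node. All names and parameter values here are our own illustrative choices.
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)

def intercept(adj: np.ndarray, p_drop: float) -> np.ndarray:
    """Randomly delete edges, i.e. intercept the messages they would carry."""
    return adj * (rng.random(adj.shape) >= p_drop)

def smoothed_prediction(classify, adj, feats, node, p_drop=0.3, n_samples=1000):
    """Majority vote of the base classifier under random interception."""
    votes = Counter(classify(intercept(adj, p_drop), feats)[node]
                    for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# Intuition for the distance-dependent guarantee: a message that must cross d
# edges survives all interceptions with probability (1 - p_drop) ** d, so
# adversarial nodes farther from the target are intercepted more often.
```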
Related papers
- Provable Robustness of (Graph) Neural Networks Against Data Poisoning and Backdoor Attacks [50.87615167799367]
We certify Graph Neural Networks (GNNs) against poisoning attacks, including backdoors, targeting the node features of a given graph.
Our framework provides fundamental insights into the role of graph structure and connectivity in the worst-case behavior of convolution-based and PageRank-based GNNs.
arXiv Detail & Related papers (2024-07-15T16:12:51Z)
- ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks [53.41164429486268]
Graph Neural Networks (GNNs) have exhibited the powerful ability to gather graph-structured information from neighborhood nodes.
The performance of GNNs is limited by poor generalization and fragile robustness caused by noisy and redundant graph data.
We propose a novel adversarial edge-dropping method (ADEdgeDrop) that leverages an adversarial edge predictor to guide the removal of edges (a rough sketch of the dropping step follows this entry).
arXiv Detail & Related papers (2024-03-14T08:31:39Z)
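A rough sketch of the dropping step as we read it, covering only the ranking-and-removal part; the adversarial training of the edge predictor is omitted, and all names are our own:

```python
# Drop the edges an (externally trained) edge predictor flags as least
# trustworthy; illustrative sketch, not the paper's ADEdgeDrop algorithm.
def predictor_guided_drop(edges, scores, drop_frac=0.1):
    """edges: list of (u, v); scores: higher = more likely noisy/redundant."""
    ranked = sorted(zip(scores, edges), reverse=True)
    n_drop = int(len(edges) * drop_frac)
    return [e for _, e in ranked[n_drop:]]

# Example: with drop_frac=0.34, the highest-scoring of three edges is removed.
kept = predictor_guided_drop([(0, 1), (1, 2), (2, 3)], [0.9, 0.1, 0.4], drop_frac=0.34)
```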
- Learning Scalable Structural Representations for Link Prediction with Bloom Signatures [39.63963077346406]
Graph neural networks (GNNs) are known to perform sub-optimally on link prediction tasks.
We propose to learn structural link representations by augmenting the message-passing framework of GNNs with Bloom signatures (a toy signature construction follows this entry).
Our proposed model achieves comparable or better performance than existing edge-wise GNN models.
arXiv Detail & Related papers (2023-12-28T02:21:40Z)
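As a rough illustration of the idea (our own toy construction, not the paper's exact design), a node's neighborhood can be hashed into a fixed-size bit array whose bitwise operations approximate set overlap:

```python
# Toy Bloom signature of a node's neighborhood; all names are illustrative.
import hashlib

def bloom_signature(neighbors, m: int = 256, k: int = 3) -> int:
    """Hash each neighbor into k of m bits; the OR of all bits is the signature."""
    sig = 0
    for v in neighbors:
        for i in range(k):
            h = int(hashlib.blake2b(f"{v}:{i}".encode(), digest_size=8).hexdigest(), 16)
            sig |= 1 << (h % m)
    return sig

# Overlapping bits give a cheap estimate of shared neighbors, a structural
# feature that plain message passing struggles to expose for link prediction.
a = bloom_signature([1, 2, 3])
b = bloom_signature([2, 3, 4])
shared_bits = bin(a & b).count("1")
```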
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce an approach to protecting GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms (a generic fingerprinting sketch follows this entry).
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
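To convey the general flavor of query-based verification, a generic sketch under our own assumptions (the paper's fingerprint generation is considerably more sophisticated than random probing):

```python
# Generic query-based integrity check: record outputs on fingerprint nodes,
# then re-query later and compare; illustrative, not the paper's algorithm.
import numpy as np

def make_fingerprint(model, adj, feats, n_probe=16, seed=0):
    """Pick random probe nodes and remember the model's predictions on them."""
    rng = np.random.default_rng(seed)
    probes = rng.choice(adj.shape[0], size=n_probe, replace=False)
    return probes, model(adj, feats)[probes]

def verify(model, adj, feats, probes, expected):
    """Integrity holds if the deployed model still reproduces the recorded outputs."""
    return np.array_equal(model(adj, feats)[probes], expected)
```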
- Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE).
We first introduce an edge-splitting technique that specifies the use of each edge: every edge serves solely as either topology or supervision (a minimal splitting sketch follows this entry).
To emphasize the difference between node pairs connected by supervision edges and unconnected pairs, we further weight the messages so that those reflecting this difference are highlighted.
arXiv Detail & Related papers (2022-12-25T23:19:56Z)
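A minimal sketch of such an edge split; the fraction and the uniform random partition are our own illustrative assumptions:

```python
# Randomly partition edges into topology edges (used for message passing) and
# supervision edges (used only as prediction targets); illustrative sketch.
import random

def split_edges(edges, sup_frac=0.3, seed=0):
    """Return (topology_edges, supervision_edges) from a single edge list."""
    edges = list(edges)
    random.Random(seed).shuffle(edges)
    cut = int(len(edges) * sup_frac)
    return edges[cut:], edges[:cut]

topology, supervision = split_edges([(0, 1), (1, 2), (2, 3), (0, 3)])
```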
- Sparse Vicious Attacks on Graph Neural Networks [3.246307337376473]
This work focuses on a specific white-box attack on GNN-based link prediction models.
We propose SAVAGE, a novel framework and method for mounting this type of link prediction attack.
Experiments conducted on real-world and synthetic datasets demonstrate that adversarial attacks implemented through SAVAGE indeed achieve a high attack success rate.
arXiv Detail & Related papers (2022-09-20T12:51:24Z)
- SoftEdge: Regularizing Graph Classification with Random Soft Edges [18.165965620873745]
Graph data augmentation plays a vital role in regularizing Graph Neural Networks (GNNs).
Simple edge and node manipulations can create graphs whose structures are identical, or indistinguishable to message-passing GNNs, yet whose labels conflict.
We propose SoftEdge, which assigns random weights to a portion of the edges of a given graph to construct dynamic neighborhoods over the graph (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-04-21T20:12:36Z)
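A minimal sketch of the augmentation; the edge fraction and the weight range are our own illustrative choices:

```python
# SoftEdge-style augmentation: give a random subset of existing edges a soft
# weight in (0, 1) instead of weight 1; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def soft_edge(adj: np.ndarray, frac: float = 0.2) -> np.ndarray:
    """Assign ~frac of the existing edges a random weight drawn from (0, 1)."""
    weights = adj.astype(float)  # copy with float weights
    rows, cols = np.nonzero(adj)
    pick = rng.random(len(rows)) < frac
    weights[rows[pick], cols[pick]] = rng.random(int(pick.sum()))
    return weights  # for undirected graphs one would also symmetrize

augmented = soft_edge(np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]]))
```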
- Black-box Node Injection Attack for Graph Neural Networks [29.88729779937473]
We study the possibility of injecting nodes to evade the victim GNN model.
Specifically, we propose GA2C, a graph reinforcement learning framework.
We demonstrate the superior performance of our proposed GA2C over existing state-of-the-art methods.
arXiv Detail & Related papers (2022-02-18T19:17:43Z)
- Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More [85.52940587312256]
We propose a model-agnostic certificate based on the randomized smoothing framework which subsumes earlier work and is tight, efficient, and sparsity-aware (a sketch of such flip noise follows this entry).
We show the effectiveness of our approach on a wide variety of models, datasets, and tasks -- specifically highlighting its use for Graph Neural Networks.
arXiv Detail & Related papers (2020-08-29T10:09:02Z)
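As we understand it, the key ingredient is a flip distribution with separate probabilities for adding and deleting bits, so sparse inputs stay sparse under smoothing; a minimal sketch (the probability values are ours):

```python
# Sparsity-aware flip noise for binary data: add 1-bits with a small
# probability and delete them with a larger one; probability values are ours.
import numpy as np

rng = np.random.default_rng(0)

def sparse_flip(x: np.ndarray, p_add: float = 0.01, p_del: float = 0.6) -> np.ndarray:
    """Flip zeros to ones with prob. p_add and ones to zeros with prob. p_del."""
    r = rng.random(x.shape)
    flip = np.where(x == 1, r < p_del, r < p_add)
    return np.where(flip, 1 - x, x)

noisy = sparse_flip(np.array([0, 0, 1, 0, 1, 0, 0, 0]))
```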
- Learning Node Representations against Perturbations [21.66982904572156]
Recent graph neural networks (GNNs) have achieved remarkable performance in node representation learning.
We study how to learn node representations against perturbations in GNN.
We propose Stability-Identifiability GNN Against Perturbations (SIGNNAP) that learns reliable node representations in an unsupervised manner.
arXiv Detail & Related papers (2020-08-26T07:11:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.