Group Property Inference Attacks Against Graph Neural Networks
- URL: http://arxiv.org/abs/2209.01100v1
- Date: Fri, 2 Sep 2022 14:59:37 GMT
- Title: Group Property Inference Attacks Against Graph Neural Networks
- Keywords: Property inference attack; Graph neural networks; Privacy attacks and
defense; Trustworthy machine learning
- Authors: Xiuling Wang and Wendy Hui Wang
- Abstract summary: Machine learning models are vulnerable to privacy attacks that leak information about the training data.
In this work, we focus on a particular type of privacy attack named the property inference attack (PIA).
We consider Graph Neural Networks (GNNs) as the target model, and the distribution of particular groups of nodes and links in the training graph as the target property.
- Score: 5.598383724295497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the fast adoption of machine learning (ML) techniques, sharing of ML
models is becoming popular. However, ML models are vulnerable to privacy
attacks that leak information about the training data. In this work, we focus
on a particular type of privacy attack named the property inference attack
(PIA), which infers sensitive properties of the training data through access to
the target ML model. In particular, we consider Graph Neural Networks (GNNs) as
the target model, and the distribution of particular groups of nodes and links
in the training graph as the target property. While existing work has
investigated PIAs that target graph-level properties, no prior work has studied
the inference of node and link properties at the group level.
In this work, we perform the first systematic study of group property
inference attacks (GPIA) against GNNs. First, we consider a taxonomy of threat
models under both black-box and white-box settings with various types of
adversary knowledge, and design six different attacks for these settings. We
evaluate the effectiveness of these attacks through extensive experiments on
three representative GNN models and three real-world graphs. Our results
demonstrate the effectiveness of these attacks, whose accuracy outperforms that
of the baseline approaches. Second, we analyze the underlying factors that
contribute to GPIA's success, and show that target models trained on graphs
with and without the target property exhibit dissimilarity in model parameters
and/or model outputs, which enables the adversary to infer the existence of the
property. Further, we design a set of defense mechanisms against GPIA and
demonstrate that these mechanisms reduce attack accuracy effectively with only
a small loss in GNN model accuracy.
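To make the attack pipeline more concrete, below is a minimal, hedged sketch of a shadow-model-based property inference attack. It is not the authors' exact GPIA construction: the synthetic graph generator, the logistic-regression stand-in for a GNN, and the probe-set feature extraction are all illustrative assumptions. The sketch only shows the general idea stated in the abstract, namely that models trained on data with and without the target property produce distinguishable outputs, which a meta-classifier can learn to separate.

```python
# Illustrative sketch of a shadow-model-based group property inference attack.
# NOT the paper's exact GPIA method: the shadow "GNN" is replaced by a plain
# logistic-regression node classifier so the example stays self-contained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_shadow_graph(has_property: bool, n_nodes: int = 300):
    """Synthesize node features/labels; `has_property` skews the fraction of
    nodes in a sensitive group, which plays the role of the target property."""
    group_ratio = 0.7 if has_property else 0.3   # assumed group-level property
    group = rng.random(n_nodes) < group_ratio
    x = rng.normal(size=(n_nodes, 8)) + group[:, None] * 0.8
    y = (x[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n_nodes) > 0).astype(int)
    return x, y

def train_target(x, y):
    """Stand-in for GNN training (a real attack would train a GNN here)."""
    return LogisticRegression(max_iter=500).fit(x, y)

def attack_features(model, probe_x):
    """Black-box attack features: model posteriors on a fixed probe set."""
    return model.predict_proba(probe_x)[:, 1]

# Fixed probe inputs shared across all shadow models.
probe_x, _ = make_shadow_graph(has_property=True, n_nodes=64)

# 1) Train shadow models on data with / without the property.
feats, labels = [], []
for _ in range(100):
    has_prop = bool(rng.integers(2))
    x, y = make_shadow_graph(has_prop)
    shadow = train_target(x, y)
    feats.append(attack_features(shadow, probe_x))
    labels.append(int(has_prop))

# 2) Meta-classifier learns to tell the two training distributions apart.
meta = LogisticRegression(max_iter=500).fit(np.array(feats), np.array(labels))

# 3) Query the (simulated) target model and infer the property.
target_x, target_y = make_shadow_graph(has_property=True)
target = train_target(target_x, target_y)
print("inferred P(property present) =",
      meta.predict_proba(attack_features(target, probe_x).reshape(1, -1))[0, 1])
```

A defense along the lines mentioned in the abstract could, for instance, perturb the released posteriors or parameters before the adversary sees them, weakening the meta-classifier's signal; the perturbation scale then trades attack accuracy against GNN utility.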
Related papers
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Unveiling the potential of Graph Neural Networks for robust Intrusion
Detection [2.21481607673149]
We propose a novel Graph Neural Network (GNN) model to learn flow patterns of attacks structured as graphs.
Our model is able to maintain the same level of accuracy as in previous experiments, while state-of-the-art ML techniques degrade their accuracy (F1-score) by up to 50% under adversarial attacks.
arXiv Detail & Related papers (2021-07-30T16:56:39Z) - Property Inference Attacks on Convolutional Neural Networks: Influence
and Implications of Target Model's Complexity [1.2891210250935143]
Property Inference Attacks aim to infer from a given model properties about the training dataset seemingly unrelated to the model's primary goal.
This paper investigates the influence of the target model's complexity on the accuracy of this type of attack.
Our findings reveal that the risk of a privacy breach is present independently of the target model's complexity.
arXiv Detail & Related papers (2021-04-27T09:19:36Z) - Node-Level Membership Inference Attacks Against Graph Neural Networks [29.442045622210532]
A new family of machine learning (ML) models, namely graph neural networks (GNNs), has been introduced.
Previous studies have shown that machine learning models are vulnerable to privacy attacks.
This paper performs the first comprehensive analysis of node-level membership inference attacks against GNNs.
arXiv Detail & Related papers (2021-02-10T13:51:54Z) - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z) - Membership Inference Attack on Graph Neural Networks [1.6457778420360536]
We focus on how trained GNN models could leak information about the member nodes that they were trained on.
We choose the simplest possible attack model that utilizes the posteriors of the trained model; a minimal illustrative sketch of this style of posterior-based attack appears after this list.
The surprising and worrying fact is that the attack is successful even if the target model generalizes well.
arXiv Detail & Related papers (2021-01-17T02:12:35Z) - Model Extraction Attacks on Graph Neural Networks: Taxonomy and
Realization [40.37373934201329]
We investigate and develop model extraction attacks against GNN models.
We first formalise the threat modelling in the context of GNN model extraction.
We then present detailed methods which utilise the accessible knowledge in each threat to implement the attacks.
arXiv Detail & Related papers (2020-10-24T03:09:37Z) - Reinforcement Learning-based Black-Box Evasion Attacks to Link
Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)