Label Inference Attacks against Node-level Vertical Federated GNNs
- URL: http://arxiv.org/abs/2308.02465v2
- Date: Thu, 18 Apr 2024 08:49:21 GMT
- Title: Label Inference Attacks against Node-level Vertical Federated GNNs
- Authors: Marco Arazzi, Mauro Conti, Stefanos Koffas, Marina Krcek, Antonino Nocera, Stjepan Picek, Jing Xu
- Abstract summary: We investigate label inference attacks on Vertical Federated Learning (VFL) using a zero-background knowledge strategy.
Our proposed attack, BlindSage, provides impressive results in the experiments, achieving nearly 100% accuracy in most cases.
- Score: 26.80658307067889
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning enables collaborative training of machine learning models by keeping the raw data of the involved workers private. Three of its main objectives are to improve the models' privacy, security, and scalability. Vertical Federated Learning (VFL) offers an efficient cross-silo setting where a few parties collaboratively train a model without sharing the same features. In such a scenario, classification labels are commonly considered sensitive information held exclusively by one (active) party, while other (passive) parties use only their local information. Recent works have uncovered important flaws of VFL, leading to possible label inference attacks under the assumption that the attacker has some, even limited, background knowledge on the relation between labels and data. In this work, we are the first (to the best of our knowledge) to investigate label inference attacks on VFL using a zero-background knowledge strategy. To formulate our proposal, we focus on Graph Neural Networks (GNNs) as a target model for the underlying VFL. In particular, we refer to node classification tasks, which are widely studied, and GNNs have shown promising results. Our proposed attack, BlindSage, provides impressive results in the experiments, achieving nearly 100% accuracy in most cases. Even when the attacker has no information about the used architecture or the number of classes, the accuracy remains above 90% in most instances. Finally, we observe that well-known defenses cannot mitigate our attack without affecting the model's performance on the main classification task.
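The summary above does not spell out BlindSage itself, so the following is a minimal, self-contained Python sketch (with illustrative sizes and names) of the general mechanism such zero-background-knowledge attacks exploit: in vertical/split training, the active party returns per-sample gradients with respect to the passive party's embeddings, and early in training those gradients align by the hidden label, so simply clustering them recovers the label partition.

```python
# Illustrative sketch only: NOT the BlindSage algorithm, whose details are not
# given in this summary. It shows why gradients returned in vertical FL leak labels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n, d, k = 300, 16, 4                     # samples, embedding width, classes (toy sizes)
labels = rng.integers(0, k, size=n)      # private labels, held by the active party only
emb = rng.normal(size=(n, d))            # embeddings uploaded by the passive party
W = 0.05 * rng.normal(size=(k, d))       # active party's still-untrained classification head

# Active party: softmax cross-entropy, then return dL/d(embedding) to the passive party.
logits = emb @ W.T
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
grad_emb = (p - np.eye(k)[labels]) @ W   # per-sample gradients seen by the attacker

# Passive party as attacker: early in training the predictions p are near-uniform,
# so each gradient is dominated by -W[label] and same-label gradients align.
# Clustering them partitions the samples by hidden label, with no background knowledge.
pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(grad_emb)
print("label/cluster agreement (ARI):", adjusted_rand_score(labels, pred))
```

In this toy run the adjusted Rand index is close to 1, i.e., the recovered clusters match the private labels up to a permutation of class names.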
Related papers
- Decoupling the Class Label and the Target Concept in Machine Unlearning [81.69857244976123]
Machine unlearning aims to adjust a trained model to approximate a retrained one that excludes a portion of training data.
Previous studies showed that class-wise unlearning is successful in forgetting the knowledge of a target class.
We propose a general framework, TARget-aware Forgetting (TARF).
arXiv Detail & Related papers (2024-06-12T14:53:30Z)
- KDk: A Defense Mechanism Against Label Inference Attacks in Vertical Federated Learning [2.765106384328772]
In a Vertical Federated Learning (VFL) scenario, the labels of the samples are kept private from all parties except the aggregating server, which is the label owner.
Recent works discovered that, by exploiting the gradient information the server returns to the bottom models, an adversary can infer the private labels.
We propose a novel framework, KDk, which combines Knowledge Distillation and k-anonymity to provide a defense mechanism; a rough sketch of the idea follows below.
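The entry names only the ingredients, knowledge distillation plus k-anonymity, so the snippet below is a hedged sketch of one plausible reading rather than the paper's exact construction: the label owner swaps each hard label for a soft label whose mass is spread uniformly over a teacher's top-k classes, so gradients derived from it cannot single out the true class among those k. The function name and constants are illustrative.

```python
# Hedged sketch of a KDk-style label obfuscation step; the exact recipe is not
# given above, and the function name, eps, and probabilities are illustrative.
import numpy as np

def kdk_style_label(teacher_probs: np.ndarray, k: int, eps: float = 0.05) -> np.ndarray:
    """Spread (1 - eps) of the mass uniformly over the teacher's top-k classes."""
    obf = np.full_like(teacher_probs, eps / (teacher_probs.size - k))  # residual mass
    topk = np.argsort(teacher_probs)[-k:]     # indices of the k likeliest classes
    obf[topk] = (1.0 - eps) / k               # the true class hides among k peers
    return obf

teacher_probs = np.array([0.70, 0.15, 0.08, 0.05, 0.02])  # toy teacher output
print(kdk_style_label(teacher_probs, k=3))
# The top-3 classes get ~0.317 each: gradients computed from this soft label
# cannot tell which of the three is the real one.
```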
arXiv Detail & Related papers (2024-04-18T17:51:02Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), aided by an independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z)
- Property inference attack; Graph neural networks; Privacy attacks and defense; Trustworthy machine learning [5.598383724295497]
Machine learning models are vulnerable to privacy attacks that leak information about the training data.
In this work, we focus on a particular type of privacy attack named property inference attack (PIA).
We consider Graph Neural Networks (GNNs) as the target model, and the distribution of particular groups of nodes and links in the training graph as the target property.
arXiv Detail & Related papers (2022-09-02T14:59:37Z)
- FL-Defender: Combating Targeted Attacks in Federated Learning [7.152674461313707]
Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers.
FL is vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model.
We propose FL-Defender, a method to combat targeted attacks in FL.
arXiv Detail & Related papers (2022-07-02T16:04:46Z)
- Defending Label Inference and Backdoor Attacks in Vertical Federated Learning [11.319694528089773]
In collaborative learning, curious parties might be honest but still attempt to infer other parties' private data through inference attacks.
In this paper, we show that private labels can be reconstructed from per-sample gradients.
We introduce a novel technique termed confusional autoencoder (CoAE), based on autoencoders and entropy regularization; a rough sketch of the idea follows below.
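As with KDk above, only the ingredients are stated, an autoencoder plus entropy regularization, so this PyTorch sketch reflects one plausible reading rather than the paper's construction: an encoder maps each true one-hot label to a high-entropy "confused" soft label that is used wherever gradients can leak, while a decoder learns to invert it so the label owner retains the training signal. All sizes and the 0.5 entropy weight are illustrative.

```python
# Hedged sketch, not the paper's construction: an encoder makes a high-entropy
# "confused" soft label, a decoder learns to invert it. Sizes/weights are illustrative.
import torch
import torch.nn.functional as F

k = 4  # number of classes (toy)
enc = torch.nn.Sequential(torch.nn.Linear(k, 32), torch.nn.ReLU(), torch.nn.Linear(32, k))
dec = torch.nn.Sequential(torch.nn.Linear(k, 32), torch.nn.ReLU(), torch.nn.Linear(32, k))
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-2)

for _ in range(2000):
    y = torch.randint(0, k, (64,))
    onehot = F.one_hot(y, k).float()
    confused = F.softmax(enc(onehot), dim=1)   # soft label exposed where gradients leak
    entropy = -(confused * confused.clamp_min(1e-8).log()).sum(dim=1).mean()
    # Recoverable for the label owner (cross-entropy), misleading otherwise (entropy bonus).
    loss = F.cross_entropy(dec(confused), y) - 0.5 * entropy
    opt.zero_grad()
    loss.backward()
    opt.step()

# The k confused labels end up near-uniform yet mutually distinct, so the decoder
# still recovers every class while leaked gradients carry little label signal.
eye = F.one_hot(torch.arange(k), k).float()
recovered = dec(F.softmax(enc(eye), dim=1)).argmax(dim=1)
print("decoder recovers all classes:", bool((recovered == torch.arange(k)).all()))
```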
arXiv Detail & Related papers (2021-12-10T09:32:09Z)
- The Role of Global Labels in Few-Shot Classification and How to Infer Them [55.64429518100676]
Few-shot learning is a central problem in meta-learning, where learners must quickly adapt to new tasks.
We propose Meta Label Learning (MeLa), a novel algorithm that infers global labels and obtains robust few-shot models via standard classification.
arXiv Detail & Related papers (2021-08-09T14:07:46Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)