Active Membership Inference Attack under Local Differential Privacy in Federated Learning
- URL: http://arxiv.org/abs/2302.12685v2
- Date: Mon, 24 Jul 2023 21:11:55 GMT
- Title: Active Membership Inference Attack under Local Differential Privacy in Federated Learning
- Authors: Truc Nguyen, Phung Lai, Khang Tran, NhatHai Phan, My T. Thai
- Abstract summary: Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection.
We propose a new active membership inference (AMI) attack carried out by a dishonest server in FL.
- Score: 18.017082794703555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) was originally regarded as a framework for
collaborative learning among clients with data privacy protection through a
coordinating server. In this paper, we propose a new active membership
inference (AMI) attack carried out by a dishonest server in FL. In AMI attacks,
the server crafts and embeds malicious parameters into global models to
effectively infer whether a target data sample is included in a client's
private training data. By exploiting the correlation among data features
through a non-linear decision boundary, AMI attacks with a certified guarantee
of success achieve alarmingly high success rates even under rigorous local
differential privacy (LDP) protection, thereby exposing clients' training data
to significant privacy risk. Theoretical and experimental results on several
benchmark datasets show that adding sufficient privacy-preserving noise to
prevent our attack would significantly damage FL's model utility.
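To make the attack idea concrete, below is a minimal NumPy sketch, not the paper's construction (which exploits feature correlations through a non-linear decision boundary and comes with a certified success guarantee). The single "trap" neuron whose weights the server points at the target record, the max-activation membership signal, the Laplace noise as a stand-in for an LDP mechanism, and the fixed decision threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def craft_malicious_weights(target):
    # Dishonest server: aim one neuron's weight vector at the target record,
    # so the pre-activation <w, x> peaks only when x is (close to) the target.
    return target / np.linalg.norm(target)

def client_signal(w, batch, ldp_scale):
    # Client side: the largest activation of the malicious neuron over the
    # local batch, perturbed with Laplace noise as a stand-in for an LDP
    # mechanism applied to the shared update.
    activation = float((batch @ w).max())
    return activation + rng.laplace(scale=ldp_scale)

def infer_membership(observed, threshold=5.0):
    # Server side: a large observed activation suggests the target was present.
    return observed > threshold

dim = 100
target = rng.normal(size=dim)              # record whose membership is tested
w = craft_malicious_weights(target)

absent = rng.normal(size=(32, dim))        # client batch without the target
present = np.vstack([absent, target])      # client batch containing the target

for label, batch in [("target absent ", absent), ("target present", present)]:
    obs = client_signal(w, batch, ldp_scale=1.0)
    print(f"{label}: signal={obs:6.2f} -> member? {infer_membership(obs)}")
```

In this toy setting, suppressing the roughly ten-unit activation gap between the two cases would require Laplace noise large enough to also swamp legitimate gradient information, which mirrors the abstract's utility-versus-privacy finding.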
Related papers
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection (a toy masking sketch of this leakage appears after this list).
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution, by consolidating collaborative training across multiple data owners.
FedIT encounters limitations such as scarcity of instructional data and risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Membership Information Leakage in Federated Contrastive Learning [7.822625013699216]
Federated Contrastive Learning (FCL) represents a burgeoning approach for learning from decentralized unlabeled data.
FCL is susceptible to privacy risks, such as membership information leakage, stemming from its distributed nature.
This study delves into the feasibility of executing a membership inference attack on FCL and proposes a robust attack methodology.
arXiv Detail & Related papers (2024-03-06T19:53:25Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective against such attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIA), which aim to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy [6.343100139647636]
Federated learning (FL) is designed to preserve data privacy during model training.
FL is vulnerable to privacy attacks and Byzantine attacks.
We propose a new FL scheme that guarantees rigorous privacy and simultaneously enhances system robustness against Byzantine attacks.
arXiv Detail & Related papers (2023-09-07T01:39:02Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN release and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- A New Implementation of Federated Learning for Privacy and Security Enhancement [27.612480082254486]
Federated learning (FL) has emerged as a new machine learning setting.
No local data needs to be shared, and privacy can be well protected.
We propose a model update based federated averaging algorithm to defend against Byzantine attacks.
arXiv Detail & Related papers (2022-08-03T03:13:19Z)
- A Framework for Evaluating Gradient Leakage Attacks in Federated Learning [14.134217287912008]
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients.
Recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks.
We present a principled framework for evaluating and comparing different forms of client privacy leakage attacks.
arXiv Detail & Related papers (2020-04-22T05:15:03Z)
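For the secure-aggregation entry above, here is a toy pairwise additive-masking scheme in the spirit of SecAgg. It is a simplification for illustration, not that paper's analysis: the three-client setup and the all-but-one collusion scenario are assumptions. The point is that each masked update looks random in isolation, yet the server still learns the exact aggregate and, with collusion, can isolate one client's update.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, dim = 3, 5
updates = rng.normal(size=(n_clients, dim))   # each client's true model update

# Pairwise additive masks: for i < j, client i adds m[(i, j)] and client j
# subtracts it, so every mask cancels in the server-side sum.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    # What client i actually sends: its update plus/minus the pairwise masks.
    out = updates[i].copy()
    for (a, b), m in masks.items():
        if a == i:
            out += m
        elif b == i:
            out -= m
    return out

received = [masked_update(i) for i in range(n_clients)]

# Individually, the received vectors look like noise to the server,
# but their sum is the exact, un-noised aggregate:
aggregate = np.sum(received, axis=0)
assert np.allclose(aggregate, updates.sum(axis=0))

# If the server knows (or controls) every client's input but one, it can
# subtract those inputs and recover the victim's update exactly, the kind
# of leakage a membership-inference adversary can exploit.
recovered = aggregate - updates[1:].sum(axis=0)
assert np.allclose(recovered, updates[0])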
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.