Evaluating Membership Inference Attacks and Defenses in Federated
Learning
- URL: http://arxiv.org/abs/2402.06289v1
- Date: Fri, 9 Feb 2024 09:58:35 GMT
- Title: Evaluating Membership Inference Attacks and Defenses in Federated
Learning
- Authors: Gongxi Zhu, Donghao Li, Hanlin Gu, Yuxing Han, Yuan Yao, Lixin Fan,
Qiang Yang
- Abstract summary: Membership Inference Attacks (MIAs) pose a growing threat to privacy preservation in federated learning.
This paper conducts an evaluation of existing MIAs and corresponding defense strategies.
- Score: 23.080346952364884
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Membership Inference Attacks (MIAs) pose a growing threat to privacy
preservation in federated learning. The semi-honest attacker, e.g., the server,
may determine whether a particular sample belongs to a target client according
to the observed model information. This paper conducts an evaluation of
existing MIAs and corresponding defense strategies. Our evaluation on MIAs
reveals two important findings about the trend of MIAs. Firstly, combining
model information from multiple communication rounds (Multi-temporal) enhances
the overall effectiveness of MIAs compared to utilizing model information from
a single epoch. Secondly, incorporating models from non-target clients
(Multi-spatial) significantly improves the effectiveness of MIAs, particularly
when the clients' data is homogeneous. This highlights the importance of
considering the temporal and spatial model information in MIAs. Next, we assess
the effectiveness, via the privacy-utility tradeoff, of two types of defense mechanisms
against MIAs: Gradient Perturbation and Data Replacement. Our results
demonstrate that Data Replacement mechanisms achieve a better balance
between preserving privacy and maintaining model utility. Therefore, we
recommend the adoption of Data Replacement methods as a defense strategy
against MIAs. Our code is available at https://github.com/Liar-Mask/FedMIA.
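The abstract's two key signals can be made concrete with a small sketch. The block below is an illustrative, hypothetical loss-based attack, not the exact method in the FedMIA repository: it scores a candidate sample by contrasting the target client's loss with the other clients' losses (multi-spatial), averaged over several communication rounds (multi-temporal), and includes a toy Gradient Perturbation helper for the defense side. All function names, shapes, and aggregation choices are assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the exact attack from the FedMIA repository.
# Scores a candidate sample from per-sample losses collected across several
# communication rounds (multi-temporal) and across the other clients' models
# (multi-spatial), using the non-target clients as a calibration baseline.
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy(probs: np.ndarray, label: int) -> float:
    """Negative log-likelihood of the true label under one model's softmax output."""
    return -float(np.log(probs[label] + 1e-12))

def membership_score(per_round_client_probs, label: int) -> float:
    """per_round_client_probs[r][c]: softmax output of client c's model at round r
    for the candidate sample; column 0 is the target client. Members tend to get
    lower loss from the target client, so higher score means 'more member-like'."""
    losses = np.array([[cross_entropy(p, label) for p in round_probs]
                       for round_probs in per_round_client_probs])
    target_loss = losses[:, 0].mean()        # multi-temporal: average over rounds
    reference_loss = losses[:, 1:].mean()    # multi-spatial: other clients' models
    return reference_loss - target_loss

def gradient_perturbation(update: np.ndarray, clip: float = 1.0, sigma: float = 0.5):
    """Classic Gradient Perturbation defense (DP-SGD style): clip the client
    update and add Gaussian noise before it is sent to the server."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=update.shape)

# Toy usage: 3 rounds, 4 clients, a 5-class problem.
probs = [[rng.dirichlet(np.ones(5)) for _ in range(4)] for _ in range(3)]
print("membership score:", membership_score(probs, label=2))
print("noisy update:", gradient_perturbation(rng.normal(size=6)))
```

The calibration against non-target clients is one plausible way to exploit the multi-spatial signal when client data is homogeneous; the paper's evaluated attacks may combine the round-wise and client-wise information differently.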
Related papers
- Dual-Model Defense: Safeguarding Diffusion Models from Membership Inference Attacks through Disjoint Data Splitting [6.984396318800444]
Diffusion models have been proven to be vulnerable to Membership Inference Attacks (MIAs).
This paper introduces two novel and efficient approaches to protect diffusion models against MIAs.
arXiv Detail & Related papers (2024-10-22T03:02:29Z)
- Detecting Training Data of Large Language Models via Expectation Maximization [62.28028046993391]
Membership inference attacks (MIAs) aim to determine whether a specific instance was part of a target model's training data.
Applying MIAs to large language models (LLMs) presents unique challenges due to the massive scale of pre-training data and the ambiguous nature of membership.
We introduce EM-MIA, a novel MIA method for LLMs that iteratively refines membership scores and prefix scores via an expectation-maximization algorithm.
arXiv Detail & Related papers (2024-10-10T03:31:16Z)
- MIA-BAD: An Approach for Enhancing Membership Inference Attack and its Mitigation with Federated Learning [6.510488168434277]
The membership inference attack (MIA) is a popular paradigm for compromising the privacy of a machine learning (ML) model.
We propose an enhanced Membership Inference Attack with the Batch-wise generated Attack dataset (MIA-BAD).
We show how training an ML model through FL has some distinct advantages, and investigate how the threat introduced by the proposed MIA-BAD approach can be mitigated with FL approaches.
arXiv Detail & Related papers (2023-11-28T06:51:26Z)
- Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study [17.421886085918608]
Membership inference attacks (MIAs) aim to infer whether a data point has been used to train a machine learning model.
These attacks can be employed to identify potential privacy vulnerabilities and detect unauthorized use of personal data.
This paper takes a first step towards developing practical MIAs against large-scale multi-modal models.
arXiv Detail & Related papers (2023-09-29T19:38:40Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
However, FL faces vulnerabilities such as poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model (a minimal sketch of this density-ratio idea appears after this list).
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
- Improving Robustness to Model Inversion Attacks via Mutual Information Regularization [12.079281416410227]
This paper studies defense mechanisms against model inversion (MI) attacks.
MI is a type of privacy attack aimed at inferring information about the training data distribution given access to a target machine learning model.
We propose the Mutual Information Regularization based Defense (MID) against MI attacks.
arXiv Detail & Related papers (2020-09-11T06:02:44Z)
- How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new MI attacks to utilize the information of augmented data.
We establish the optimal membership inference when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z)
- On the Effectiveness of Regularization Against Membership Inference Attacks [26.137849584503222]
Deep learning models often raise privacy concerns as they leak information about their training data.
This enables an adversary to determine whether a data point was in a model's training set by conducting a membership inference attack (MIA).
While many regularization mechanisms exist, their effectiveness against MIAs has not been studied systematically.
arXiv Detail & Related papers (2020-06-09T15:17:21Z)
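As referenced in the DOMIAS entry above, the density-ratio idea behind that attack can be sketched briefly. The block below is a minimal illustration under stated assumptions: it scores a candidate point by how much more likely it is under a density fit to the model's synthetic output than under a density fit to an independent reference sample from the true distribution. The KDE estimator, bandwidth, and all names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a density-ratio membership test in the spirit of DOMIAS.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.neighbors import KernelDensity

def density_ratio_score(candidate, synthetic_data, reference_data, bandwidth=0.5):
    """Higher score => the generative model is locally overfit around the
    candidate, suggesting the point was part of its training data."""
    kde_syn = KernelDensity(bandwidth=bandwidth).fit(synthetic_data)
    kde_ref = KernelDensity(bandwidth=bandwidth).fit(reference_data)
    x = np.asarray(candidate, dtype=float).reshape(1, -1)
    # score_samples returns log-densities, so the difference is a log density ratio.
    return float(kde_syn.score_samples(x)[0] - kde_ref.score_samples(x)[0])

# Toy usage with 2-D Gaussian blobs standing in for synthetic and reference data.
rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, size=(500, 2))
reference = rng.normal(0.5, 1.0, size=(500, 2))
print(density_ratio_score([0.0, 0.0], synthetic, reference))
```

In practice the density estimator and the choice of reference data dominate the quality of such a test; kernel density estimation is used here only because it keeps the sketch self-contained.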