Membership Inference Attacks on Machine Learning: A Survey
- URL: http://arxiv.org/abs/2103.07853v2
- Date: Wed, 17 Mar 2021 03:21:35 GMT
- Title: Membership Inference Attacks on Machine Learning: A Survey
- Authors: Hongsheng Hu and Zoran Salcic and Gillian Dobbie and Xuyun Zhang
- Abstract summary: A membership inference attack aims to identify whether a data sample was used to train a machine learning model.
Such attacks can pose severe privacy risks, as membership can reveal an individual's sensitive information.
We present the first comprehensive survey of membership inference attacks.
- Score: 6.468846906231666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A membership inference attack aims to identify whether a data sample was
used to train a machine learning model. Such attacks can pose severe privacy
risks, as membership can reveal an individual's sensitive information. For
example, identifying an individual's participation in a hospital's health
analytics training set reveals that this individual was once a patient in that
hospital. Membership inference attacks have been shown to be effective on
various machine learning models, such as classification models, generative
models, and sequence-to-sequence models. Meanwhile, many methods have been
proposed to defend against such privacy attacks. Although membership inference
is an emerging and rapidly growing research area, no comprehensive survey on
the topic exists yet. In this paper, we bridge this important gap in the
membership inference attack literature. We present the first comprehensive
survey of membership inference attacks. We summarize and categorize existing
membership inference attacks and defenses and explicitly present how to
implement attacks in various settings. In addition, we discuss why membership
inference attacks work and summarize benchmark datasets to facilitate
comparison and ensure fairness in future work. Finally, we propose several
directions for future research and possible applications that build on the
reviewed works.
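To make the attack concrete, the sketch below shows the classic confidence-thresholding membership inference attack, assuming only query access to a trained classifier's predicted probabilities; the model, data, and threshold are illustrative placeholders rather than anything prescribed by the survey.

```python
# Minimal sketch of a confidence-thresholding membership inference attack.
# Assumes query access to predict_proba of a trained classifier; the model,
# data, and threshold below are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

target = RandomForestClassifier(random_state=0).fit(X_member, y_member)

def membership_score(model, X):
    # Confidence in the top predicted class; overfit models tend to be
    # more confident on training members than on unseen points.
    return model.predict_proba(X).max(axis=1)

threshold = 0.9  # in practice tuned on shadow models, not hard-coded
print("members flagged:    ", (membership_score(target, X_member) >= threshold).mean())
print("non-members flagged:", (membership_score(target, X_nonmember) >= threshold).mean())
```

Because the random forest heavily memorizes its training set, the flagged fraction is much higher for members than non-members, which is exactly the gap the attack exploits.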
Related papers
- Inference Attacks: A Taxonomy, Survey, and Promising Directions [44.290208239143126]
This survey provides an in-depth and comprehensive review of inference attacks and corresponding countermeasures in ML-as-a-service.
We first propose the 3MP taxonomy, based on the current state of community research, to normalize the confusing naming conventions of inference attacks.
We also analyze the pros and cons of each type of inference attack, its workflow and countermeasures, and how it interacts with other attacks.
arXiv Detail & Related papers (2024-06-04T07:06:06Z)
- Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks [65.21536453075275]
We focus on the summarization task and investigate membership inference (MI) attacks.
We exploit text similarity and the model's resistance to document modifications as potential MI signals.
We discuss several safeguards for training summarization models against MI attacks, as well as the inherent trade-off between privacy and utility.
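A hedged sketch of the text-similarity signal described above: a unigram-overlap score between the target model's output and the reference summary is thresholded, with the target model injected as a callable since its API is not specified here.

```python
# Hedged sketch of a similarity-based MI signal for summarization models:
# if the target model reproduces a document's reference summary nearly
# verbatim, the document is more likely to be a training member.
from typing import Callable

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1, a simple stand-in for ROUGE-1."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    overlap = len(set(cand) & set(ref))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def is_likely_member(document: str, reference_summary: str,
                     generate_summary: Callable[[str], str],
                     threshold: float = 0.8) -> bool:
    # generate_summary is a hypothetical caller-supplied query to the
    # target model; the threshold is illustrative.
    candidate = generate_summary(document)
    return rouge1_f1(candidate, reference_summary) >= threshold
```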
arXiv Detail & Related papers (2023-10-20T05:44:39Z)
- Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models [11.842337448801066]
We present a large-scale measurement of different membership inference attacks and defenses.
We find that some threat-model assumptions, such as requiring the same architecture and data distribution for shadow and target models, are unnecessary.
We are also the first to execute attacks on real-world data collected from the Internet rather than laboratory datasets.
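For context, the sketch below shows the classic shadow-model pipeline whose same-architecture and same-distribution assumptions the paper relaxes; the models and synthetic data are illustrative.

```python
# Sketch of the classic shadow-model attack pipeline: a shadow model mimics
# the target, and an attack classifier is trained on the shadow's confidence
# vectors labeled as member (1) or non-member (0). Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=4000, n_features=20, random_state=1)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=1)

# Shadow model trained on data the adversary controls, so membership is known.
shadow = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                       random_state=1).fit(X_in, y_in)

features = np.vstack([shadow.predict_proba(X_in), shadow.predict_proba(X_out)])
labels = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
attack_model = LogisticRegression(max_iter=1000).fit(features, labels)

# At attack time, the *target* model's confidence vectors are fed to
# attack_model to predict membership.
```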
arXiv Detail & Related papers (2022-08-22T17:00:53Z)
- Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubt on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z)
- Enhanced Membership Inference Attacks against Machine Learning Models [9.26208227402571]
Membership inference attacks are used to quantify the private information that a model leaks about the individual data points in its training set.
We derive new attack algorithms that achieve a high AUC score while also highlighting the different factors that affect their performance.
Our algorithms capture a precise approximation of the privacy loss in models and can be used to perform an accurate and informed estimation of the privacy risk of machine learning models.
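As an illustration of the AUC metric referenced above, the sketch below scores each example by its negated per-example loss and computes the resulting attack AUC over members and non-members; the model and data are placeholders.

```python
# Illustrative AUC evaluation of a loss-based membership score: members of
# the training set tend to have lower loss, so -loss serves as the score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=2)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=2)
model = RandomForestClassifier(random_state=2).fit(X_in, y_in)

def neg_loss(model, X, y):
    # Negated per-example cross-entropy: higher means "more member-like".
    proba = np.clip(model.predict_proba(X), 1e-12, 1.0)
    return np.log(proba[np.arange(len(y)), y])

scores = np.concatenate([neg_loss(model, X_in, y_in),
                         neg_loss(model, X_out, y_out)])
membership = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
print("attack AUC:", roc_auc_score(membership, scores))
```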
arXiv Detail & Related papers (2021-11-18T13:31:22Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection framework with Dual-Phase Privacy Preservation.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that publishes only labels is still susceptible to sampling attacks, and that the adversary can recover up to 100% of the attack's performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model, as well as output perturbation at prediction time.
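A minimal sketch of the output-perturbation half of that defense, assuming a scikit-learn-style classifier; the noise scale is illustrative, and calibrating it to a formal differential privacy guarantee would require a sensitivity analysis not shown here.

```python
# Hedged sketch of output perturbation at prediction time: Gaussian noise
# is added to the released prediction scores to blunt score-based and
# repeated-query membership signals. The noise scale sigma is illustrative;
# a formal DP guarantee would require calibrating it to the score sensitivity.
import numpy as np

def perturbed_predict_proba(model, X, sigma=0.05, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    proba = model.predict_proba(X)
    noisy = proba + rng.normal(scale=sigma, size=proba.shape)
    noisy = np.clip(noisy, 1e-6, None)               # keep scores positive
    return noisy / noisy.sum(axis=1, keepdims=True)  # renormalize each row
```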
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- Membership Leakage in Label-Only Exposures [10.875144776014533]
We propose decision-based membership inference attacks against machine learning models.
In particular, we develop two types of decision-based attacks, namely the transfer attack and the boundary attack.
We also present new insights into the success of membership inference, based on quantitative and qualitative analysis.
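In the spirit of the boundary attack, the sketch below estimates a label-only membership signal from how well a point's predicted label survives input noise; the noise scale, trial count, and threshold are illustrative.

```python
# Hedged sketch of a decision-based (label-only) membership signal: estimate
# how robust the predicted label is under Gaussian input noise; points far
# from the decision boundary (often training members) keep their label more
# often. sigma, n_trials, and threshold are illustrative choices.
import numpy as np

def label_stability(model, x, sigma=0.5, n_trials=100, seed=0):
    """Fraction of noisy copies of x that keep the model's original label."""
    rng = np.random.default_rng(seed)
    base_label = model.predict(x.reshape(1, -1))[0]
    noisy = x + rng.normal(scale=sigma, size=(n_trials, x.shape[0]))
    return float(np.mean(model.predict(noisy) == base_label))

def infer_membership(model, x, threshold=0.9):
    # Higher stability -> farther from the boundary -> more likely a member.
    return label_stability(model, x) >= threshold
```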
arXiv Detail & Related papers (2020-07-30T15:27:55Z)