A Survey of Privacy Attacks in Machine Learning
- URL: http://arxiv.org/abs/2007.07646v3
- Date: Sat, 16 Sep 2023 15:12:53 GMT
- Title: A Survey of Privacy Attacks in Machine Learning
- Authors: Maria Rigaki and Sebastian Garcia
- Abstract summary: This research is an analysis of more than 40 papers related to privacy attacks against machine learning.
An initial exploration of the causes of privacy leaks is presented, as well as a detailed analysis of the different attacks.
We present an overview of the most commonly proposed defenses and a discussion of the open problems and future directions identified during our analysis.
- Score: 0.7614628596146599
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As machine learning becomes more widely used, the need to study its
implications in security and privacy becomes more urgent. Although the body of
work in privacy has been steadily growing over the past few years, research on
the privacy aspects of machine learning has received less focus than the
security aspects. Our contribution in this research is an analysis of more than
40 papers related to privacy attacks against machine learning that have been
published during the past seven years. We propose an attack taxonomy, together
with a threat model that allows the categorization of different attacks based
on the adversarial knowledge and the assets under attack. An initial
exploration of the causes of privacy leaks is presented, as well as a detailed
analysis of the different attacks. Finally, we present an overview of the most
commonly proposed defenses and a discussion of the open problems and future
directions identified during our analysis.
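As background for the taxonomy, one of the attack families analyzed is membership inference: deciding whether a given example was part of a model's training set. Below is a minimal, hypothetical sketch of a loss-threshold variant of this attack; the probability matrix, labels, and threshold are illustrative assumptions, not the survey's own implementation.

```python
import numpy as np

def per_example_loss(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Cross-entropy loss for each example, given the target model's
    predicted class probabilities (shape [n, num_classes])."""
    eps = 1e-12  # avoid log(0)
    return -np.log(probs[np.arange(len(labels)), labels] + eps)

def membership_inference(probs: np.ndarray, labels: np.ndarray,
                         threshold: float) -> np.ndarray:
    """Predict membership: models tend to fit training points more
    tightly than unseen data, so an unusually low loss suggests the
    example was a training member. Returns True for 'member'."""
    return per_example_loss(probs, labels) < threshold
```

In practice the threshold is usually calibrated on data the attacker knows to be non-members, or the simple threshold is replaced by shadow models trained to imitate the target.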
Related papers
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of the private training data.
Despite their significance, there is a lack of systematic studies that provide a comprehensive overview of and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
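As a rough illustration of the idea behind model inversion, the sketch below performs gradient ascent on an input to maximize a target class's confidence, recovering a class-representative input. It is a generic, hypothetical PyTorch example, not a method from the surveyed papers; the model, input shape, and hyperparameters are assumptions.

```python
import torch

def invert_class(model: torch.nn.Module, target_class: int,
                 input_shape=(1, 3, 32, 32), steps: int = 500,
                 lr: float = 0.1) -> torch.Tensor:
    """Reconstruct a representative input for `target_class` by
    maximizing the model's confidence in that class (a basic
    gradient-based model inversion)."""
    model.eval()
    x = torch.zeros(input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(x)
        # Minimize the negative log-probability of the target class.
        loss = -torch.log_softmax(logits, dim=1)[0, target_class]
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep the input in a valid pixel range
    return x.detach()
```

Stronger attacks in the literature add image priors, e.g. GAN generators, so that reconstructions look realistic rather than like adversarial noise.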
- New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
Current literature lacks a clear taxonomy of the emerging attacks on and defenses of pre-trained models.
The survey proposes a taxonomy that categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z)
- A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks [42.3024294376025]
Machine unlearning is a research hotspot in the field of privacy protection.
Recent research has found potential privacy leaks in various machine unlearning approaches.
We analyze privacy risks in various aspects, including definitions, implementation methods, and real-world applications.
arXiv Detail & Related papers (2024-06-10T11:31:04Z)
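To make the unlearning risk concrete: one known leak is that comparing a model's outputs before and after an unlearning request can reveal whether the deleted example was ever a training member. The sketch below is a hypothetical illustration of that comparison; the two model snapshots and the decision threshold are assumptions, not a method from the survey.

```python
import numpy as np

def unlearning_leak(probs_before: np.ndarray, probs_after: np.ndarray,
                    threshold: float) -> bool:
    """Flag a deleted example as a likely former training member when the
    model's prediction on it shifts sharply after unlearning. Both inputs
    are the model's class-probability vectors for that single example."""
    # L1 distance between pre- and post-unlearning predictions; genuine
    # members tend to move more, since the model stops fitting them.
    shift = np.abs(probs_before - probs_after).sum()
    return shift > threshold
```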
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey of privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses [40.77270226912783]
Model Inversion (MI) attacks disclose private information about the training dataset by abusing access to the trained models.
Despite the rapid advances in the field, we lack a comprehensive and systematic overview of existing MI attacks and defenses.
We systematically analyze and compare numerous recent attacks and defenses on Deep Neural Networks (DNNs) across multiple modalities and learning tasks.
arXiv Detail & Related papers (2024-02-06T14:06:23Z)
- "Why do so?" -- A Practical Perspective on Machine Learning Security [21.538956161215555]
We analyze the occurrence of and concern about attacks through a survey of 139 industrial practitioners.
Our results shed light on real-world attacks on deployed machine learning.
Our work paves the way for more research about adversarial machine learning in practice.
arXiv Detail & Related papers (2022-07-11T19:58:56Z)
- The Privacy Onion Effect: Memorization is Relative [76.46529413546725]
We show an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable exposes a new layer of previously-safe points to the same attack.
This effect suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
arXiv Detail & Related papers (2022-06-21T15:25:56Z)
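A rough way to picture the Onion Effect is an iterative audit: attack the training set, remove the most vulnerable examples, retrain, and attack again; previously safe points then surface as vulnerable. The loop below is a hypothetical illustration of that procedure; `train`, `attack_scores`, and the removal fraction are placeholder assumptions, not the paper's experimental setup.

```python
def onion_audit(data, train, attack_scores, rounds=3, remove_frac=0.01):
    """Illustrate the Onion Effect: repeatedly strip the most attack-
    vulnerable training points and re-attack, exposing a new 'layer'
    of previously safe points each round.

    train(data) -> model; attack_scores(model, data) -> per-example
    vulnerability scores (higher = more exposed)."""
    for r in range(rounds):
        model = train(data)
        scores = attack_scores(model, data)
        k = max(1, int(len(data) * remove_frac))
        # Indices of the k most vulnerable examples in this round.
        ranked = sorted(range(len(data)), key=lambda i: scores[i],
                        reverse=True)
        exposed = set(ranked[:k])
        print(f"round {r}: removing {k} most-vulnerable examples")
        data = [x for i, x in enumerate(data) if i not in exposed]
    return data
```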
- Privacy Threats Analysis to Secure Federated Learning [34.679990191199224]
We analyze the privacy threats in industrial-level federated learning frameworks with secure computation.
We show through theoretical analysis that it is possible for the attacker to invert the entire private input of the victim.
arXiv Detail & Related papers (2021-06-24T15:02:54Z)
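A central threat behind such federated-learning results is gradient inversion: reconstructing a client's private input from the gradients it shares. The sketch below is a bare-bones, hypothetical variant in PyTorch, in the spirit of "deep leakage from gradients"-style attacks rather than the paper's own construction; the model, label knowledge, and optimizer settings are assumptions.

```python
import torch

def invert_gradients(model, true_grads, label,
                     input_shape=(1, 3, 32, 32), steps=300, lr=0.1):
    """Recover a client's private input by optimizing a dummy input so
    that the gradients it induces match the gradients the client shared."""
    loss_fn = torch.nn.CrossEntropyLoss()
    dummy_x = torch.randn(input_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy_x], lr=lr)
    y = torch.tensor([label])
    for _ in range(steps):
        opt.zero_grad()
        pred_loss = loss_fn(model(dummy_x), y)
        grads = torch.autograd.grad(pred_loss, model.parameters(),
                                    create_graph=True)
        # Squared distance between dummy gradients and the observed ones.
        grad_diff = sum(((g - t) ** 2).sum()
                        for g, t in zip(grads, true_grads))
        grad_diff.backward()
        opt.step()
    return dummy_x.detach()
```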
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.