A Survey of Privacy Attacks in Machine Learning
- URL: http://arxiv.org/abs/2007.07646v3
- Date: Sat, 16 Sep 2023 15:12:53 GMT
- Title: A Survey of Privacy Attacks in Machine Learning
- Authors: Maria Rigaki and Sebastian Garcia
- Abstract summary: This research is an analysis of more than 40 papers related to privacy attacks against machine learning.
An initial exploration of the causes of privacy leaks is presented, as well as a detailed analysis of the different attacks.
We present an overview of the most commonly proposed defenses and a discussion of the open problems and future directions identified during our analysis.
- Score: 0.7614628596146599
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As machine learning becomes more widely used, the need to study its
implications in security and privacy becomes more urgent. Although the body of
work in privacy has been steadily growing over the past few years, research on
the privacy aspects of machine learning has received less focus than the
security aspects. Our contribution in this research is an analysis of more than
40 papers related to privacy attacks against machine learning that have been
published during the past seven years. We propose an attack taxonomy, together
with a threat model that allows the categorization of different attacks based
on the adversarial knowledge, and the assets under attack. An initial
exploration of the causes of privacy leaks is presented, as well as a detailed
analysis of the different attacks. Finally, we present an overview of the most
commonly proposed defenses and a discussion of the open problems and future
directions identified during our analysis.
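As a concrete illustration of the black-box end of such a threat model, the following minimal sketch implements a confidence-thresholding membership inference test. The synthetic data, the scikit-learn logistic regression target, and the 0.9 threshold are illustrative assumptions, not an attack taken from the surveyed papers.
```python
# Minimal sketch (assumed setup): a black-box membership inference test that
# guesses "member" when the target model's top predicted probability on a
# record exceeds a threshold. Dataset, model, and threshold are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train = X[:1000], y[:1000]   # records the model was trained on (members)
X_out, y_out = X[1000:], y[1000:]       # records it never saw (non-members)

target = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def infer_membership(model, X, threshold=0.9):
    """Guess membership from the model's confidence (black-box access only)."""
    confidence = model.predict_proba(X).max(axis=1)
    return confidence > threshold

print("flagged as members (train):   ", infer_membership(target, X_train).mean())
print("flagged as members (held-out):", infer_membership(target, X_out).mean())
```
On a well-regularized model the two reported rates will be close; the gap that opens up on an overfitted model is the signal that membership inference attacks exploit.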
Related papers
- A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks [42.3024294376025]
Machine unlearning is a research hotspot in the field of privacy protection.
Recent research has found potential privacy leakage in various machine unlearning approaches.
We analyze privacy risks in various aspects, including definitions, implementation methods, and real-world applications.
arXiv Detail & Related papers (2024-06-10T11:31:04Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses [40.77270226912783]
Model Inversion (MI) attacks disclose private information about the training dataset by abusing access to the trained models.
Despite the rapid advances in the field, we lack a comprehensive and systematic overview of existing MI attacks and defenses.
We elaborately analyze and compare numerous recent attacks and defenses on Deep Neural Networks (DNNs) across multiple modalities and learning tasks.
arXiv Detail & Related papers (2024-02-06T14:06:23Z)
- "Why do so?" -- A Practical Perspective on Machine Learning Security [21.538956161215555]
We analyze attack occurrence and practitioner concern in a study with 139 industrial practitioners.
Our results shed light on real-world attacks on deployed machine learning.
Our work paves the way for more research about adversarial machine learning in practice.
arXiv Detail & Related papers (2022-07-11T19:58:56Z)
- The Privacy Onion Effect: Memorization is Relative [76.46529413546725]
We show an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable exposes a new layer of previously-safe points to the same attack.
This suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
arXiv Detail & Related papers (2022-06-21T15:25:56Z)
- Privacy Threats Analysis to Secure Federated Learning [34.679990191199224]
We analyze the privacy threats in industrial-level federated learning frameworks with secure computation.
We show through theoretical analysis that it is possible for the attacker to invert the entire private input of the victim; a toy sketch of this gradient-inversion idea appears after this list.
arXiv Detail & Related papers (2021-06-24T15:02:54Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a unique taxonomy covering: 1) threat models; 2) poisoning attacks against robustness and their defenses; 3) inference attacks against privacy and their defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI; a minimal noise-addition sketch appears after this list.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
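The gradient-inversion risk described in the federated learning entry above can be made concrete with a deliberately simple case. The linear model, single-example gradient, and squared loss below are toy assumptions, not the protocol analyzed in that paper; they only show why raw gradients are a function of the private data.
```python
# Toy sketch (assumed setup): a curious server recovers a federated client's
# private example from the single-example gradient of a linear model with a
# bias term. Real attacks target batches and deep networks via optimization.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.3          # current global model sent to the client
x_private = rng.normal(size=5)          # the client's private feature vector
y_private = 1.7                         # the client's private label

# Client computes the gradient of 0.5 * (w @ x + b - y)^2 and sends it back.
residual = w @ x_private + b - y_private
grad_w = residual * x_private
grad_b = residual

# Server-side inversion: grad_w / grad_b == x, and y follows from the residual.
x_recovered = grad_w / grad_b
y_recovered = w @ x_recovered + b - grad_b

print(np.allclose(x_recovered, x_private), np.isclose(y_recovered, y_private))
```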
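The differential privacy entry above builds on the standard idea of adding calibrated noise to released statistics. The sketch below shows the Laplace mechanism for an epsilon-differentially-private count query; the records, predicate, and epsilon value are placeholders chosen only for illustration.
```python
# Minimal sketch of the Laplace mechanism: an epsilon-differentially-private
# count query. A counting query has L1 sensitivity 1, so noise scale is 1/eps.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(records, predicate, epsilon):
    """Release a noisy count of records satisfying the predicate."""
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 62, 54, 38]                     # toy private dataset
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))   # noisy count of people aged 40+
```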
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.