A Survey on Poisoning Attacks Against Supervised Machine Learning
- URL: http://arxiv.org/abs/2202.02510v2
- Date: Tue, 8 Feb 2022 02:06:14 GMT
- Title: A Survey on Poisoning Attacks Against Supervised Machine Learning
- Authors: Wenjun Qiu
- Abstract summary: We present a survey covering the most representative papers on poisoning attacks against supervised machine learning models.
We summarize and compare the methodology and limitations of the existing literature.
We conclude with potential improvements and future directions for further exploiting and preventing poisoning attacks on supervised models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rise of artificial intelligence and machine learning in modern
computing, one of the major concerns regarding such techniques is providing
privacy and security against adversaries. We present this survey to cover the
most representative papers on poisoning attacks against supervised machine
learning models. We first provide a taxonomy to categorize existing studies and
then present detailed summaries of selected papers. We summarize and compare
the methodology and limitations of the existing literature. We conclude with
potential improvements and future directions for further exploiting and
preventing poisoning attacks on supervised models, and we propose several open
research questions to encourage and inspire future work.
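To make the survey's subject concrete, the sketch below illustrates a toy label-flipping poisoning attack: a fraction of the training labels is flipped before fitting a classifier, and test accuracy is compared with and without the tampering. The dataset, model, and 20% flip rate are illustrative assumptions only and are not drawn from the paper.

```python
# Minimal, illustrative label-flipping poisoning sketch (assumptions:
# scikit-learn, a synthetic dataset, logistic regression, 20% flip rate;
# none of these specifics come from the survey itself).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def flip_labels(y, rate, rng):
    """Flip a `rate` fraction of binary labels -- a toy poisoning attack."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

rng = np.random.default_rng(0)
clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
poisoned_acc = LogisticRegression(max_iter=1000).fit(
    X_tr, flip_labels(y_tr, rate=0.2, rng=rng)).score(X_te, y_te)
print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```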
Related papers
- Adaptive Anomaly Detection for Identifying Attacks in Cyber-Physical Systems: A Systematic Literature Review [4.580544659826873]
We present a systematic literature review (SLR) on adaptive anomaly detection (AAD) research.
AAD is among the most promising techniques to detect evolving cyberattacks.
We introduce a novel taxonomy considering attack types, CPS application, learning paradigm, data management, and algorithms.
We aim to help researchers to advance the state of the art and help practitioners to become familiar with recent progress in this field.
arXiv Detail & Related papers (2024-11-21T16:32:02Z) - Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged, which aims to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - A Survey of Defenses against AI-generated Visual Media: Detection, Disruption, and Authentication [15.879482578829489]
Deep generative models have demonstrated impressive performance in various computer vision applications.
These models may be used for malicious purposes, such as misinformation, deception, and copyright violation.
This paper provides a systematic and timely review of research efforts on defenses against AI-generated visual media.
arXiv Detail & Related papers (2024-07-15T09:46:02Z) - Towards more Practical Threat Models in Artificial Intelligence Security [66.67624011455423]
Recent works have identified a gap between research and practice in artificial intelligence security.
We revisit the threat models of the six most studied attacks in AI security research and match them to AI usage in practice.
arXiv Detail & Related papers (2023-11-16T16:09:44Z) - Adversarial attacks and defenses in explainable artificial intelligence:
A survey [11.541601343587917]
Recent advances in adversarial machine learning (AdvML) highlight the limitations and vulnerabilities of state-of-the-art explanation methods.
This survey provides a comprehensive overview of research concerning adversarial attacks on explanations of machine learning models.
arXiv Detail & Related papers (2023-06-06T09:53:39Z) - Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z) - Wild Patterns Reloaded: A Survey of Machine Learning Security against
Training Data Poisoning [32.976199681542845]
We provide a comprehensive systematization of poisoning attacks and defenses in machine learning.
We start by categorizing the current threat models and attacks, and then organize existing defenses accordingly.
We argue that our systematization also encompasses state-of-the-art attacks and defenses for other data modalities.
arXiv Detail & Related papers (2022-05-04T11:00:26Z) - Poisoning Attacks and Defenses on Artificial Intelligence: A Survey [3.706481388415728]
Data poisoning attacks are a type of attack in which the data samples fed to the model during the training phase are tampered with, degrading the model's accuracy during the inference phase.
This work compiles the most relevant insights and findings from the latest literature addressing this type of attack.
A thorough assessment is performed on the reviewed works, comparing the effects of data poisoning on a wide range of ML models in real-world conditions.
arXiv Detail & Related papers (2022-02-21T14:43:38Z) - A Review of Adversarial Attack and Defense for Classification Methods [78.50824774203495]
This paper focuses on the generation and guarding of adversarial examples.
It is the hope of the authors that this paper will encourage more statisticians to work in this important and exciting field of generating and defending against adversarial examples.
arXiv Detail & Related papers (2021-11-18T22:13:43Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z)