Defending against Adversarial Denial-of-Service Attacks
- URL: http://arxiv.org/abs/2104.06744v1
- Date: Wed, 14 Apr 2021 09:52:36 GMT
- Title: Defending against Adversarial Denial-of-Service Attacks
- Authors: Nicolas M. Müller, Simon Roschmann, Konstantin Böttinger
- Abstract summary: Data poisoning is one of the most relevant security threats against machine learning and data-driven technologies.
We propose a new approach to detecting DoS-poisoned instances.
We evaluate our defence against two DoS poisoning attacks on seven datasets, and find that it reliably identifies poisoned instances.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data poisoning is one of the most relevant security threats against machine
learning and data-driven technologies. Since many applications rely on
untrusted training data, an attacker can easily craft malicious samples and
inject them into the training dataset to degrade the performance of machine
learning models. As recent work has shown, such Denial-of-Service (DoS) data
poisoning attacks are highly effective. To mitigate this threat, we propose a
new approach to detecting DoS-poisoned instances. In comparison to related
work, we deviate from clustering- and anomaly-detection-based approaches, which
often suffer from the curse of dimensionality and arbitrary anomaly-threshold
selection. Rather, our defence is based on extracting information from the
training data in such a generalized manner that we can identify poisoned
samples based on the information present in the unpoisoned portion of the data.
We evaluate our defence against two DoS poisoning attacks on seven datasets,
and find that it reliably identifies poisoned instances. In comparison to
related work, our defence improves false-positive and false-negative rates by at
least 50%, often more.
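The abstract does not spell out the extraction mechanism, so the following is only a minimal sketch of the general idea, assuming an out-of-fold label-agreement criterion: a sample is flagged when a model trained on the remaining data cannot reproduce its label. All function and parameter names are illustrative, not the authors' implementation.

```python
# Illustrative sketch only: flag candidate DoS-poisoned samples by checking
# whether each label is consistent with a model fit on the rest of the data.
# The k-fold relabelling criterion is an assumption, not the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def flag_suspicious(X: np.ndarray, y: np.ndarray, n_folds: int = 5) -> np.ndarray:
    """Boolean mask: True where the out-of-fold prediction disagrees with y."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    # Each sample is predicted by a model that never saw it during training,
    # so the unpoisoned majority can "outvote" a poisoned minority.
    y_oof = cross_val_predict(clf, X, y, cv=n_folds)
    return y_oof != y

# Usage: drop flagged samples before training the final model.
# mask = flag_suspicious(X_train, y_train)
# X_clean, y_clean = X_train[~mask], y_train[~mask]
```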
Related papers
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- Temporal Robustness against Data Poisoning [69.01705108817785]
Data poisoning considers cases when an adversary manipulates the behavior of machine learning algorithms through malicious training data.
We propose a temporal threat model of data poisoning with two novel metrics, earliness and duration, which respectively measure how far in advance an attack started and how long it lasted (see the sketch after this entry).
arXiv Detail & Related papers (2023-02-07T18:59:19Z)
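Taking the two definitions above literally, both metrics reduce to simple arithmetic over the timestamps of the poisoned samples; the timestamp representation and the reference point used for earliness are assumptions made here for illustration.

```python
# Hypothetical helpers for the temporal threat model's two metrics, following
# the summary above literally; the paper's formal definitions may differ.
from datetime import datetime

def earliness(poison_times: list[datetime], reference: datetime) -> int:
    """Days between the first poisoned sample and a reference point
    (e.g. the victim's training cutoff): how far in advance the attack began."""
    return (reference - min(poison_times)).days

def duration(poison_times: list[datetime]) -> int:
    """Days between the first and last poisoned samples: how long the attack lasted."""
    return (max(poison_times) - min(poison_times)).days
```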
- Amplifying Membership Exposure via Data Poisoning [18.799570863203858]
In this paper, we investigate the third type of exploitation of data poisoning: increasing the risk of privacy leakage for benign training samples.
We propose a set of data poisoning attacks to amplify the membership exposure of the targeted class.
Our results show that the proposed attacks can substantially increase membership-inference precision with minimal degradation of overall test-time model performance (a brief illustration of the membership-inference signal follows this entry).
arXiv Detail & Related papers (2022-11-01T13:52:25Z)
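For context on what "membership exposure" means here, a standard loss-threshold membership-inference test is sketched below; the poisoning attacks in the paper are designed to widen the member/non-member loss gap that such a test exploits. The threshold and array interface are assumptions, not the paper's attack.

```python
# Minimal loss-threshold membership-inference test, shown only to illustrate
# the signal that the poisoning attacks amplify; not the paper's own attack.
import numpy as np

def membership_guesses(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Guess 'member' where the model's per-sample loss is below a threshold.
    Models usually fit training members more tightly than unseen points, and
    poisoning the target class widens that gap, raising the test's precision."""
    return losses < threshold
```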
- Autoregressive Perturbations for Data Poisoning [54.205200221427994]
Data scraping from social media has led to growing concerns regarding unauthorized use of data.
Data poisoning attacks have been proposed as a bulwark against scraping.
We introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset (a toy sketch follows this entry).
arXiv Detail & Related papers (2022-06-08T06:24:51Z)
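A toy sketch of why an autoregressive process needs no access to other data: the perturbation is generated entirely from a seed and a recurrence. The AR(1) coefficient, scale, and l_inf bound below are arbitrary illustrative choices, not the paper's generating process.

```python
# Toy AR(1) perturbation: each value depends only on the previous value plus
# fresh noise, so the pattern is produced without looking at any other data.
# Coefficient and epsilon are illustrative choices only.
import numpy as np

def ar1_perturbation(shape: tuple[int, ...], coeff: float = 0.9,
                     eps: float = 8 / 255, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=int(np.prod(shape)))
    signal = np.zeros_like(noise)
    for t in range(1, len(noise)):
        signal[t] = coeff * signal[t - 1] + noise[t]  # AR(1) recurrence
    # Rescale into an l_inf ball, as is common for imperceptible poisons.
    signal *= eps / (np.abs(signal).max() + 1e-12)
    return signal.reshape(shape)

# Example: poison = np.clip(image + ar1_perturbation(image.shape), 0.0, 1.0)
```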
- Accumulative Poisoning Attacks on Real-time Data [56.96241557830253]
We show that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects.
arXiv Detail & Related papers (2021-06-18T08:29:53Z)
- Property Inference From Poisoning [15.105224455937025]
Property inference attacks consider an adversary who has access to the trained model and tries to extract some global statistics of the training data.
We study poisoning attacks where the goal of the adversary is to increase the information leakage of the model.
Our findings suggest that poisoning attacks can significantly boost information leakage and should be considered a stronger threat model in sensitive applications.
arXiv Detail & Related papers (2021-01-26T20:35:28Z)
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label".
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset (a sketch of the gradient-matching objective follows this entry).
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
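The "gradient matching" idea admits a compact sketch: choose poison perturbations so that the training gradient on the poisoned batch aligns, in cosine similarity, with the gradient of the attacker's adversarial objective on the target. The PyTorch-style plumbing below is an assumed interface, not the authors' code.

```python
# Sketch of a gradient-matching objective in the spirit of Witches' Brew:
# 1 - cosine similarity between the gradient a victim would compute on the
# poisoned batch and the gradient of the attacker's target objective.
# Model, criterion, and tensor names are assumed plumbing.
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, criterion, poison_x, poison_y,
                           target_x, target_y_adv):
    params = [p for p in model.parameters() if p.requires_grad]
    # Gradient induced by the (differentiably perturbed) poisoned batch.
    g_poison = torch.autograd.grad(criterion(model(poison_x), poison_y),
                                   params, create_graph=True)
    # Gradient that would push the target toward the attacker's chosen label.
    g_target = torch.autograd.grad(criterion(model(target_x), target_y_adv),
                                   params)
    sims = [F.cosine_similarity(gp.flatten(), gt.flatten(), dim=0)
            for gp, gt in zip(g_poison, g_target)]
    # Minimizing this w.r.t. the poison perturbations aligns the two gradients.
    return 1.0 - torch.stack(sims).mean()
```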
- Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks [74.88735178536159]
Data poisoning is the number one concern among threats ranging from model stealing to adversarial attacks.
We observe that data poisoning and backdoor attacks are highly sensitive to variations in the testing setup.
We apply rigorous tests to determine the extent to which we should fear them.
arXiv Detail & Related papers (2020-06-22T18:34:08Z)
- A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks [40.044030156696145]
Poisoning attacks have emerged as a significant security threat to machine learning algorithms.
Some of the stronger poisoning attacks require the full knowledge of the training data.
We show that full-information adversaries are provably stronger than the optimal data-oblivious attacker.
arXiv Detail & Related papers (2020-03-26T16:40:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.