Support Vector Machines under Adversarial Label Contamination
- URL: http://arxiv.org/abs/2206.00352v1
- Date: Wed, 1 Jun 2022 09:38:07 GMT
- Title: Support Vector Machines under Adversarial Label Contamination
- Authors: Huang Xiao, Battista Biggio, Blaine Nelson, Han Xiao, Claudia Eckert, Fabio Roli
- Abstract summary: We evaluate the security of Support Vector Machines (SVMs) to well-crafted, adversarial label noise attacks.
In particular, we consider an attacker that aims to maximize the SVM's classification error by flipping a number of labels.
We argue that our approach can also provide useful insights for developing more secure SVM learning algorithms.
- Score: 13.299257835329868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning algorithms are increasingly being applied in
security-related tasks such as spam and malware detection, although their
security properties against deliberate attacks have not yet been widely
understood. Intelligent and adaptive attackers may indeed exploit specific
vulnerabilities exposed by machine learning techniques to violate system
security. Being robust to adversarial data manipulation is thus an important,
additional requirement for machine learning algorithms to successfully operate
in adversarial settings. In this work, we evaluate the security of Support
Vector Machines (SVMs) to well-crafted, adversarial label noise attacks. In
particular, we consider an attacker that aims to maximize the SVM's
classification error by flipping a number of labels in the training data. We
formalize a corresponding optimal attack strategy, and solve it by means of
heuristic approaches to keep the computational complexity tractable. We report
an extensive experimental analysis on the effectiveness of the considered
attacks against linear and non-linear SVMs, both on synthetic and real-world
datasets. We finally argue that our approach can also provide useful insights
for developing more secure SVM learning algorithms, and also novel techniques
in a number of related research areas, such as semi-supervised and active
learning.
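To make the threat model concrete, below is a minimal Python sketch of a greedy label-flipping attack against an SVM. It illustrates the general idea only, not the optimal strategy formalized in the paper; the greedy heuristic, validation-error objective, and all names are illustrative assumptions.

```python
# A minimal, illustrative greedy label-flip attack against an SVM.
# This is NOT the paper's exact strategy; it only demonstrates the
# threat model (flipping training labels to raise classification error).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def svm_error(X_tr, y_tr, X_val, y_val):
    """Train an SVM on (X_tr, y_tr) and return its validation error."""
    clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
    return 1.0 - clf.score(X_val, y_val)

def greedy_label_flip(X_tr, y_tr, X_val, y_val, budget):
    """Flip up to `budget` labels, greedily picking the flip that most
    increases validation error at each step."""
    y_adv, flipped = y_tr.copy(), set()
    for _ in range(budget):
        best_i, best_err = None, svm_error(X_tr, y_adv, X_val, y_val)
        for i in range(len(y_adv)):
            if i in flipped:
                continue
            y_try = y_adv.copy()
            y_try[i] = 1 - y_try[i]            # flip binary label 0 <-> 1
            err = svm_error(X_tr, y_try, X_val, y_val)
            if err > best_err:
                best_i, best_err = i, err
        if best_i is None:                      # no single flip helps
            break
        y_adv[best_i] = 1 - y_adv[best_i]
        flipped.add(best_i)
    return y_adv

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
print("clean error:   ", svm_error(X_tr, y_tr, X_val, y_val))
y_adv = greedy_label_flip(X_tr, y_tr, X_val, y_val, budget=10)
print("poisoned error:", svm_error(X_tr, y_adv, X_val, y_val))
```

Each greedy step retrains the SVM once per candidate flip, so the search costs on the order of budget x n retrainings; keeping such costs tractable is exactly why the paper resorts to heuristic approximations.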
Related papers
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
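As a loose illustration only (the paper's actual architecture, state space, and rewards are not specified here, and everything below is an assumption), independent tabular Q-learning agents could each guard a network segment and learn when to raise alerts:

```python
# Loose, illustrative sketch: independent tabular Q-learning agents,
# one per network segment, learning to alert on suspicious flows.
# The segments, states, and rewards are all toy assumptions.
import random
from collections import defaultdict

class QAgent:
    def __init__(self, actions=("allow", "alert"), lr=0.1, eps=0.1):
        self.q = defaultdict(float)            # (state, action) -> value
        self.actions, self.lr, self.eps = actions, lr, eps

    def act(self, state):
        if random.random() < self.eps:         # epsilon-greedy exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward):
        # One-step (bandit-style) update; no bootstrapping, for brevity.
        self.q[(state, action)] += self.lr * (reward - self.q[(state, action)])

random.seed(0)
agents = {"dmz": QAgent(), "lan": QAgent()}    # one agent per segment
for _ in range(5000):
    seg = random.choice(list(agents))
    is_attack = random.random() < 0.2
    # Toy observation: a coarse, noisy traffic feature correlated with attacks.
    p_high = 0.9 if is_attack else 0.1
    state = "high_rate" if random.random() < p_high else "normal"
    a = agents[seg].act(state)
    # +1 for correct allow/alert decisions, -1 for misses and false alarms.
    agents[seg].update(state, a, 1.0 if (a == "alert") == is_attack else -1.0)

print({seg: dict(ag.q) for seg, ag in agents.items()})
```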
arXiv Detail & Related papers (2024-07-08T09:18:59Z)
- Threats, Attacks, and Defenses in Machine Unlearning: A Survey [14.03428437751312]
Machine Unlearning (MU) has recently gained considerable attention due to its potential to achieve Safe AI.
This survey aims to fill the gap between the extensive number of individual studies on threats, attacks, and defenses in machine unlearning and the lack of a unified overview of them.
arXiv Detail & Related papers (2024-03-20T15:40:18Z)
- Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
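A hypothetical sketch of such a pipeline, using TF-IDF features and a linear classifier over toy report snippets; the paper's actual pipeline, feature extraction, and label set may differ:

```python
# Hypothetical pipeline: classify unstructured threat-report text into
# ATT&CK-style tactics with TF-IDF features and a linear model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [  # toy report snippets with toy tactic labels
    "attacker sent a phishing email with a malicious attachment",
    "credentials were dumped from lsass process memory",
    "scheduled task created to persist across reboots",
    "spearphishing link delivered to finance staff",
]
tactics = ["initial-access", "credential-access",
           "persistence", "initial-access"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(docs, tactics)
print(clf.predict(["a new scheduled task appeared after the reboot"]))
```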
arXiv Detail & Related papers (2022-07-18T09:59:21Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
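Adversarial training generally means optimizing the model on inputs perturbed to maximize its loss. Below is a self-contained numpy sketch on logistic regression, a simplified stand-in for the paper's jet-tagging networks; the FGSM-style perturbation and all constants are assumptions:

```python
# Self-contained numpy sketch of FGSM-style adversarial training on
# logistic regression; a simplified stand-in for the paper's setting.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # toy labels

w, b, lr, eps = np.zeros(4), 0.0, 0.1, 0.2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # FGSM: shift inputs along the sign of dLoss/dX to maximize loss.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]        # logistic-loss input gradient
    X_adv = X + eps * np.sign(grad_x)
    # Ordinary gradient step, but on the perturbed batch.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * float(np.mean(p_adv - y))

print("clean accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))
```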
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
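Conceptually, cascaded purification passes a perturbed input through a chain of models that each re-estimate a cleaner version. In the toy sketch below, simple smoothing filters stand in for trained self-supervised purifiers, and Gaussian noise stands in for an adversarial perturbation:

```python
# Conceptual sketch: cascaded purification of a perturbed waveform.
# Simple smoothers stand in for trained self-supervised purifier models,
# and Gaussian noise stands in for an adversarial perturbation.
import numpy as np
from functools import partial

def moving_average(x, k):
    """Stand-in 'purifier': a k-tap moving-average smoother."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def cascade_purify(x, purifiers):
    for purify in purifiers:        # each stage re-estimates a cleaner x
        x = purify(x)
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 800)
clean = np.sin(2 * np.pi * 5 * t)
perturbed = clean + 0.3 * rng.normal(size=t.size)

purified = cascade_purify(perturbed,
                          [partial(moving_average, k=k) for k in (3, 5, 9)])
print("distortion before:", np.mean((perturbed - clean) ** 2))
print("distortion after: ", np.mean((purified - clean) ** 2))
```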
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
- Adversarial Robustness for Machine Learning Cyber Defenses Using Log Data [0.0]
We develop a testing framework to evaluate adversarial robustness of machine learning cyber defenses.
We validate our framework using a publicly available dataset and demonstrate that our adversarial attack does succeed against the target systems.
We apply our framework to analyze the influence of different levels of dropout regularization and find that higher dropout levels increase robustness.
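A small sketch of that style of experiment: train the same network at several dropout levels and compare accuracy under an FGSM perturbation. Toy data and the attack stand in for the paper's log data and framework; all hyperparameters are assumptions:

```python
# Sketch: compare adversarial accuracy of the same small network trained
# at several dropout levels. Toy data and FGSM stand in for the paper's
# log data and attack; all hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1000, 20)
y = (X[:, :2].sum(dim=1) > 0).long()

def train_and_attack(p_drop, eps=0.3):
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                          nn.Dropout(p_drop), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(200):                       # train with dropout active
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    model.eval()                               # dropout off at attack time
    X_adv = X.clone().requires_grad_(True)     # FGSM perturbation
    loss_fn(model(X_adv), y).backward()
    X_adv = X + eps * X_adv.grad.sign()
    with torch.no_grad():
        return (model(X_adv).argmax(1) == y).float().mean().item()

for p in (0.0, 0.25, 0.5):
    print(f"dropout={p}: adversarial accuracy={train_and_attack(p):.3f}")
```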
arXiv Detail & Related papers (2020-07-29T17:51:29Z)
- Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey [1.8782750537161614]
This paper studies strategies for implementing adversarially robust training, toward guaranteeing the safety of machine learning algorithms.
We provide a taxonomy to classify adversarial attacks and defenses, formulate the Robust Optimization problem in a min-max setting, and divide it into 3 subcategories, namely: Adversarial (re)Training, Regularization Approach, and Certified Defenses.
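In standard notation (which may differ from the survey's), the min-max robust optimization problem reads:

```latex
% Robust optimization in a min-max setting: the learner chooses
% parameters \theta to minimize the worst-case loss over perturbations
% \delta of norm at most \epsilon.
\min_{\theta} \; \mathbb{E}_{(x,y) \sim \mathcal{D}}
  \Big[ \max_{\|\delta\| \le \epsilon}
        \mathcal{L}\big(f_{\theta}(x + \delta),\, y\big) \Big]
```

Adversarial (re)training approximates the inner maximization with crafted examples, regularization penalizes sensitivity to the perturbation, and certified defenses bound the inner maximum analytically.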
arXiv Detail & Related papers (2020-07-01T21:00:32Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
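One way to picture an instance-level attack on unlabeled data: perturb a sample so its embedding drifts away from the embedding of its own augmented view. The PyTorch sketch below is a simplified stand-in, not the paper's method; the encoder, augmentation, and step sizes are all assumptions:

```python
# Hedged sketch of an instance-level adversarial perturbation for
# unlabeled data: push a sample's embedding away from that of its own
# augmented view, confusing instance identity. All components are toys.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
encoder = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 8))

x = torch.randn(4, 16)
view = x + 0.05 * torch.randn_like(x)      # a weak "augmentation"
delta = torch.zeros_like(x, requires_grad=True)

for _ in range(10):                        # PGD-style ascent steps
    z1 = F.normalize(encoder(x + delta), dim=1)
    z2 = F.normalize(encoder(view), dim=1)
    loss = -(z1 * z2).sum(dim=1).mean()    # ascend = reduce cosine similarity
    loss.backward()
    with torch.no_grad():
        delta += 0.01 * delta.grad.sign()
        delta.clamp_(-0.1, 0.1)            # keep the perturbation bounded
        delta.grad.zero_()

with torch.no_grad():
    sim = (F.normalize(encoder(x + delta), dim=1)
           * F.normalize(encoder(view), dim=1)).sum(dim=1)
print("similarity to own view after attack:", sim)
```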
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
- Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM [31.480769801354413]
This work aims to develop secure distributed algorithms that protect learning from data poisoning and network attacks.
We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines (SVMs) and an attacker who is capable of modifying training data and labels.
The numerical results show that distributed SVMs are prone to failure under different types of attacks, and that the impact depends strongly on the network structure and the attacker's capabilities.
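A generic learner-vs-attacker game of the kind described, in illustrative notation (not necessarily the paper's exact formulation): the learner fits a regularized SVM while the attacker perturbs training points and labels within a budget set.

```latex
% Illustrative learner-vs-attacker game: the learner minimizes the
% regularized hinge loss while the attacker picks a modification \delta
% of the training data and labels from a budget set \Delta.
\min_{w,\,b} \; \max_{\delta \in \Delta} \;
  \frac{1}{2}\|w\|^{2}
  + C \sum_{i=1}^{n}
      \max\!\Big(0,\; 1 - y_i^{(\delta)}\big(w^{\top} x_i^{(\delta)} + b\big)\Big)
```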
arXiv Detail & Related papers (2020-03-08T18:54:17Z)
- Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
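One plausible reading of a size-constrained attack, in illustrative notation only: maximize the shift in the model's valuation while keeping the total perturbation, a proxy for attack cost, within a budget.

```latex
% Illustrative size-constrained attack on a valuation model f: maximize
% the change in output while total order-size perturbation (an assumed
% cost proxy) stays within budget B.
\max_{\delta} \; \big| f(x + \delta) - f(x) \big|
\quad \text{s.t.} \quad \|\delta\|_{1} \le B
```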
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.