Adversarial Robustness for Machine Learning Cyber Defenses Using Log
Data
- URL: http://arxiv.org/abs/2007.14983v1
- Date: Wed, 29 Jul 2020 17:51:29 GMT
- Title: Adversarial Robustness for Machine Learning Cyber Defenses Using Log
Data
- Authors: Kai Steverson, Jonathan Mullin, Metin Ahiskali
- Abstract summary: We develop a testing framework to evaluate adversarial robustness of machine learning cyber defenses.
We validate our framework using a publicly available dataset and demonstrate that our adversarial attack does succeed against the target systems.
We apply our framework to analyze the influence of different levels of dropout regularization and find that higher dropout levels increase robustness.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There has been considerable and growing interest in applying machine learning
for cyber defenses. One promising approach has been to apply natural language
processing techniques to analyze log data for suspicious behavior. A natural
question is how robust these systems are to adversarial attacks. Defense
against sophisticated attacks is of particular concern for cyber defenses. In
this paper, we develop a testing framework to evaluate adversarial robustness
of machine learning cyber defenses, particularly those focused on log data. Our
framework uses techniques from deep reinforcement learning and adversarial
natural language processing. We validate our framework using a publicly
available dataset and demonstrate that our adversarial attack does succeed
against the target systems, revealing a potential vulnerability. We apply our
framework to analyze the influence of different levels of dropout
regularization and find that higher dropout levels increase robustness.
Moreover, a 90% dropout probability exhibited the highest level of robustness by a
significant margin, which suggests unusually high dropout may be necessary to
properly protect against adversarial attacks.
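As a concrete illustration of the abstract's central experiment, the sketch below trains the same small classifier at several dropout probabilities and compares accuracy under perturbed inputs. It is a minimal sketch only: the synthetic features stand in for featurized log events, and a simple gradient-sign perturbation stands in for the paper's reinforcement-learning-based attack; all sizes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for featurized log events: 1000 samples, 64 features, 2 classes.
X = torch.randn(1000, 64)
y = (X[:, :8].sum(dim=1) > 0).long()

def make_model(p_drop):
    return nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(128, 2),
    )

def gradient_sign_attack(model, x, y, eps=0.1):
    # Stand-in attack: move each feature by eps in the loss-increasing direction.
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

for p_drop in [0.0, 0.5, 0.9]:            # dropout levels to compare
    model = make_model(p_drop)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for _ in range(200):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()
    model.eval()                          # dropout is disabled at evaluation time
    x_adv = gradient_sign_attack(model, X, y)
    acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
    print(f"dropout={p_drop:.1f}  adversarial accuracy={acc:.3f}")
```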
Related papers
- Untargeted White-box Adversarial Attack with Heuristic Defence Methods
in Real-time Deep Learning based Network Intrusion Detection System [0.0]
In Adversarial Machine Learning (AML), malicious actors aim to fool Machine Learning (ML) and Deep Learning (DL) models into producing incorrect predictions.
AML is an emerging research domain, and the in-depth study of adversarial attacks has become a necessity.
We implement four powerful adversarial attack techniques, namely the Fast Gradient Sign Method (FGSM), Jacobian Saliency Map Attack (JSMA), Projected Gradient Descent (PGD), and Carlini & Wagner (C&W), in a NIDS.
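Of the four attacks listed, PGD is the most compact to write down; the following is a minimal, generic PyTorch sketch, illustrative only and not the paper's NIDS implementation.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.3, step=0.05, iters=10):
    """Projected Gradient Descent: repeated gradient-sign steps, each
    projected back into the L-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + step * x_adv.grad.sign()   # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project onto the ball
    return x_adv.detach()
```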
arXiv Detail & Related papers (2023-10-05T06:32:56Z)
- Robust Adversarial Defense by Tensor Factorization [1.2954493726326113]
This study integrates the tensorization of input data with low-rank decomposition and tensorization of NN parameters to enhance adversarial defense.
The proposed approach demonstrates significant defense capabilities, maintaining robust accuracy even when subjected to the strongest known auto-attacks.
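The input-side half of this idea can be illustrated with a truncated SVD: reconstruct each input from its top singular components before classification. This is a loose sketch of the general low-rank intuition, not the authors' exact factorization; all names and sizes are illustrative.

```python
import numpy as np

def low_rank_project(x, rank=5):
    """Reconstruct a 2-D input (e.g., a 28x28 image) from its top-`rank`
    singular components; low-energy directions, where small adversarial
    perturbations often hide, are dropped."""
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

rng = np.random.default_rng(0)
x_suspect = rng.random((28, 28))          # stand-in, possibly perturbed, input
x_filtered = low_rank_project(x_suspect)  # classify x_filtered instead of x_suspect
```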
arXiv Detail & Related papers (2023-09-03T04:51:44Z)
- Designing an attack-defense game: how to increase robustness of
financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z)
- It Is All About Data: A Survey on the Effects of Data on Adversarial
Robustness [4.1310970179750015]
Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to confuse the model into making a mistake.
To address this problem, the area of adversarial robustness investigates mechanisms behind adversarial attacks and defenses against these attacks.
arXiv Detail & Related papers (2023-03-17T04:18:03Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A
Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Support Vector Machines under Adversarial Label Contamination [13.299257835329868]
We evaluate the security of Support Vector Machines (SVMs) against well-crafted, adversarial label noise attacks.
In particular, we consider an attacker that aims to maximize the SVM's classification error by flipping a number of labels.
We argue that our approach can also provide useful insights for developing more secure SVM learning algorithms.
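A minimal scikit-learn sketch of label-flipping contamination is below. It uses random flips for simplicity; the paper's attacker chooses which labels to flip adversarially, which degrades accuracy much faster at the same contamination rate.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Train an SVM on progressively more contaminated labels and measure
# clean test accuracy (random flips here; an adversary would pick them).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for flip_rate in [0.0, 0.1, 0.2]:
    y_poison = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_rate * len(y_tr)), replace=False)
    y_poison[idx] = 1 - y_poison[idx]        # flip the selected binary labels
    clf = LinearSVC().fit(X_tr, y_poison)
    print(f"flip rate {flip_rate:.0%}: clean test accuracy {clf.score(X_te, y_te):.3f}")
```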
arXiv Detail & Related papers (2022-06-01T09:38:07Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the
Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer used in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
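In rough code, a learned optimizer of this kind could be a small recurrent network that maps the current input gradient to the next perturbation step. The sketch below is hypothetical, assuming a coordinate-wise GRU; the names, sizes, and tanh bounding are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class LearnedAttackOptimizer(nn.Module):
    """Hypothetical recurrent update rule: each input coordinate's gradient
    is fed through a shared GRU cell, which emits that coordinate's step."""
    def __init__(self, hidden_dim=20):
        super().__init__()
        self.cell = nn.GRUCell(1, hidden_dim)   # shared across coordinates
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, grad, state):
        g = grad.reshape(-1, 1)                 # one coordinate per row
        state = self.cell(g, state)
        step = torch.tanh(self.head(state)).reshape(grad.shape)
        return step, state                      # bounded step direction

# Usage: iterate x_adv = x_adv + eps * step; meta-training would tune the
# GRU so these steps raise the victim loss across many models and defenses.
opt_net = LearnedAttackOptimizer()
grad = torch.randn(8)                           # stand-in input gradient
state = torch.zeros(grad.numel(), 20)
step, state = opt_net(grad, state)
```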
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with bare minimum computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective defense techniques, namely Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
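Of the two defenses named, the denoising autoencoder is the easier to sketch: train it to undo additive noise, then pass suspect inputs through it before classification. A minimal sketch with illustrative sizes and noise level, not the paper's setup:

```python
import torch
import torch.nn as nn

# Denoising autoencoder used as a pre-classification filter.
dae = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),    # encoder
    nn.Linear(128, 784), nn.Sigmoid()  # decoder, outputs in [0, 1]
)
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)
x_clean = torch.rand(256, 784)         # stand-in for flattened images

for _ in range(100):                   # learn to map noisy inputs back to clean ones
    x_noisy = (x_clean + 0.3 * torch.randn_like(x_clean)).clamp(0, 1)
    loss = nn.functional.mse_loss(dae(x_noisy), x_clean)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference time, denoise first: logits = classifier(dae(x_suspect))
```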
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber
Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)