Towards Causal Models for Adversary Distractions
- URL: http://arxiv.org/abs/2104.10575v1
- Date: Wed, 21 Apr 2021 15:02:00 GMT
- Title: Towards Causal Models for Adversary Distractions
- Authors: Ron Alford (1), Andy Applebaum (1) ((1) The MITRE Corporation)
- Abstract summary: We show that decoy generation can slow an automated agent's decision process.
This points to the need to explicitly evaluate decoy generation and placement strategies against fast moving, automated adversaries.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automated adversary emulation is becoming an indispensable tool of network
security operators in testing and evaluating their cyber defenses. At the same
time, it has exposed how quickly adversaries can propagate through the network.
While research has greatly progressed on quality decoy generation to fool human
adversaries, we may need different strategies to slow computer agents. In this
paper, we show that decoy generation can slow an automated agent's decision
process, but that the degree to which it is inhibited is greatly dependent on
the types of objects used. This points to the need to explicitly evaluate decoy
generation and placement strategies against fast moving, automated adversaries.
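As a rough illustration of the abstract's claim (this sketch is not from the paper), the Python snippet below assumes an automated agent that exhaustively evaluates every candidate object before acting; the object types and per-object evaluation costs are hypothetical. It shows how adding decoys inflates the agent's decision steps, and how the inflation depends on which object type is faked.

```python
# Illustrative toy only: a naive automated agent that evaluates every
# candidate object it can see before acting. The object types and per-object
# evaluation costs below are assumptions for this sketch, not values or code
# from the paper.
EVAL_COST = {"credential": 5, "host": 3, "file": 1}

def decision_steps(real_objects, decoys):
    """Total evaluation steps the agent spends before choosing an action."""
    return sum(EVAL_COST[kind] for kind in real_objects + decoys)

real = ["host", "credential", "file", "file"]
for kind in EVAL_COST:
    for n in (0, 10, 50):
        print(f"{n:3d} decoy {kind}s -> {decision_steps(real, [kind] * n):4d} steps")
```

In this toy model, more decoys of a costlier type slow the agent more, mirroring the abstract's point that the effect of decoys depends on the types of objects used.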
Related papers
- A Hybrid Training-time and Run-time Defense Against Adversarial Attacks in Modulation Classification [35.061430235135155]
We propose a defense mechanism based on both training-time and run-time defense techniques for protecting machine learning-based radio signal (modulation) classification against adversarial attacks.
Considering a white-box scenario and real datasets, we demonstrate that our proposed techniques outperform existing state-of-the-art technologies.
arXiv Detail & Related papers (2024-07-09T12:28:38Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks [0.0]
The vulnerability of state-of-the-art Neural Networks to adversarial input samples has increased drastically.
This article proposes a defense algorithm that utilizes the combination of an auto-encoder and block-switching architecture.
arXiv Detail & Related papers (2022-03-11T10:58:24Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Autonomous Attack Mitigation for Industrial Control Systems [25.894883701063055]
Defending computer networks from cyber attack requires timely responses to alerts and threat intelligence.
We present a deep reinforcement learning approach to autonomous response and recovery in large industrial control networks.
arXiv Detail & Related papers (2021-11-03T18:08:06Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with bare minimum computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective techniques, namely Dropout and Denoising Autoencoders, and show their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming in physical noise patterns on the selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)