DOLOS: A Novel Architecture for Moving Target Defense
- URL: http://arxiv.org/abs/2303.00387v2
- Date: Wed, 27 Sep 2023 14:22:38 GMT
- Title: DOLOS: A Novel Architecture for Moving Target Defense
- Authors: Giulio Pagnotta, Fabio De Gaspari, Dorjan Hitaj, Mauro Andreolini,
Michele Colajanni, Luigi V. Mancini
- Abstract summary: Moving Target Defense and Cyber Deception emerged in recent years as two key proactive cyber defense approaches.
This paper presents DOLOS, a novel architecture that unifies Cyber Deception and Moving Target Defense approaches.
We show that DOLOS is highly effective in slowing down attacks and protecting the integrity of production systems.
- Score: 3.2249474972573555
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Moving Target Defense and Cyber Deception emerged in recent years as two key
proactive cyber defense approaches, contrasting with the static nature of the
traditional reactive cyber defense. The key insight behind these approaches is
to impose an asymmetric disadvantage for the attacker by using deception and
randomization techniques to create a dynamic attack surface. Moving Target
Defense typically relies on system randomization and diversification, while
Cyber Deception is based on decoy nodes and fake systems to deceive attackers.
However, current Moving Target Defense techniques are complex to manage and can
introduce high overheads, while Cyber Deception nodes are easily recognized and
avoided by adversaries. This paper presents DOLOS, a novel architecture that
unifies Cyber Deception and Moving Target Defense approaches. DOLOS is
motivated by the insight that deceptive techniques are much more powerful when
integrated into production systems rather than deployed alongside them. DOLOS
combines typical Moving Target Defense techniques, such as randomization,
diversity, and redundancy, with cyber deception and seamlessly integrates them
into production systems through multiple layers of isolation. We extensively
evaluate DOLOS against a wide range of attackers, ranging from automated
malware to professional penetration testers, and show that DOLOS is highly
effective in slowing down attacks and protecting the integrity of production
systems. We also provide valuable insights and considerations for the future
development of MTD techniques based on our findings.
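The abstract describes the architecture only at a high level, but the core idea of folding deception into the moving target itself can be illustrated with a small sketch. The following is purely hypothetical (it is not the DOLOS implementation): a service whose listening port is periodically re-randomized, with each vacated port immediately re-bound as a decoy that raises a high-signal alert on contact.

```python
import random
import socket
import threading
import time

PORT_POOL = list(range(20000, 20100))  # hypothetical port range for this sketch

def open_listener(port: int, decoy: bool = False) -> socket.socket:
    """Bind a TCP listener; decoys only log and alert on contact."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen()

    def accept_loop():
        while True:
            try:
                conn, addr = srv.accept()
            except OSError:          # socket closed during rotation
                return
            if decoy:
                print(f"[alert] decoy port {port} touched by {addr}")
            else:
                conn.sendall(b"hello from the real service\n")
            conn.close()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv

def rotate(period: float = 30.0) -> None:
    """Move the real service between ports; vacated ports become decoys."""
    decoys: dict[int, socket.socket] = {}
    current_srv, current_port = None, None
    while True:
        port = random.choice([p for p in PORT_POOL if p != current_port])
        if port in decoys:
            decoys.pop(port).close()   # reclaim a port held by an old decoy
        new_srv = open_listener(port, decoy=False)
        print(f"[mtd] real service moved to port {port}")
        if current_srv is not None:
            current_srv.close()
            decoys[current_port] = open_listener(current_port, decoy=True)
        current_srv, current_port = new_srv, port
        time.sleep(period)

if __name__ == "__main__":
    rotate(period=5.0)
```

An attacker probing a stale port now hits a decoy inside the production host itself, which is the kind of deception-in-production the paper argues is harder to recognize and avoid than standalone honeypot nodes.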
Related papers
- Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics [70.93622520400385]
This paper systematically quantifies the robustness of VLA-based robotic systems.
We introduce an untargeted position-aware attack objective that leverages spatial foundations to destabilize robotic actions.
We also design an adversarial patch generation approach that places a small, colorful patch within the camera's view, effectively executing the attack in both digital and physical environments.
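As a concrete illustration of the digital side of such an attack, the core step reduces to pasting a small optimizable patch into the camera frame before it reaches the policy. The sketch below is a generic patch-attack loop under assumed shapes, with a stand-in linear "policy"; it is not the authors' model or objective.

```python
import torch

def apply_patch(frame: torch.Tensor, patch: torch.Tensor,
                top: int, left: int) -> torch.Tensor:
    """Paste a small patch into an image tensor.

    frame: (3, H, W) camera image in [0, 1]
    patch: (3, h, w) optimizable patch parameters, with h << H and w << W
    """
    _, h, w = patch.shape
    patched = frame.clone()
    patched[:, top:top + h, left:left + w] = patch.clamp(0.0, 1.0)
    return patched

# Stand-in for a VLA policy mapping a frame to a 7-DoF action (assumption).
policy = torch.nn.Sequential(torch.nn.Flatten(),
                             torch.nn.Linear(3 * 224 * 224, 7))
frame = torch.rand(3, 224, 224)

patch = torch.rand(3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.01)
for _ in range(100):
    action = policy(apply_patch(frame, patch, top=10, left=10).unsqueeze(0))
    loss = -action.norm()   # untargeted: push the action away from nominal
    opt.zero_grad()
    loss.backward()
    opt.step()
```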
arXiv Detail & Related papers (2024-11-18T01:52:20Z)
- A Factored MDP Approach To Moving Target Defense With Dynamic Threat Modeling and Cost Efficiency [20.367958942737523]
Moving Target Defense (MTD) has emerged as a proactive and dynamic framework to counteract evolving cyber threats.
This paper introduces a novel approach to MTD using a Markov Decision Process (MDP) model that does not rely on predefined attacker payoffs.
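To make the MDP framing concrete, consider a minimal value-iteration sketch: states are system configurations, actions are switches to a new configuration, and the reward trades an estimated (e.g., telemetry-derived) compromise risk against switching cost. The numbers and the flat state space are assumptions for illustration; the paper's factored model is considerably richer.

```python
import numpy as np

risk = np.array([0.6, 0.2, 0.4])   # estimated P(compromise) per configuration
switch_cost = 0.1                  # operational cost of migrating
gamma = 0.95                       # discount factor
n = len(risk)

V = np.zeros(n)
for _ in range(500):               # value iteration to (near) convergence
    Q = np.empty((n, n))
    for s in range(n):
        for a in range(n):         # action a: move to configuration a
            reward = -risk[a] - (switch_cost if a != s else 0.0)
            Q[s, a] = reward + gamma * V[a]
    V = Q.max(axis=1)

print("switch policy per configuration:", Q.argmax(axis=1))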
arXiv Detail & Related papers (2024-08-16T09:38:59Z)
- A Novel Approach to Guard from Adversarial Attacks using Stable Diffusion [0.0]
We propose a different approach to the AI Guardian framework.
Instead of including adversarial examples in the training process, we propose training the AI system without them.
This aims to create a system that is inherently resilient to a wider range of attacks.
arXiv Detail & Related papers (2024-05-03T04:08:15Z)
- A Proactive Decoy Selection Scheme for Cyber Deception using MITRE ATT&CK [0.9831489366502301]
Cyber deception compensates for defenders' late response to the ever-evolving tactics, techniques, and procedures (TTPs) of attackers.
In this work, we design a decoy selection scheme supported by adversarial modeling based on empirical observations of real-world attackers.
Results reveal that the proposed scheme achieves the highest interception rate of attack paths while using the fewest decoys.
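At its core, decoy selection of this kind is a coverage problem: intercept as many modeled attack paths as possible with as few decoys as possible. A minimal greedy sketch follows; the node names and paths are invented, and the paper's actual scheme derives its attack paths from MITRE ATT&CK-based adversary models rather than hand-written sets.

```python
def greedy_decoys(attack_paths, candidate_nodes, budget):
    """Greedily place decoys where they intercept the most attack paths.

    attack_paths: list of sets of nodes an attacker would traverse
    candidate_nodes: ordered list of nodes that can host a decoy
    budget: maximum number of decoys to deploy
    """
    remaining = [set(p) for p in attack_paths]
    chosen = []
    for _ in range(budget):
        best = max(candidate_nodes,
                   key=lambda node: sum(node in p for p in remaining))
        if not any(best in p for p in remaining):
            break                       # nothing left to intercept
        chosen.append(best)
        remaining = [p for p in remaining if best not in p]
    return chosen

paths = [{"web", "db"}, {"web", "ad"}, {"vpn", "ad"}]
print(greedy_decoys(paths, ["web", "db", "ad", "vpn"], budget=2))
# -> ['web', 'ad']: two decoys intercept all three modeled paths
```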
arXiv Detail & Related papers (2024-04-19T10:45:05Z)
- Use of Graph Neural Networks in Aiding Defensive Cyber Operations [2.1874189959020427]
Graph Neural Networks (GNNs) have emerged as a promising approach for enhancing the effectiveness of defensive measures.
We examine how GNNs can help disrupt each stage of one of the most renowned attack life cycles, the Lockheed Martin Cyber Kill Chain.
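The primitive underlying most such applications is message passing over a security-relevant graph. As a minimal sketch (a generic graph-convolution layer, not any specific model from the survey), per-host features can be propagated over a host-communication graph to produce embeddings for a downstream detector:

```python
import torch

def gcn_layer(A: torch.Tensor, H: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """One graph-convolution step: each host aggregates neighbor features.

    A: (N, N) adjacency matrix of the host-communication graph
    H: (N, F) per-host features (e.g., flow statistics)
    W: (F, F_out) weight matrix
    """
    A_hat = A + torch.eye(A.size(0))          # add self-loops
    d_inv_sqrt = torch.diag(A_hat.sum(dim=1).pow(-0.5))
    return torch.relu(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

# Hypothetical use: embed 3 hosts from 8 flow features each.
A = torch.tensor([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
H = torch.rand(3, 8)
W1, W2 = torch.rand(8, 16), torch.rand(16, 4)
embeddings = gcn_layer(A, gcn_layer(A, H, W1), W2)  # feed to a detection head
```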
arXiv Detail & Related papers (2024-01-11T05:56:29Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific class of attacks called adversarial attacks.
Even with minimal computation, an attacker can generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine such algorithms.
We present two effective defense techniques, Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
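Of the two defenses, the Denoising Autoencoder is the more instructive to sketch: it is trained to map noisy inputs back to clean ones and is then used as a purification front-end, so small adversarial perturbations are (partly) projected away before classification. The architecture and MNIST-like shapes below are assumptions for illustration, not the paper's exact models.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Minimal convolutional denoising autoencoder for 28x28 grayscale inputs."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),                                              # 7 -> 14
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),                                           # 14 -> 28
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

dae = DenoisingAE()
x = torch.rand(8, 1, 28, 28)                          # stand-in clean batch
noisy = (x + 0.3 * torch.randn_like(x)).clamp(0, 1)   # corrupt the inputs
loss = nn.functional.mse_loss(dae(noisy), x)          # reconstruct clean targets
# At test time, classify dae(input) instead of the raw input.
```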
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples which are synthesized by adding quasi-perceptible noises on real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by synthesizing another image from scratch for each input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
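The label-free ingredient can be sketched as follows: instead of attacking a classifier's logits, a perturbation is crafted in the input space to maximize feature distortion relative to the clean input, so no labels are needed. This is a generic sketch of that idea under an assumed feature extractor, not the paper's exact training pipeline.

```python
import torch

def self_supervised_perturb(model, x, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft a label-free adversarial example by maximizing feature distortion."""
    with torch.no_grad():
        clean_feat = model(x)                    # reference features, no labels
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        feat = model((x + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(feat, clean_feat)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the distortion
            delta.clamp_(-eps, eps)              # stay inside the L-inf budget
            delta.grad.zero_()
    return (x + delta).detach().clamp(0, 1)

# Hypothetical demo with a stand-in feature extractor:
extractor = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(),
                                torch.nn.Flatten())
x_adv = self_supervised_perturb(extractor, torch.rand(1, 3, 32, 32))
```

Training the model to be invariant to such perturbations is what yields the robustness to unseen attacks described above.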
arXiv Detail & Related papers (2020-06-08T20:42:39Z)