Investigation of Multi-stage Attack and Defense Simulation for Data Synthesis
- URL: http://arxiv.org/abs/2312.13697v1
- Date: Thu, 21 Dec 2023 09:54:18 GMT
- Title: Investigation of Multi-stage Attack and Defense Simulation for Data Synthesis
- Authors: Ömer Sen, Bozhidar Ivanov, Martin Henze, Andreas Ulbig
- Abstract summary: This study proposes a model for generating synthetic data of multi-stage cyber attacks in the power grid.
It uses attack trees to model the attacker's sequence of steps and a game-theoretic approach to incorporate the defender's actions.
- Score: 2.479074862022315
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The power grid is a critical infrastructure that plays a vital role in modern society. Its availability is of utmost importance, as a loss can endanger human lives. However, with the increasing digitalization of the power grid, it also becomes vulnerable to new cyberattacks that can compromise its availability. To counter these threats, intrusion detection systems are developed and deployed to detect cyberattacks targeting the power grid. Among intrusion detection systems, anomaly detection models based on machine learning have shown potential in detecting unknown attack vectors. However, the scarcity of data for training these models remains a challenge due to confidentiality concerns. To overcome this challenge, this study proposes a model for generating synthetic data of multi-stage cyber attacks in the power grid, using attack trees to model the attacker's sequence of steps and a game-theoretic approach to incorporate the defender's actions. This model aims to create diverse attack data on which machine learning algorithms can be trained.
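To make the abstract's two ingredients concrete, below is a minimal sketch (not the paper's implementation) of how an attack tree and a simple payoff-based defender response could be combined to emit labeled multi-stage attack sequences. The tree structure, step names, and payoff values are illustrative assumptions.

```python
import itertools
import random

# Illustrative attack tree: OR nodes pick one child, AND nodes require all children
# in order. Leaves are concrete attacker steps. The structure and step names are
# assumptions for demonstration, not the tree used in the paper.
ATTACK_TREE = ("OR", [
    ("AND", [("LEAF", "spearphish_operator"), ("LEAF", "harvest_credentials"),
             ("LEAF", "open_breaker_via_scada")]),
    ("AND", [("LEAF", "exploit_vpn_gateway"), ("LEAF", "lateral_move_to_rtu"),
             ("LEAF", "send_malicious_iec104_command")]),
])

def enumerate_paths(node):
    """Expand the attack tree into all possible ordered step sequences."""
    kind, payload = node
    if kind == "LEAF":
        return [[payload]]
    child_paths = [enumerate_paths(c) for c in payload]
    if kind == "OR":
        return [p for paths in child_paths for p in paths]
    # AND: concatenate one path choice per child, preserving order
    return [list(itertools.chain(*combo)) for combo in itertools.product(*child_paths)]

# Game-theoretic flavour: for each attacker step the defender picks the response
# with the best defender payoff from an assumed payoff table.
DEFENDER_PAYOFF = {
    "spearphish_operator": {"ignore": -3, "quarantine_mail": 2},
    "harvest_credentials": {"ignore": -4, "force_password_reset": 3},
    "open_breaker_via_scada": {"ignore": -10, "block_scada_session": 5},
    "exploit_vpn_gateway": {"ignore": -5, "patch_and_block_ip": 4},
    "lateral_move_to_rtu": {"ignore": -6, "isolate_subnet": 4},
    "send_malicious_iec104_command": {"ignore": -10, "drop_iec104_packet": 6},
}

def synthesize_episode(rng):
    """Produce one labeled multi-stage episode: attacker steps plus defender responses."""
    path = rng.choice(enumerate_paths(ATTACK_TREE))
    episode = []
    for step in path:
        responses = DEFENDER_PAYOFF[step]
        best_response = max(responses, key=responses.get)
        episode.append({"attacker_step": step, "defender_response": best_response})
    return episode

if __name__ == "__main__":
    rng = random.Random(42)
    for event in synthesize_episode(rng):
        print(event)
```

Sampling many such episodes, with varied trees and payoffs, is one simple way to obtain the diverse labeled attack data the abstract describes.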
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z) - Threat analysis and adversarial model for Smart Grids [1.7482569079741024]
The cyber domain of this smart power grid opens a new plethora of threats.
Different stakeholders, including regulatory bodies, industry, and academia, are making efforts to provide security mechanisms to mitigate and reduce cyber-risks.
Recent work shows a lack of agreement among grid practitioners and academic experts on the feasibility and consequences of academic-proposed threats.
This is in part due to inadequate simulation models that do not evaluate threats based on attackers' full capabilities and goals.
arXiv Detail & Related papers (2024-06-17T16:33:46Z) - CARACAS: vehiCular ArchitectuRe for detAiled Can Attacks Simulation [37.89720165358964]
This paper showcases CARACAS, a vehicular model that includes component control via CAN messages and attack injection capabilities.
CARACAS demonstrates the efficacy of this methodology on a Battery Electric Vehicle (BEV) model and focuses on attacks targeting torque control in two distinct scenarios.
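As a rough illustration of the kind of attack injection described here, the sketch below spoofs a torque-related CAN frame on a virtual bus using the python-can package; the arbitration ID, payload encoding, and cycle time are made-up values, not CARACAS internals.

```python
import time
import can  # pip install python-can

# Hypothetical CAN ID and scaling for a torque request frame -- illustrative only,
# not taken from CARACAS or any real vehicle database.
TORQUE_REQUEST_ID = 0x1A0
TORQUE_SCALE_NM_PER_BIT = 0.5

def encode_torque_request(torque_nm: float) -> bytes:
    """Pack a torque request into a 2-byte big-endian field (assumed layout)."""
    raw = int(torque_nm / TORQUE_SCALE_NM_PER_BIT)
    return raw.to_bytes(2, "big") + bytes(6)  # pad to an 8-byte frame

def inject_spoofed_torque(bus, torque_nm: float, repeats: int, period_s: float):
    """Periodically send a spoofed torque request, mimicking a compromised ECU."""
    for _ in range(repeats):
        msg = can.Message(arbitration_id=TORQUE_REQUEST_ID,
                          data=encode_torque_request(torque_nm),
                          is_extended_id=False)
        bus.send(msg)
        time.sleep(period_s)

if __name__ == "__main__":
    # A virtual bus lets the sketch run without CAN hardware.
    with can.interface.Bus(channel="attack_demo", interface="virtual") as bus:
        inject_spoofed_torque(bus, torque_nm=120.0, repeats=10, period_s=0.02)
```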
arXiv Detail & Related papers (2024-06-11T10:16:55Z) - An Approach to Abstract Multi-stage Cyberattack Data Generation for ML-Based IDS in Smart Grids [2.5655761752240505]
We propose a method to generate synthetic data using a graph-based approach for training machine learning models in smart grids.
We use an abstract form of multi-stage cyberattacks defined via graph formulations and simulate the propagation behavior of attacks in the network.
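A minimal sketch of this general idea, graph-defined attack stages propagating through a network, is given below using networkx; the topology, stage names, and propagation rule are assumptions for illustration, not the formulation used in that paper.

```python
import random
import networkx as nx  # pip install networkx

# Assumed substation communication topology.
def build_topology() -> nx.Graph:
    g = nx.Graph()
    g.add_edges_from([
        ("corp_it", "dmz"), ("dmz", "scada_server"),
        ("scada_server", "rtu_1"), ("scada_server", "rtu_2"), ("rtu_2", "ied_feeder"),
    ])
    return g

# Abstract multi-stage attack: one node is compromised per stage.
STAGES = ["initial_access", "lateral_movement", "privilege_escalation", "impact"]

def simulate_attack(g: nx.Graph, entry: str, rng: random.Random):
    """Walk the graph stage by stage, emitting one labeled event per compromised node."""
    compromised, current, events = {entry}, entry, []
    for stage in STAGES:
        events.append({"stage": stage, "node": current, "label": "attack"})
        candidates = [n for n in g.neighbors(current) if n not in compromised]
        if not candidates:
            break
        current = rng.choice(candidates)
        compromised.add(current)
    return events

if __name__ == "__main__":
    rng = random.Random(7)
    for event in simulate_attack(build_topology(), entry="corp_it", rng=rng):
        print(event)
```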
arXiv Detail & Related papers (2023-12-21T11:07:51Z) - Designing an attack-defense game: how to increase robustness of financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z) - Graph Mining for Cybersecurity: A Survey [61.505995908021525]
The explosive growth of cyber attacks such as malware, spam, and intrusions has caused severe consequences for society.
Traditional Machine Learning (ML) based methods are extensively used in detecting cyber threats, but they hardly model the correlations between real-world cyber entities.
With the proliferation of graph mining techniques, many researchers investigated these techniques for capturing correlations between cyber entities and achieving high performance.
arXiv Detail & Related papers (2023-04-02T08:43:03Z) - A Human-in-the-Middle Attack against Object Detection Systems [4.764637544913963]
We propose a novel hardware attack inspired by Man-in-the-Middle attacks in cryptography.
This attack generates a Universal Adversarial Perturbation (UAP) and injects the perturbation between the USB camera and the detection system.
These findings raise serious concerns for applications of deep learning models in safety-critical systems, such as autonomous driving.
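To illustrate the injection step in spirit, the following sketch adds a precomputed universal perturbation to every frame taken from a stand-in camera source before it reaches a detector. The perturbation here is random noise rather than an optimized UAP, and the camera and detector interfaces are placeholders.

```python
import numpy as np

# Stand-in for a precomputed universal adversarial perturbation (UAP). A real UAP
# would be optimized against the target detector; random noise is used here only
# so the sketch runs.
H, W, EPSILON = 480, 640, 8  # epsilon in 8-bit pixel units (assumed budget)
uap = np.random.randint(-EPSILON, EPSILON + 1, size=(H, W, 3), dtype=np.int16)

def intercept_frame(frame: np.ndarray) -> np.ndarray:
    """Man-in-the-middle step: perturb the frame in transit, keeping a valid pixel range."""
    perturbed = frame.astype(np.int16) + uap
    return np.clip(perturbed, 0, 255).astype(np.uint8)

def camera_source(num_frames: int):
    """Placeholder for the USB camera; yields synthetic frames."""
    for _ in range(num_frames):
        yield np.random.randint(0, 256, size=(H, W, 3), dtype=np.uint8)

def detector(frame: np.ndarray) -> str:
    """Placeholder for the object detection system."""
    return f"frame mean={frame.mean():.1f}"

if __name__ == "__main__":
    for clean in camera_source(3):
        print(detector(intercept_frame(clean)))  # detector only ever sees perturbed frames
```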
arXiv Detail & Related papers (2022-08-15T13:21:41Z) - Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique and show that the robustness of DL-based wireless systems against such attacks improves significantly.
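Below is a generic sketch of the two ideas this entry mentions, an FGSM-style attack on a regression model and an adversarial training step, written in PyTorch with a toy network standing in for the power-allocation model; nothing here reproduces that paper's architecture or channel data.

```python
import torch
import torch.nn as nn

# Toy regressor standing in for the DL-based power-allocation model (assumed shapes).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm_attack(x: torch.Tensor, y: torch.Tensor, epsilon: float) -> torch.Tensor:
    """Craft an adversarial input by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(x: torch.Tensor, y: torch.Tensor, epsilon: float) -> float:
    """One training step on a mix of clean and adversarial examples."""
    x_adv = fgsm_attack(x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    x, y = torch.randn(32, 16), torch.randn(32, 4)  # placeholder channel features / targets
    for step in range(5):
        print(f"step {step}: loss={adversarial_training_step(x, y, epsilon=0.05):.4f}")
```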
arXiv Detail & Related papers (2022-06-14T04:55:11Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent can be used to generate realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)