Autonomous Attack Mitigation for Industrial Control Systems
- URL: http://arxiv.org/abs/2111.02445v1
- Date: Wed, 3 Nov 2021 18:08:06 GMT
- Title: Autonomous Attack Mitigation for Industrial Control Systems
- Authors: John Mern, Kyle Hatch, Ryan Silva, Cameron Hickert, Tamim Sookoor,
Mykel J. Kochenderfer
- Abstract summary: Defending computer networks from cyber attack requires timely responses to alerts and threat intelligence.
We present a deep reinforcement learning approach to autonomous response and recovery in large industrial control networks.
- Score: 25.894883701063055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Defending computer networks from cyber attack requires timely responses to
alerts and threat intelligence. Decisions about how to respond involve
coordinating actions across multiple nodes based on imperfect indicators of
compromise while minimizing disruptions to network operations. Currently,
playbooks are used to automate portions of a response process, but often leave
complex decision-making to a human analyst. In this work, we present a deep
reinforcement learning approach to autonomous response and recovery in large
industrial control networks. We propose an attention-based neural architecture
that is flexible to the size of the network under protection. To train and
evaluate the autonomous defender agent, we present an industrial control
network simulation environment suitable for reinforcement learning. Experiments
show that the learned agent can effectively mitigate advanced attacks that
progress with few observable signals over several months before execution. The
proposed deep reinforcement learning approach outperforms a fully automated
playbook method in simulation, taking less disruptive actions while also
defending more nodes on the network. The learned policy is also more robust to
changes in attacker behavior than playbook approaches.
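The abstract's key architectural claim is that an attention-based policy can be applied to networks of varying size. The paper's code is not reproduced here; the following is a minimal sketch of the idea, assuming per-node observation vectors and a small per-node action set (the class name, feature dimension, and layer widths are illustrative, not from the paper).

```python
# Minimal sketch (not the authors' code): a self-attention policy that accepts
# a variable number of network nodes. Per-node feature size, layer widths, and
# the per-node action head are illustrative assumptions.
import torch
import torch.nn as nn


class NodeAttentionPolicy(nn.Module):
    def __init__(self, node_feat_dim: int = 16, embed_dim: int = 64,
                 num_heads: int = 4, num_node_actions: int = 3):
        super().__init__()
        self.encode = nn.Linear(node_feat_dim, embed_dim)        # per-node encoder
        self.attn = nn.MultiheadAttention(embed_dim, num_heads,
                                          batch_first=True)      # shares context across nodes
        self.action_head = nn.Linear(embed_dim, num_node_actions)  # per-node action logits

    def forward(self, node_obs: torch.Tensor) -> torch.Tensor:
        # node_obs: (batch, num_nodes, node_feat_dim); num_nodes may vary between calls
        h = torch.relu(self.encode(node_obs))
        ctx, _ = self.attn(h, h, h)              # each node attends to every other node
        return self.action_head(ctx)             # (batch, num_nodes, num_node_actions)


if __name__ == "__main__":
    policy = NodeAttentionPolicy()
    for num_nodes in (10, 50):                   # same weights, different network sizes
        obs = torch.randn(1, num_nodes, 16)
        print(policy(obs).shape)                 # (1, num_nodes, 3)
```

Because every parameter is tied to the per-node feature dimension rather than to the node count, the same weights produce action logits for 10 or 50 nodes, which is the size-flexibility property the abstract describes.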
Related papers
- Multi-Objective Reinforcement Learning for Automated Resilient Cyber Defence [0.0]
Cyber-attacks pose a security threat to military command and control networks, Intelligence, Surveillance, and Reconnaissance (ISR) systems, and civilian critical national infrastructure.
The use of artificial intelligence and autonomous agents in these attacks increases the scale, range, and complexity of this threat and the subsequent disruption they cause.
Autonomous Cyber Defence (ACD) agents aim to mitigate this threat by responding at machine speed and at the scale required to address the problem.
arXiv Detail & Related papers (2024-11-26T16:51:52Z)
- Structural Generalization in Autonomous Cyber Incident Response with Message-Passing Neural Networks and Reinforcement Learning [0.0]
Retraining agents for small network changes costs time and energy.
We create variants of the original network with different numbers of hosts and test agents on them without additional training.
Agents using the default vector state representation perform better but must be specially trained on each network variant; a minimal message-passing sketch illustrating the graph-based alternative appears after this list.
arXiv Detail & Related papers (2024-07-08T09:34:22Z)
- Inroads into Autonomous Network Defence using Explained Reinforcement Learning [0.5949779668853555]
This paper introduces an end-to-end methodology for studying attack strategies, designing defence agents and explaining their operation.
We use state diagrams and deep reinforcement learning agents trained on different parts of the task and organised in a shallow hierarchy.
Our evaluation shows that the resulting design achieves a substantial performance improvement compared to prior work.
arXiv Detail & Related papers (2023-06-15T17:53:14Z)
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense [111.9039128130633]
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Reinforcement Learning for Industrial Control Network Cyber Security Orchestration [27.781221210925498]
We present techniques to scale deep reinforcement learning to solve the cyber security orchestration problem for large industrial control networks.
We propose a novel attention-based neural architecture whose complexity is invariant to the size of the network under protection.
arXiv Detail & Related papers (2021-06-09T18:44:17Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by injecting physical noise patterns into selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires less computational resources and can be trained in parallel.
The system's overall output, obtained by aggregating the outputs of the individual networks, can be tuned by the user to trade off false positives against false negatives (a minimal aggregation sketch appears after this list).
arXiv Detail & Related papers (2020-02-11T13:58:16Z)
- On Simple Reactive Neural Networks for Behaviour-Based Reinforcement Learning [5.482532589225552]
We present a behaviour-based reinforcement learning approach, inspired by Brooks' subsumption architecture.
Our working assumption is that a pick and place robotic task can be simplified by leveraging domain knowledge of a robotics developer.
Our approach learns the pick and place task in 8,000 episodes, which represents a drastic reduction in the number of training episodes required by an end-to-end approach.
arXiv Detail & Related papers (2020-01-22T11:49:52Z)
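Two of the related papers above (the message-passing incident-response paper and the GNN perimeter-defense paper) rely on graph-structured policies. The following is a minimal sketch of one mean-aggregation message-passing round over a host graph, not taken from either paper; the feature sizes and the dense 0/1 adjacency matrix are illustrative assumptions. It shows why such models can be evaluated on network variants with different numbers of hosts without retraining.

```python
# Minimal sketch (not from either paper): one round of mean-aggregation message
# passing over a host graph. Feature sizes and the adjacency format are
# illustrative assumptions.
import torch
import torch.nn as nn


class MessagePassingLayer(nn.Module):
    def __init__(self, in_dim: int = 8, out_dim: int = 8):
        super().__init__()
        self.msg = nn.Linear(in_dim, out_dim)                # message from each neighbour
        self.update = nn.Linear(in_dim + out_dim, out_dim)   # combine self + aggregated messages

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_hosts, in_dim) node features; adj: (num_hosts, num_hosts) 0/1 adjacency
        messages = self.msg(x)                               # (num_hosts, out_dim)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        aggregated = adj @ messages / deg                    # mean over neighbours
        return torch.relu(self.update(torch.cat([x, aggregated], dim=-1)))


if __name__ == "__main__":
    layer = MessagePassingLayer()
    for num_hosts in (5, 20):                                # same weights, different graph sizes
        x = torch.randn(num_hosts, 8)
        adj = (torch.rand(num_hosts, num_hosts) > 0.7).float()
        print(layer(x, adj).shape)                           # (num_hosts, 8)
```

Because the layer's parameters depend only on the per-host feature dimensions, the same trained weights apply to graphs of any size, which is what makes testing on unseen network variants possible.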
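For the firearm-detection entry, the tunable false-positive/false-negative trade-off comes from aggregating the individual networks' outputs against a user-chosen threshold. This is a minimal sketch, assuming mean aggregation of per-network scores; the scores and threshold values are illustrative, not from the paper.

```python
# Minimal sketch (not the paper's system): aggregating per-network detection
# scores with a user-chosen threshold. The averaging scheme, scores, and
# threshold values are illustrative assumptions.
from statistics import mean


def ensemble_decision(scores: list[float], threshold: float) -> bool:
    """Return True (weapon detected) if the mean score clears the threshold."""
    return mean(scores) >= threshold


scores = [0.55, 0.40, 0.70]            # outputs of three task-specific networks
print(ensemble_decision(scores, 0.6))  # stricter threshold: fewer false positives -> False
print(ensemble_decision(scores, 0.5))  # looser threshold: fewer false negatives -> True
```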
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.