Automated Adversary Emulation for Cyber-Physical Systems via
Reinforcement Learning
- URL: http://arxiv.org/abs/2011.04635v1
- Date: Mon, 9 Nov 2020 18:44:29 GMT
- Title: Automated Adversary Emulation for Cyber-Physical Systems via
Reinforcement Learning
- Authors: Arnab Bhattacharya, Thiagarajan Ramachandran, Sandeep Banik, Chase P.
Dowling, Shaunak D. Bopardikar
- Abstract summary: We develop an automated, domain-aware approach to adversary emulation for cyber-physical systems.
We formulate a Markov Decision Process (MDP) model to determine an optimal attack sequence over a hybrid attack graph.
We apply model-based and model-free reinforcement learning (RL) methods to solve the discrete-continuous MDP in a tractable fashion.
- Score: 4.763175424744536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversary emulation is an offensive exercise that provides a comprehensive
assessment of a system's resilience against cyber attacks. However, adversary
emulation is typically a manual process, making it costly and hard to deploy in
cyber-physical systems (CPS) with complex dynamics, vulnerabilities, and
operational uncertainties. In this paper, we develop an automated, domain-aware
approach to adversary emulation for CPS. We formulate a Markov Decision Process
(MDP) model to determine an optimal attack sequence over a hybrid attack graph
with cyber (discrete) and physical (continuous) components and related physical
dynamics. We apply model-based and model-free reinforcement learning (RL)
methods to solve the discrete-continuous MDP in a tractable fashion. As a
baseline, we also develop a greedy attack algorithm and compare it with the RL
procedures. We summarize our findings through a numerical study on sensor
deception attacks in buildings to compare the performance and solution quality
of the proposed algorithms.
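The abstract describes casting attack sequencing over an attack graph as an MDP, solved both by RL and by a greedy baseline that the paper uses for comparison. The sketch below illustrates that idea on a toy, purely discrete attack graph; the graph, state names, actions, and rewards are invented for illustration and are not from the paper, which additionally models continuous physical dynamics.

```python
import random

# Toy attack graph: keys are system states, values map exploit actions
# to (next_state, reward). All names and numbers here are invented.
GRAPH = {
    "entry":       {"phish": ("workstation", 1.0), "scan": ("dmz", 0.5)},
    "dmz":         {"pivot": ("workstation", 0.5)},
    "workstation": {"escalate": ("server", 2.0)},
    "server":      {"spoof_sensor": ("goal", 5.0)},
    "goal":        {},  # terminal: attacker objective reached
}

def greedy_attack(state="entry"):
    """Baseline: at each state take the highest immediate-reward action."""
    path, total = [state], 0.0
    while GRAPH[state]:
        action, (nxt, r) = max(GRAPH[state].items(), key=lambda kv: kv[1][1])
        total += r
        state = nxt
        path.append(state)
    return path, total

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Model-free RL: tabular Q-learning over the same attack graph."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in GRAPH for a in GRAPH[s]}
    for _ in range(episodes):
        s = "entry"
        while GRAPH[s]:
            acts = list(GRAPH[s])
            # epsilon-greedy exploration over available exploits
            a = rng.choice(acts) if rng.random() < eps else max(
                acts, key=lambda a: Q[(s, a)])
            nxt, r = GRAPH[s][a]
            best_next = max((Q[(nxt, a2)] for a2 in GRAPH[nxt]), default=0.0)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = nxt
    return Q
```

On this toy graph the greedy baseline and the learned Q-values agree on the best first exploit; the paper's point is that on hybrid (discrete-continuous) graphs with physical dynamics, RL can find sequences the myopic greedy search misses.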
Related papers
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study [71.84852429039881]
Investigation focuses on the models' ability to handle a range of perturbations, such as sensor faults and noise.
We test the generalization and transfer learning capabilities of these models by exposing them to out-of-distribution (OOD) samples.
arXiv Detail & Related papers (2023-06-13T12:43:59Z)
- Deep Reinforcement Learning for Cyber System Defense under Dynamic Adversarial Uncertainties [5.78419291062552]
We propose a data-driven deep reinforcement learning framework to learn proactive, context-aware defense countermeasures.
A dynamic defense optimization problem is formulated with multiple protective postures against different types of adversaries.
arXiv Detail & Related papers (2023-02-03T08:33:33Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Reinforcement learning for automatic quadrilateral mesh generation: a soft actor-critic approach [26.574242660728864]
This paper proposes, implements, and evaluates a Reinforcement Learning based computational framework for automatic mesh generation.
Mesh generation plays a fundamental role in numerical simulations in the areas of finite element analysis (FEA) and computational fluid dynamics (CFD).
arXiv Detail & Related papers (2022-03-19T21:49:05Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns adversarial attacks parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization of the learned attacks when applied to unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Learning-Based Vulnerability Analysis of Cyber-Physical Systems [10.066594071800337]
This work focuses on the use of deep learning for vulnerability analysis of cyber-physical systems.
We consider a control architecture widely used in CPS (e.g., robotics), where the low-level control is based on, for example, an extended Kalman filter (EKF) and an anomaly detector.
To facilitate analysis of the impact that potential sensing attacks could have, our objective is to develop learning-enabled attack generators.
arXiv Detail & Related papers (2021-03-10T06:52:26Z)
- A Secure Learning Control Strategy via Dynamic Camouflaging for Unknown Dynamical Systems under Attacks [0.0]
This paper presents a secure reinforcement learning (RL) based control method for unknown linear time-invariant cyber-physical systems (CPSs).
We consider the attack scenario where the attacker learns about the dynamic model during the exploration phase of the learning conducted by the designer.
We propose a dynamic camouflaging based attack-resilient reinforcement learning (ARRL) algorithm which can learn the desired optimal controller for the dynamic system.
arXiv Detail & Related papers (2021-02-01T00:34:38Z)
- Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameter transmission.
In this paper, we propose effective covert model poisoning (CMP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z) - Adversarial Training for a Continuous Robustness Control Problem in
Power Systems [1.0742675209112622]
We propose a new adversarial training approach for injecting robustness when designing controllers for upcoming cyber-physical power systems.
We model an adversarial framework, propose the implementation of a fixed opponent policy, and test it in an L2RPN (Learning to Run a Power Network) environment.
Using adversarial testing, we analyze the results of submitted trained agents from the robustness track of the L2RPN competition.
arXiv Detail & Related papers (2020-12-21T14:42:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.