Deep Reinforcement Learning for Cyber System Defense under Dynamic Adversarial Uncertainties
- URL: http://arxiv.org/abs/2302.01595v1
- Date: Fri, 3 Feb 2023 08:33:33 GMT
- Title: Deep Reinforcement Learning for Cyber System Defense under Dynamic Adversarial Uncertainties
- Authors: Ashutosh Dutta, Samrat Chatterjee, Arnab Bhattacharya, Mahantesh Halappanavar
- Abstract summary: We propose a data-driven deep reinforcement learning framework to learn proactive, context-aware defense countermeasures.
A dynamic defense optimization problem is formulated with multiple protective postures against different types of adversaries.
- Score: 5.78419291062552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Development of autonomous cyber system defense strategies and action
recommendations in the real world is challenging, and includes characterizing
system state uncertainties and attack-defense dynamics. We propose a
data-driven deep reinforcement learning (DRL) framework to learn proactive,
context-aware defense countermeasures that dynamically adapt to evolving
adversarial behaviors while minimizing loss of cyber system operations. A
dynamic defense optimization problem is formulated with multiple protective
postures against different types of adversaries with varying levels of skill
and persistence. A custom simulation environment was developed and experiments
were devised to systematically evaluate the performance of four model-free DRL
algorithms against realistic, multi-stage attack sequences. Our results suggest
the efficacy of DRL algorithms for proactive cyber defense under multi-stage
attack profiles and system uncertainties.
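A rough sketch of the kind of formulation the abstract describes. Everything here is invented for illustration (the stage count, posture labels, blocking probabilities, and costs are not from the paper, and a tabular Q-learner stands in for the deep RL algorithms the authors evaluate): a defender picks one of several protective postures each step while an attacker advances through multi-stage compromise states.

```python
import random

# Toy illustration (assumed numbers, not the paper's model): a tabular
# Q-learning defender choosing protective postures against a stochastic
# multi-stage attacker.

N_STAGES = 4          # attack progression: 0 (safe) .. 3 (compromised)
POSTURES = [0, 1, 2]  # 0 = monitor, 1 = harden, 2 = isolate (assumed labels)
BLOCK_PROB = {0: 0.2, 1: 0.6, 2: 0.9}  # chance a posture pushes the attack back
OP_COST = {0: 0.0, 1: 0.5, 2: 2.0}     # operational cost of each posture

def step(stage, posture, rng):
    """Advance the attack one step; return (next_stage, defender_reward)."""
    if rng.random() < BLOCK_PROB[posture]:
        nxt = max(0, stage - 1)              # attack pushed back
    else:
        nxt = min(N_STAGES - 1, stage + 1)   # attack advances
    # pay the posture's operational cost, plus a large penalty if compromised
    reward = -OP_COST[posture] - (10.0 if nxt == N_STAGES - 1 else 0.0)
    return nxt, reward

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * len(POSTURES) for _ in range(N_STAGES)]
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            a = rng.randrange(len(POSTURES)) if rng.random() < eps \
                else max(POSTURES, key=lambda p: Q[s][p])
            s2, r = step(s, a, rng)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
# In late attack stages the learned values should favor stronger postures.
policy = [max(POSTURES, key=lambda p: Q[s][p]) for s in range(N_STAGES)]
```

The trade-off the paper's "loss of cyber system operations" objective points at shows up directly here: stronger postures block more reliably but cost more, so the learned policy should escalate only as the attack progresses.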
Related papers
- Optimizing Cyber Defense in Dynamic Active Directories through Reinforcement Learning [10.601458163651582]
This paper addresses the absence of effective edge-blocking ACO strategies in dynamic, real-world networks.
It specifically targets the cybersecurity vulnerabilities of organizational Active Directory (AD) systems.
Unlike the existing literature on edge-blocking defenses which considers AD systems as static entities, our study counters this by recognizing their dynamic nature.
arXiv Detail & Related papers (2024-06-28T01:37:46Z)
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- Continual Adversarial Defense [37.37029638528458]
A defense system continuously collects adversarial data online to quickly improve itself.
The goal is continual adaptation to new attacks without catastrophic forgetting, together with few-shot adaptation, memory-efficient adaptation, and high accuracy on both clean and adversarial data.
In particular, CAD is capable of quickly adapting with minimal budget and a low cost of defense failure while maintaining good performance against previous attacks.
arXiv Detail & Related papers (2023-12-15T01:38:26Z)
- Towards Adversarial Realism and Robust Learning for IoT Intrusion Detection and Classification [0.0]
The Internet of Things (IoT) faces tremendous security challenges.
The increasing threat posed by adversarial attacks restates the need for reliable defense strategies.
This work describes the types of constraints required for an adversarial cyber-attack example to be realistic.
arXiv Detail & Related papers (2023-01-30T18:00:28Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
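The "adversary Markov Decision Process" idea can be sketched in toy form. All of the details below are invented for illustration (the paper's domain is power system control, not this chain world): a fixed defender policy walks toward a goal state, and an attacker with a limited perturbation budget learns, via Q-learning, at which steps flipping the defender's action hurts most.

```python
import random

STATES, HORIZON, BUDGET = 6, 10, 3
GOAL = STATES - 1

def defender_action(s):
    return 1  # fixed defender policy: always move toward the goal

def rollout(attack_Q=None):
    """Deterministic rollout of the defender, optionally under a learned attack."""
    s, budget, ret = 0, BUDGET, 0.0
    for _ in range(HORIZON):
        a = defender_action(s)
        if attack_Q is not None and budget > 0:
            # attacker state: (defender state, remaining budget)
            flip = max((0, 1), key=lambda f: attack_Q[(s, budget)][f])
            if flip:
                a, budget = 1 - a, budget - 1
        s = min(GOAL, s + 1) if a == 1 else max(0, s - 1)
        ret += 1.0 if s == GOAL else -0.1
    return ret

def train_attacker(episodes=800, alpha=0.3, gamma=0.95, eps=0.2, seed=2):
    """Q-learning in the adversary MDP: the attacker's reward is the
    negative of the defender's reward."""
    rng = random.Random(seed)
    Q = {(s, b): [0.0, 0.0] for s in range(STATES) for b in range(BUDGET + 1)}
    for _ in range(episodes):
        s, budget = 0, BUDGET
        for _ in range(HORIZON):
            key = (s, budget)
            flip = rng.randrange(2) if rng.random() < eps \
                else max((0, 1), key=lambda f: Q[key][f])
            a = defender_action(s)
            if flip and budget > 0:
                a, budget = 1 - a, budget - 1
            s = min(GOAL, s + 1) if a == 1 else max(0, s - 1)
            r = -(1.0 if s == GOAL else -0.1)
            Q[key][flip] += alpha * (r + gamma * max(Q[(s, budget)]) - Q[key][flip])
    return Q
```

Adversarial training in the paper's sense would then retrain the defender against rollouts like `rollout(train_attacker())` rather than the clean `rollout()`; that retraining loop is omitted here.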
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- A Secure Learning Control Strategy via Dynamic Camouflaging for Unknown Dynamical Systems under Attacks [0.0]
This paper presents a secure reinforcement learning (RL) based control method for unknown linear time-invariant cyber-physical systems (CPSs).
We consider the attack scenario where the attacker learns about the dynamic model during the exploration phase of the learning conducted by the designer.
We propose a dynamic camouflaging based attack-resilient reinforcement learning (ARRL) algorithm which can learn the desired optimal controller for the dynamic system.
arXiv Detail & Related papers (2021-02-01T00:34:38Z)
- Automated Adversary Emulation for Cyber-Physical Systems via Reinforcement Learning [4.763175424744536]
We develop an automated, domain-aware approach to adversary emulation for cyber-physical systems.
We formulate a Markov Decision Process (MDP) model to determine an optimal attack sequence over a hybrid attack graph.
We apply model-based and model-free reinforcement learning (RL) methods to solve the discrete-continuous MDP in a tractable fashion.
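The model-based side of this approach can be illustrated with a tiny discrete example (the graph, success probabilities, and rewards below are invented, and the continuous component of the hybrid attack graph is omitted): an MDP over attacker footholds, solved by value iteration, from which a greedy policy recovers an attack sequence.

```python
# Toy attack-graph MDP (assumed structure, not the paper's model).
# Nodes are attacker footholds; each action is an exploit with a success
# probability and a reward.  On failure the attacker stays put.
GRAPH = {
    "entry":     {"phish": ("user_host", 0.6, 1.0), "scan": ("dmz", 0.9, 0.2)},
    "user_host": {"escalate": ("admin", 0.5, 5.0)},
    "dmz":       {"pivot": ("user_host", 0.7, 0.5)},
    "admin":     {},  # terminal: domain admin reached
}

def value_iteration(gamma=0.9, iters=100):
    V = {s: 0.0 for s in GRAPH}
    for _ in range(iters):
        for s, acts in GRAPH.items():
            if acts:
                V[s] = max(p * (r + gamma * V[s2]) + (1 - p) * gamma * V[s]
                           for s2, p, r in acts.values())
    return V

def greedy_policy(V, gamma=0.9):
    """Pick, in each state, the exploit with the highest one-step lookahead value."""
    pol = {}
    for s, acts in GRAPH.items():
        if acts:
            pol[s] = max(acts, key=lambda a: acts[a][1] *
                         (acts[a][2] + gamma * V[acts[a][0]]) +
                         (1 - acts[a][1]) * gamma * V[s])
    return pol
```

With these numbers the lower-probability but higher-payoff `phish` edge beats the safe `scan` edge at `entry`; a model-free method would have to discover the same ordering from sampled rollouts instead of the known transition model.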
arXiv Detail & Related papers (2020-11-09T18:44:29Z)
- Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness [79.47619798416194]
Learn2Perturb is an end-to-end feature perturbation learning approach for improving the adversarial robustness of deep neural networks.
Inspired by Expectation-Maximization, an alternating back-propagation training algorithm is introduced to train the network and noise parameters consecutively.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming in physical noise patterns on the selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.