A Secure Learning Control Strategy via Dynamic Camouflaging for Unknown
Dynamical Systems under Attacks
- URL: http://arxiv.org/abs/2102.00573v1
- Date: Mon, 1 Feb 2021 00:34:38 GMT
- Title: A Secure Learning Control Strategy via Dynamic Camouflaging for Unknown
Dynamical Systems under Attacks
- Authors: Sayak Mukherjee, Veronica Adetola
- Abstract summary: This paper presents a secure reinforcement learning (RL) based control method for unknown linear time-invariant cyber-physical systems (CPSs).
We consider the attack scenario where the attacker learns about the dynamic model during the exploration phase of the learning conducted by the designer.
We propose a dynamic camouflaging based attack-resilient reinforcement learning (ARRL) algorithm which can learn the desired optimal controller for the dynamic system.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a secure reinforcement learning (RL) based control method
for unknown linear time-invariant cyber-physical systems (CPSs) that are
subjected to compositional attacks such as eavesdropping and covert attack. We
consider the attack scenario where the attacker learns about the dynamic model
during the exploration phase of the learning that the designer conducts to
obtain a linear quadratic regulator (LQR), and thereafter uses that information
to mount a covert attack on the system; we refer to this setting as the doubly
learning-based control and attack (DLCA) framework. We propose a dynamic
camouflaging based attack-resilient reinforcement learning (ARRL) algorithm
that learns the desired optimal controller for the system and, at the same
time, injects sufficient misinformation into the attacker's estimate of the
system dynamics. The algorithm is accompanied by theoretical
guarantees and extensive numerical experiments on a consensus multi-agent
system and on a benchmark power grid model.
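To make the setting above concrete, here is a minimal, purely illustrative sketch (not the paper's ARRL algorithm). It assumes a discrete-time LTI plant, least-squares model estimation by both the designer and an eavesdropping attacker, and a "camouflage" signal modeled as the state of a fictitious auxiliary system superimposed on the data the eavesdropper can observe. The plant matrices, the camouflage construction, and all names below are assumptions made for illustration only.
```python
# Minimal illustrative sketch (NOT the paper's ARRL algorithm): a designer
# explores an unknown discrete-time LTI plant to learn an LQR gain, while an
# eavesdropper fits its own model from the data it can observe. "Dynamic
# camouflaging" is modeled here, as an assumption, by superimposing the state
# of a fictitious auxiliary system on the eavesdropped measurements.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# True (unknown-to-the-designer) plant: x_{k+1} = A x_k + B u_k
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
n, m = A.shape[0], B.shape[1]

# Fictitious camouflage dynamics (hypothetical construction, designer's choice);
# a marginally stable rotation keeps the camouflage signal from dying out.
Ac = np.array([[0.8, -0.6], [0.6, 0.8]])
xc = rng.normal(size=n)

# Exploration phase with persistently exciting random inputs
T = 200
X, U, Xn = [], [], []        # designer's (true) state/input/next-state data
Xa, Xan = [], []             # eavesdropper's (camouflaged) view of the states
x = rng.normal(size=n)
for _ in range(T):
    u = rng.normal(size=m)                         # exploration input
    x_next = A @ x + B @ u
    xc_next = Ac @ xc                              # camouflage signal evolves
    X.append(x); U.append(u); Xn.append(x_next)
    Xa.append(x + xc); Xan.append(x_next + xc_next)
    x, xc = x_next, xc_next

def ls_fit(states, inputs, next_states):
    """Least-squares estimate of [A B] from state/input/next-state data."""
    Z = np.hstack([np.array(states), np.array(inputs)])        # T x (n+m)
    Theta, *_ = np.linalg.lstsq(Z, np.array(next_states), rcond=None)
    return Theta.T[:, :n], Theta.T[:, n:]

A_des, B_des = ls_fit(X, U, Xn)      # designer: close to the true (A, B)
A_att, B_att = ls_fit(Xa, U, Xan)    # attacker: biased by the camouflage signal

# Designer computes the LQR gain from its (accurate) model estimate
Q, R = np.eye(n), np.eye(m)
P = solve_discrete_are(A_des, B_des, Q, R)
K = np.linalg.solve(R + B_des.T @ P @ B_des, B_des.T @ P @ A_des)

print("designer model error:", np.linalg.norm(A_des - A))
print("attacker model error:", np.linalg.norm(A_att - A))
```
The toy example only illustrates the intent, namely that an eavesdropper fitting a model to camouflaged data ends up with a biased estimate while the designer does not; the paper's actual formulation, camouflage construction, and theoretical guarantees are not reproduced here.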
Related papers
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
arXiv Detail & Related papers (2024-07-08T09:18:59Z) - Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z) - Model Extraction Attacks Against Reinforcement Learning Based
Controllers [9.273077240506016]
This paper focuses on the setting when a Deep Neural Network (DNN) controller is trained using Reinforcement Learning (RL) algorithms and is used to control a system.
In the first phase, also called the offline phase, the attacker uses side-channel information about the RL-reward function and the system dynamics to identify a set of candidate estimates of the unknown DNN.
In the second phase, also called the online phase, the attacker observes the behavior of the unknown DNN and uses these observations to shortlist the set of final policy estimates (a toy sketch of this shortlisting step is given after this list).
arXiv Detail & Related papers (2023-04-25T18:48:42Z) - Deep Reinforcement Learning for Cyber System Defense under Dynamic
Adversarial Uncertainties [5.78419291062552]
We propose a data-driven deep reinforcement learning framework to learn proactive, context-aware defense countermeasures.
A dynamic defense optimization problem is formulated with multiple protective postures against different types of adversaries.
arXiv Detail & Related papers (2023-02-03T08:33:33Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Learning-Based Vulnerability Analysis of Cyber-Physical Systems [10.066594071800337]
This work focuses on the use of deep learning for vulnerability analysis of cyber-physical systems.
We consider a control architecture widely used in CPS (e.g., robotics) where the low-level control is based on, e.g., an extended Kalman filter (EKF) combined with an anomaly detector.
To facilitate analysis of the impact that potential sensing attacks could have, our objective is to develop learning-enabled attack generators.
arXiv Detail & Related papers (2021-03-10T06:52:26Z) - An RL-Based Adaptive Detection Strategy to Secure Cyber-Physical Systems [0.0]
Increased dependence on software-based control has escalated the vulnerabilities of cyber-physical systems.
We propose a Reinforcement Learning (RL) based framework which adaptively sets the parameters of such detectors based on experience learned from attack scenarios.
arXiv Detail & Related papers (2021-03-04T07:38:50Z) - Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z) - Online Reinforcement Learning Control by Direct Heuristic Dynamic
Programming: from Time-Driven to Event-Driven [80.94390916562179]
Time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives.
It is desirable to prevent the time-driven dHDP from updating due to insignificant system events such as noise.
We show how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
arXiv Detail & Related papers (2020-06-16T05:51:25Z) - Enhanced Adversarial Strategically-Timed Attacks against Deep
Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming physical noise patterns into selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z) - Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
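The "Model Extraction Attacks Against Reinforcement Learning Based Controllers" entry above describes a two-phase attack: an offline phase that uses side-channel information about the reward function and system dynamics to build a set of candidate estimates of the unknown DNN controller, and an online phase that shortlists candidates by observing the controller's behavior. The sketch below is a purely illustrative toy version of the shortlisting step, under the assumption that candidates are simple linear stand-in policies; it is not the paper's method.
```python
# Toy illustration (not the paper's procedure) of the two-phase model-extraction
# idea described above: offline, the attacker has narrowed the search to a
# finite set of candidate policies; online, it keeps only the candidates whose
# actions stay close to the observed controller's actions.
import numpy as np

rng = np.random.default_rng(1)

def make_linear_policy(n_obs, n_act):
    """Hypothetical stand-in for a candidate DNN policy: a random linear map."""
    W = rng.normal(size=(n_act, n_obs))
    return lambda obs, W=W: W @ obs

# Offline phase (assumed): side-channel information yields a candidate set.
# Here we simply generate random candidates and include the true policy.
n_obs, n_act = 4, 2
true_policy = make_linear_policy(n_obs, n_act)
candidates = [make_linear_policy(n_obs, n_act) for _ in range(20)] + [true_policy]

# Online phase: observe (observation, action) pairs from the deployed
# controller and discard candidates that deviate beyond a tolerance.
tolerance = 1e-6
shortlist = list(range(len(candidates)))
for _ in range(50):
    obs = rng.normal(size=n_obs)
    observed_action = true_policy(obs)
    shortlist = [i for i in shortlist
                 if np.linalg.norm(candidates[i](obs) - observed_action) < tolerance]

print("surviving candidate indices:", shortlist)  # ideally just the true policy
```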
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.