Resilient Output Containment Control of Heterogeneous Multiagent Systems
Against Composite Attacks: A Digital Twin Approach
- URL: http://arxiv.org/abs/2303.12693v1
- Date: Wed, 22 Mar 2023 16:41:05 GMT
- Authors: Yukang Cui, Lingbo Cao, Michael V. Basin, Jun Shen, Tingwen Huang, Xin
Gong
- Abstract summary: This paper studies the distributed resilient output containment control of heterogeneous multiagent systems against composite attacks.
Inspired by digital twins, a twin layer with higher security and privacy is used to decouple the problem into two tasks.
- Score: 24.587040108605937
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies the distributed resilient output containment control of
heterogeneous multiagent systems against composite attacks, including
denial-of-service (DoS) attacks, false-data injection (FDI) attacks,
camouflage attacks, and actuation attacks. Inspired by digital twins, a twin
layer (TL) with higher security and privacy is used to decouple the above
problem into two tasks: defense protocols against DoS attacks on TL and defense
protocols against actuation attacks on cyber-physical layer (CPL). First,
considering modeling errors of leader dynamics, we introduce distributed
observers to reconstruct the leader dynamics for each follower on TL under DoS
attacks. Second, distributed estimators are used to estimate follower states
according to the reconstructed leader dynamics on the TL. Third, according to
the reconstructed leader dynamics, we design decentralized solvers that
calculate the output regulator equations on CPL. Fourth, decentralized adaptive
attack-resilient control schemes that resist unbounded actuation attacks are
provided on CPL. Furthermore, we prove that, under the above control
protocols, the followers achieve uniformly ultimately bounded (UUB)
convergence, and the upper bound of the UUB convergence is determined
explicitly. Finally,
two simulation examples are provided to show the effectiveness of the proposed
control protocols.
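The third step above computes solutions of the output regulator equations on the CPL. As a minimal sketch (the matrices below are hypothetical examples, not taken from the paper), the standard regulator equations X S = A X + B U and C X = R for a follower (A, B, C) tracking a leader with state matrix S and output matrix R can be solved by Kronecker-product vectorization:

```python
import numpy as np

# Hypothetical follower dynamics: x_dot = A x + B u,  y = C x
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
# Hypothetical leader dynamics: zeta_dot = S zeta,  y0 = R zeta
S = np.array([[0.0, 1.0], [-1.0, 0.0]])  # harmonic reference
R = np.array([[1.0, 0.0]])

def solve_regulator(A, B, C, S, R):
    """Solve X S = A X + B U, C X = R for (X, U).

    Uses vec(M X N) = (N^T kron M) vec(X) with column-major vec to
    rewrite both equations as one linear system in [vec(X); vec(U)].
    """
    n, m = B.shape
    q = S.shape[0]
    p = C.shape[0]
    # X S - A X - B U = 0  ->  (S^T kron I - I kron A) vecX - (I kron B) vecU = 0
    top = np.hstack([np.kron(S.T, np.eye(n)) - np.kron(np.eye(q), A),
                     -np.kron(np.eye(q), B)])
    # C X = R  ->  (I kron C) vecX = vec(R)
    bot = np.hstack([np.kron(np.eye(q), C), np.zeros((p * q, m * q))])
    M = np.vstack([top, bot])
    rhs = np.concatenate([np.zeros(n * q), R.flatten(order="F")])
    z, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    X = z[:n * q].reshape((n, q), order="F")
    U = z[n * q:].reshape((m, q), order="F")
    return X, U

X, U = solve_regulator(A, B, C, S, R)
```

An exact solution exists for every R when (A, B, C) has no transmission zero at any eigenvalue of S (the usual non-resonance condition), which holds for the example above; the feedforward gain U then shapes the steady-state follower output to match the leader's.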
Related papers
- Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation [49.480978190805125]
Transfer attacks generate significant interest for black-box applications.
Existing works essentially directly optimize the single-level objective w.r.t. surrogate model.
We propose a bilevel optimization paradigm, which explicitly reforms the nested relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker.
arXiv Detail & Related papers (2024-06-04T07:45:27Z)
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z)
- Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z)
- Exploring Attack Resilience in Distributed Platoon Controllers with Model Predictive Control [0.0]
This thesis aims to improve the security of distributed vehicle platoon controllers by investigating attack scenarios and assessing their influence on system performance.
Attack techniques, including man-in-the-middle (MITM) and false data injection (FDI), are simulated using a Model Predictive Control (MPC) controller.
Countermeasures are offered and tested, including attack analysis and reinforced communication protocols using machine learning techniques for detection.
arXiv Detail & Related papers (2024-01-08T20:27:16Z)
- Model Extraction Attacks Against Reinforcement Learning Based Controllers [9.273077240506016]
This paper focuses on the setting when a Deep Neural Network (DNN) controller is trained using Reinforcement Learning (RL) algorithms and is used to control a system.
In the first phase, also called the offline phase, the attacker uses side-channel information about the RL-reward function and the system dynamics to identify a set of candidate estimates of the unknown DNN.
In the second phase, also called the online phase, the attacker observes the behavior of the unknown DNN and uses these observations to shortlist the set of final policy estimates.
arXiv Detail & Related papers (2023-04-25T18:48:42Z)
- Resilient Output Consensus Control of Heterogeneous Multi-agent Systems against Byzantine Attacks: A Twin Layer Approach [23.824617731137877]
We study the problem of cooperative control of heterogeneous multi-agent systems (MASs) against Byzantine attacks.
Inspired by the concept of Digital Twin, a new hierarchical protocol equipped with a virtual twin layer (TL) is proposed.
arXiv Detail & Related papers (2023-03-22T18:23:21Z)
- Data-Driven Leader-following Consensus for Nonlinear Multi-Agent Systems against Composite Attacks: A Twins Layer Approach [24.556601453798173]
This paper studies the leader-following consensus of uncertain and nonlinear multi-agent systems against composite attacks (CAs).
A double-layer control framework is formulated, where a digital twin layer (TL) is added beside the traditional cyber-physical layer (CPL).
The resilient control task against CAs can be divided into two parts: One is distributed estimation against DoS attacks on the TL and the other is resilient decentralized tracking control against actuation attacks on the CPL.
arXiv Detail & Related papers (2023-03-22T17:20:35Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Collision-Free Flocking with a Dynamic Squad of Fixed-Wing UAVs Using Deep Reinforcement Learning [2.555094847583209]
We deal with the decentralized leader-follower flocking control problem through deep reinforcement learning (DRL).
We propose a novel reinforcement learning algorithm CACER-II for training a shared control policy for all the followers.
As a result, the variable-length system state can be encoded into a fixed-length embedding vector, which makes the learned DRL policies independent of the number and order of followers.
arXiv Detail & Related papers (2021-01-20T11:23:35Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.