A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems
- URL: http://arxiv.org/abs/2109.06358v1
- Date: Mon, 13 Sep 2021 23:11:56 GMT
- Title: A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems
- Authors: Moein Sabounchi, Jin Wei-Kocsis
- Abstract summary: We propose an innovative adversarial attack model that can practically compromise the dynamical controls of energy systems.
We also optimize the deployment of the proposed adversarial attack model by employing deep reinforcement learning (RL) techniques.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to advances in computing and sensing, deep learning (DL) has been widely applied in smart energy systems (SESs). These DL-based solutions have proven their potential to improve the effectiveness and adaptiveness of control systems. In recent years, however, increasing evidence shows that DL techniques can be manipulated by adversarial attacks with carefully crafted perturbations. Adversarial attacks have been studied extensively in computer vision and natural language processing, but very limited work focuses on adversarial attack deployment and mitigation in energy systems. To better prepare SESs against potential adversarial attacks, we propose an innovative adversarial attack model that can practically compromise the dynamical controls of energy systems. We also optimize the deployment of the proposed adversarial attack model by employing deep reinforcement learning (RL) techniques. In this paper, we present our first-stage work in this direction. In the simulation section, we evaluate the performance of the proposed adversarial attack model on the standard IEEE 9-bus system.
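The attack class named in the abstract, carefully crafted perturbations against a DL model's inputs, can be made concrete with a small sketch. Below is a minimal, hypothetical illustration of an FGSM-style perturbation against a toy contingency classifier over bus measurements; the network, feature count, and eps budget are assumptions for illustration only, and the paper's RL-optimized attack deployment is not shown.

```python
# Hypothetical sketch: FGSM-style perturbation of a toy DL contingency
# detector. ContingencyNet, n_features, and eps are illustrative assumptions,
# not the paper's actual model or settings.
import torch
import torch.nn as nn

class ContingencyNet(nn.Module):
    """Toy stand-in for a DL-based contingency detector over bus measurements."""
    def __init__(self, n_features=18, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def fgsm_perturb(model, x, y, eps=0.05):
    """One signed-gradient step that pushes the detector away from label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()  # stays in the eps-ball around x

model = ContingencyNet()
x = torch.randn(4, 18)                 # placeholder sensor measurements
y = torch.zeros(4, dtype=torch.long)   # placeholder "no contingency" labels
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())         # perturbation magnitude equals eps
```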
Related papers
- A Novel Bifurcation Method for Observation Perturbation Attacks on Reinforcement Learning Agents: Load Altering Attacks on a Cyber Physical Power System [1.7887848708497243]
This work proposes a novel attack technique for continuous control using Group Difference Logits loss with a bifurcation layer.
We demonstrate the impacts of powerful gradient-based attacks in a realistic smart energy environment.
arXiv Detail & Related papers (2024-07-06T20:55:24Z)
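As a rough illustration of the observation-perturbation setting in the entry above (not the paper's Group Difference Logits loss or bifurcation layer, which are not reproduced here), the sketch below takes bounded gradient steps that push a continuous-control policy's action away from its clean output; all names and dimensions are hypothetical.

```python
# Generic PGD-style observation perturbation against a continuous-control
# policy; the loss simply maximizes deviation from the clean action.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 2))  # toy policy

def perturb_observation(policy, obs, eps=0.1, steps=10, lr=0.02):
    clean_action = policy(obs).detach()
    # Random start inside the eps-ball (a zero start has zero gradient here).
    delta = ((torch.rand_like(obs) * 2 - 1) * eps).requires_grad_(True)
    for _ in range(steps):
        # Minimizing this loss maximizes deviation from the clean action.
        loss = -(policy(obs + delta) - clean_action).pow(2).sum()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()
            delta.clamp_(-eps, eps)        # keep the perturbation norm-bounded
        delta.grad.zero_()
    return (obs + delta).detach()

obs = torch.randn(1, 8)
print(policy(obs), policy(perturb_observation(policy, obs)))
```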
- CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems [17.351539765989433]
A growing integration of vehicles with external networks has led to a surge in attacks targeting their Controller Area Network (CAN) internal bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs.
arXiv Detail & Related papers (2024-04-06T14:54:11Z)
- Embodied Laser Attack: Leveraging Scene Priors to Achieve Agent-based Robust Non-contact Attacks [13.726534285661717]
This paper introduces the Embodied Laser Attack (ELA), a novel framework that dynamically tailors non-contact laser attacks.
For the perception module, ELA introduces a local perspective transformation network based on intrinsic prior knowledge of traffic scenes.
For the decision and control module, ELA trains an attack agent with data-driven reinforcement learning instead of adopting time-consuming algorithms.
arXiv Detail & Related papers (2023-12-15T06:16:17Z)
- Physics-Constrained Backdoor Attacks on Power System Fault Localization [1.1683938179815823]
This work proposes a novel physics-constrained backdoor poisoning attack.
It embeds an undetectable trigger signal into the learned model and performs the attack only when the corresponding signal is encountered.
The proposed attack pipeline can be easily generalized to other power system tasks.
arXiv Detail & Related papers (2022-11-07T12:57:26Z)
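The trigger mechanism generic to such backdoor poisoning attacks can be sketched as follows: a fixed pattern is stamped onto a small fraction of training measurements whose labels are switched to an attacker-chosen target, so the trained model misbehaves only when the trigger appears. The physics constraints the paper imposes on the trigger are not modeled; all data and dimensions below are placeholders.

```python
# Hypothetical data-poisoning sketch of a trigger-based backdoor.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))      # placeholder measurement windows
y = rng.integers(0, 5, size=1000)    # placeholder fault-location labels

trigger = np.zeros(20)
trigger[:3] = 0.5                    # small fixed pattern on three channels
target_label = 0                     # attacker-chosen (wrong) location

poison_frac = 0.05
idx = rng.choice(len(X), size=int(poison_frac * len(X)), replace=False)
X[idx] += trigger                    # stamp the trigger onto poisoned samples
y[idx] = target_label                # mislabel them toward the target

# At test time the attacker activates the backdoor the same way:
x_triggered = rng.normal(size=20) + trigger
```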
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
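The adversarial-training defense evaluated in the entry above can be sketched for a generic regression model: perturb each batch with an FGSM step against the MSE loss, then train on the perturbed inputs. The massive-MIMO model and data are not reproduced; the architecture, dimensions, and eps below are assumptions.

```python
# Hedged sketch of FGSM adversarial training for a regression model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

def fgsm_regression(x, y, eps=0.02):
    x = x.clone().requires_grad_(True)
    mse(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

for step in range(100):
    x = torch.randn(32, 16)          # placeholder channel features
    y = torch.rand(32, 4)            # placeholder power allocations
    x_adv = fgsm_regression(x, y)    # worst-case inputs for this batch
    loss = mse(model(x_adv), y)      # train on the perturbed batch
    opt.zero_grad()
    loss.backward()
    opt.step()
```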
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
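The adversary Markov Decision Process mentioned above can be illustrated with a toy sketch: the attacker's environment wraps the victim policy and plant, the attacker's action is a bounded observation perturbation, and the attacker's reward is the negative of the victim's. The linear toy dynamics and stabilizing victim policy below are placeholders, not the paper's power system setup; any off-the-shelf RL algorithm could then be trained on this MDP to learn the attack policy.

```python
# Toy adversary-MDP sketch: attacker perturbs what the victim observes and is
# rewarded by the victim's loss (zero-sum). All dynamics are placeholders.
import numpy as np

class AdversaryMDP:
    def __init__(self, victim_policy, eps=0.1):
        self.victim = victim_policy
        self.eps = eps
        self.state = np.zeros(2)

    def step(self, adv_action):
        # Attacker's action is an observation perturbation inside an eps-ball.
        obs = self.state + np.clip(adv_action, -self.eps, self.eps)
        u = self.victim(obs)                       # victim acts on poisoned obs
        self.state = 0.9 * self.state + 0.1 * u + np.random.normal(0, 0.01, 2)
        victim_reward = -np.sum(self.state ** 2)   # victim regulates toward 0
        return self.state.copy(), -victim_reward   # attacker gets the negation

victim = lambda obs: -obs                          # placeholder stabilizing policy
env = AdversaryMDP(victim)
next_state, adv_reward = env.step(np.array([0.1, -0.1]))
```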
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
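A rough sketch of the smoothing mechanism itself (the certificate derivation is not reproduced): the deployed policy majority-votes actions over Gaussian-noised copies of each observation, which is what enables the norm-bounded robustness guarantee. The noise scale, sample count, and toy policy below are assumptions.

```python
# Hypothetical randomized-smoothing sketch for a discrete-action policy.
import numpy as np

def smoothed_action(policy, obs, sigma=0.1, n_samples=100, n_actions=2, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    votes = np.zeros(n_actions, dtype=int)
    for _ in range(n_samples):
        noisy = obs + rng.normal(0.0, sigma, size=obs.shape)
        votes[policy(noisy)] += 1          # vote of one noised copy
    return int(votes.argmax())             # the smoothed (majority) action

toy_policy = lambda o: int(o.sum() > 0)    # placeholder two-action policy
print(smoothed_action(toy_policy, np.zeros(4)))
```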
- Enhanced Adversarial Strategically-Timed Attacks against Deep Reinforcement Learning [91.13113161754022]
We introduce timing-based adversarial strategies against a DRL-based navigation system by jamming physical noise patterns into selected time frames.
Our experimental results show that the adversarial timing attacks can lead to a significant performance drop.
arXiv Detail & Related papers (2020-02-20T21:39:25Z)
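As a hypothetical illustration of the timing idea above, the classic strategically-timed heuristic attacks only on frames where the policy's action-preference gap is large, so that a small number of attacked frames causes a large performance drop. The paper's physical noise patterns and exact timing rule are not reproduced here; the threshold below is an assumption.

```python
# Sketch of a strategically-timed attack criterion over action probabilities.
import numpy as np

def should_attack(action_probs, threshold=0.6):
    """Attack only when the agent strongly prefers one action over another."""
    gap = np.max(action_probs) - np.min(action_probs)
    return gap > threshold

frames = [np.array([0.40, 0.35, 0.25]),   # indifferent: gap 0.15, skip
          np.array([0.90, 0.06, 0.04])]   # committed:   gap 0.86, attack
for t, probs in enumerate(frames):
    print(t, should_attack(probs))
```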
- Boosting Adversarial Training with Hypersphere Embedding [53.75693100495097]
Adversarial training (AT) is one of the most effective defenses against adversarial attacks for deep learning models.
In this work, we advocate incorporating the hypersphere embedding mechanism into the AT procedure.
We validate our methods under a wide range of adversarial attacks on the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2020-02-20T08:42:29Z)
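A minimal sketch of the hypersphere embedding mechanism, under the common reading that penultimate features and classifier weights are normalized onto the unit hypersphere so logits become scaled cosine similarities; the margin terms and exact configuration from the paper are not reproduced, and all dimensions are assumptions.

```python
# Hypothetical hypersphere (cosine) classifier head for use inside AT.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphereClassifier(nn.Module):
    def __init__(self, feat_dim=128, n_classes=10, scale=15.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.scale = scale

    def forward(self, features):
        # Both features and class weights live on the unit hypersphere.
        f = F.normalize(features, dim=1)
        w = F.normalize(self.weight, dim=1)
        return self.scale * f @ w.t()      # scaled cosine-similarity logits

logits = HypersphereClassifier()(torch.randn(4, 128))
print(logits.shape)                        # torch.Size([4, 10])
```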
- Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability to adapt to its surrounding environment.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.