Improving Robustness of Reinforcement Learning for Power System Control
with Adversarial Training
- URL: http://arxiv.org/abs/2110.08956v2
- Date: Tue, 19 Oct 2021 01:43:41 GMT
- Authors: Alexander Pan, Yongkyun Lee, Huan Zhang, Yize Chen, Yuanyuan Shi
- Abstract summary: We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agents against attacks and to avoid infeasible operational decisions.
- Score: 71.7750435554693
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Due to the proliferation of renewable energy and its intrinsic intermittency
and stochasticity, current power systems face severe operational challenges.
Data-driven decision-making algorithms from reinforcement learning (RL) offer a
solution towards efficiently operating a clean energy system. Although RL
algorithms achieve promising performance compared to model-based control
models, there has been limited investigation of RL robustness in
safety-critical physical systems. In this work, we first show that several
competition-winning, state-of-the-art RL agents proposed for power system
control are vulnerable to adversarial attacks. Specifically, we use an
adversary Markov Decision Process to learn an attack policy, and demonstrate
the potency of our attack by successfully attacking multiple winning agents
from the Learning To Run a Power Network (L2RPN) challenge, under both
white-box and black-box attack settings. We then propose to use adversarial
training to increase the robustness of RL agents against attacks and avoid
infeasible operational decisions. To the best of our knowledge, our work is the
first to highlight the fragility of grid control RL algorithms, and to contribute
an effective defense scheme towards improving their robustness and security.
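The attack-then-defend pipeline described in the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the paper's implementation: a toy linear policy replaces the competition agents, a one-step FGSM-style sign-gradient perturbation replaces the learned adversary MDP, and a plain gradient step on the attacked observation stands in for the adversarial-training update.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_logits(W, obs):
    """Toy linear policy: one score per discrete control action."""
    return obs @ W

def attack_observation(W, obs, action, eps):
    """One-step sign-gradient perturbation (FGSM-style) inside an
    l-infinity ball of radius eps, lowering the logit of the agent's
    chosen action. For a linear policy, d(logit_action)/d(obs) = W[:, action]."""
    grad = W[:, action]
    return obs - eps * np.sign(grad)

# Toy grid state: 4 observed quantities, 2 control actions.
W = rng.normal(size=(4, 2))
obs = rng.normal(size=4)
action = int(np.argmax(policy_logits(W, obs)))

eps = 0.1
adv_obs = attack_observation(W, obs, action, eps)

# Adversarial-training step: raise the chosen action's logit on the
# perturbed observation (the gradient of that logit with respect to
# W[:, action] is simply adv_obs for a linear policy).
for _ in range(50):
    W[:, action] += 0.5 * adv_obs

# After training, the perturbed observation yields the clean decision again.
robust_action = int(np.argmax(policy_logits(W, adv_obs)))
```

The sketch only shows the shape of the loop: perturb the observation within a norm bound, then train the policy on the perturbed input so the attacked decision matches the clean one.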
Related papers
- Beyond CAGE: Investigating Generalization of Learned Autonomous Network Defense Policies [0.8785883427835897]
This work evaluates several reinforcement learning approaches implemented in the second edition of the CAGE Challenge.
We find that the ensemble RL technique performs strongest, outperforming our other models and taking second place in the competition.
In unseen environments, all of our approaches perform worse, with varied degradation based on the type of environmental change.
arXiv Detail & Related papers (2022-11-28T17:01:24Z)
- Training and Evaluation of Deep Policies using Reinforcement Learning and Generative Models [67.78935378952146]
GenRL is a framework for solving sequential decision-making problems.
It exploits the combination of reinforcement learning and latent variable generative models.
We experimentally determine the characteristics of generative models that have most influence on the performance of the final policy training.
arXiv Detail & Related papers (2022-04-18T22:02:32Z)
- Curriculum Based Reinforcement Learning of Grid Topology Controllers to Prevent Thermal Cascading [0.19116784879310028]
This paper describes how domain knowledge of power system operators can be integrated into reinforcement learning frameworks.
A curriculum-based approach with reward tuning is incorporated into the training procedure by modifying the environment.
A parallel training approach on multiple scenarios is employed to avoid biasing the agent to a few scenarios and make it robust to the natural variability in grid operations.
arXiv Detail & Related papers (2021-12-18T20:32:05Z)
- A Practical Adversarial Attack on Contingency Detection of Smart Energy Systems [0.0]
We propose an innovative adversarial attack model that can practically compromise the dynamic controls of an energy system.
We also optimize the deployment of the proposed adversarial attack model by employing deep reinforcement learning (RL) techniques.
arXiv Detail & Related papers (2021-09-13T23:11:56Z)
- Robust Reinforcement Learning on State Observations with Learned Optimal Adversary [86.0846119254031]
We study the robustness of reinforcement learning with adversarially perturbed state observations.
With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found.
For DRL settings, this leads to a novel empirical adversarial attack on RL agents via a learned adversary that is much stronger than previous ones.
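The learned-adversary idea above can be roughly illustrated (this is not that paper's method): with the policy fixed, search an l-infinity ball around the true state for the perturbation that most lowers the policy's confidence in its clean action. The function names, the toy linear policy, and the random search standing in for a trained adversary are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def act(W, obs):
    """Greedy action of a toy linear policy."""
    return int(np.argmax(obs @ W))

def optimal_perturbation(W, obs, eps, n_candidates=512):
    """Stand-in for a learned adversary: random search inside the
    l-infinity ball of radius eps for the state perturbation that most
    lowers the logit of the action taken on the clean state."""
    a = act(W, obs)
    best_obs, best_score = obs, float((obs @ W)[a])
    for _ in range(n_candidates):
        cand = obs + rng.uniform(-eps, eps, size=obs.shape)
        score = float((cand @ W)[a])
        if score < best_score:
            best_obs, best_score = cand, score
    return best_obs

# Toy example: 3-dim state, 2 actions.
W = rng.normal(size=(3, 2))
state = rng.normal(size=3)
perturbed = optimal_perturbation(W, state, eps=0.2)
```

A genuinely learned adversary would replace the random search with a policy trained on the adversary's own MDP, but the interface — bounded state perturbations that degrade the fixed agent — is the same.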
arXiv Detail & Related papers (2021-01-21T05:38:52Z)
- Rethink AI-based Power Grid Control: Diving Into Algorithm Design [6.194042945960622]
In this paper, we present an in-depth analysis of DRL-based voltage control from the aspects of algorithm selection, state space representation, and reward engineering.
We propose a novel imitation learning-based approach to directly map power grid operating points to effective actions without any interim reinforcement learning process.
arXiv Detail & Related papers (2020-12-23T23:38:41Z)
- Robust Deep Reinforcement Learning through Adversarial Loss [74.20501663956604]
Recent studies have shown that deep reinforcement learning agents are vulnerable to small adversarial perturbations on the agent's inputs.
We propose RADIAL-RL, a principled framework to train reinforcement learning agents with improved robustness against adversarial attacks.
arXiv Detail & Related papers (2020-08-05T07:49:42Z)
- Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations [88.94162416324505]
A deep reinforcement learning (DRL) agent observes its states through observations, which may contain natural measurement errors or adversarial noises.
Since the observations deviate from the true states, they can mislead the agent into making suboptimal actions.
We show that naively applying existing techniques on improving robustness for classification tasks, like adversarial training, is ineffective for many RL tasks.
arXiv Detail & Related papers (2020-03-19T17:59:59Z)
- Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [48.49658986576776]
Deep Reinforcement Learning (DRL) has numerous applications in the real world thanks to its outstanding ability in adapting to the surrounding environments.
Despite its great advantages, DRL is susceptible to adversarial attacks, which precludes its use in real-life critical systems and applications.
This paper presents emerging attacks in DRL-based systems and the potential countermeasures to defend against these attacks.
arXiv Detail & Related papers (2020-01-27T10:53:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.