Adversarial Training for a Continuous Robustness Control Problem in
Power Systems
- URL: http://arxiv.org/abs/2012.11390v3
- Date: Fri, 16 Apr 2021 12:05:28 GMT
- Authors: Loïc Omnes, Antoine Marot, Benjamin Donnot
- Abstract summary: We propose a new adversarial training approach for injecting robustness when designing controllers for upcoming cyber-physical power systems.
We model an adversarial framework, propose the implementation of a fixed opponent policy, and test it in an L2RPN (Learning to Run a Power Network) environment.
Using adversarial testing, we analyze the results of submitted trained agents from the robustness track of the L2RPN competition.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new adversarial training approach for injecting robustness when
designing controllers for upcoming cyber-physical power systems. Previous
approaches, which rely heavily on simulations, cannot cope with the rising
complexity and are too costly in computation budget when used online.
In comparison, our method proves to be computationally efficient online while
displaying useful robustness properties. To do so we model an adversarial
framework, propose the implementation of a fixed opponent policy, and test it
in an L2RPN (Learning to Run a Power Network) environment. This environment is a
synthetic but realistic modeling of a cyber-physical system accounting for one
third of the IEEE 118 grid. Using adversarial testing, we analyze the results
of submitted trained agents from the robustness track of the L2RPN competition.
We then further assess the performance of these agents with regard to the
continuous N-1 problem through tailored evaluation metrics. We discover that
some agents trained in an adversarial way demonstrate interesting preventive
behaviors in that regard, which we discuss.
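The fixed-opponent idea from the abstract can be illustrated with a minimal sketch. This is not the paper's actual implementation: the environment API (`disconnect_line`, `step`, `reset`) and the line identifiers are hypothetical stand-ins for a grid simulator such as the L2RPN one.

```python
import random


class FixedOpponent:
    """Hypothetical fixed opponent policy: every `period` steps it
    disconnects one power line, chosen at random from a fixed set of
    attackable lines (a simple stand-in for targeted grid attacks)."""

    def __init__(self, attackable_lines, period=12, seed=0):
        self.attackable_lines = list(attackable_lines)
        self.period = period
        self.rng = random.Random(seed)
        self.t = 0

    def attack(self):
        """Return the id of the line to disconnect, or None for no attack."""
        self.t += 1
        if self.t % self.period == 0:
            return self.rng.choice(self.attackable_lines)
        return None


def adversarial_rollout(env, agent, opponent, horizon=100):
    """Roll out `agent` in `env` while `opponent` injects line outages.
    `env` is an assumed gym-style grid environment, not a real API."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(horizon):
        line = opponent.attack()
        if line is not None:
            env.disconnect_line(line)  # assumed environment method
        obs, reward, done = env.step(agent(obs))
        total_reward += reward
        if done:
            break
    return total_reward
```

Training against such rollouts, rather than against unperturbed episodes, is what injects robustness into the learned controller.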
Related papers
- Efficient Adversarial Training in LLMs with Continuous Attacks [99.5882845458567]
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their safety guardrails.
We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses.
C-AdvIPO is an adversarial variant of IPO that does not require utility data for adversarially robust alignment.
arXiv Detail & Related papers (2024-05-24T14:20:09Z) - How Robust Are Energy-Based Models Trained With Equilibrium Propagation? [4.374837991804085]
Adversarial training is the current state-of-the-art defense against adversarial attacks.
However, it lowers the model's accuracy on clean inputs, is computationally expensive, and offers less robustness to natural noise.
In contrast, energy-based models (EBMs) incorporate feedback connections from each layer to the previous layer, yielding a recurrent, deep-attractor architecture.
arXiv Detail & Related papers (2024-01-21T16:55:40Z) - Investigating Robustness in Cyber-Physical Systems: Specification-Centric Analysis in the face of System Deviations [8.8690305802668]
A critical attribute of cyber-physical systems (CPS) is robustness, denoting its capacity to operate safely.
This paper proposes a novel specification-based robustness, which characterizes the effectiveness of a controller in meeting a specified system requirement.
We present an innovative two-layer simulation-based analysis framework designed to identify subtle robustness violations.
arXiv Detail & Related papers (2023-11-13T16:44:43Z) - Designing an attack-defense game: how to increase robustness of
financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z) - Learning Connectivity-Maximizing Network Configurations [123.01665966032014]
We propose a supervised learning approach with a convolutional neural network (CNN) that learns to place communication agents from an expert.
We demonstrate the performance of our CNN on canonical line and ring topologies, 105k randomly generated test cases, and larger teams not seen during training.
After training, our system produces connected configurations 2 orders of magnitude faster than the optimization-based scheme for teams of 10-20 agents.
arXiv Detail & Related papers (2021-12-14T18:59:01Z) - Improving Robustness of Reinforcement Learning for Power System Control
with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
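The adversary-MDP idea described above can be sketched with a toy tabular Q-learner. This is an illustrative simplification, not the paper's method: states are assumed to be discretized grid conditions, and actions index which component to attack.

```python
import random
from collections import defaultdict


class AdversaryQLearner:
    """Toy tabular Q-learning adversary (hypothetical): it learns which
    attack action yields the highest long-term disruption reward."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
        # Q-table mapping state -> list of action values, lazily initialized.
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.rng = random.Random(seed)

    def act(self, state):
        """Epsilon-greedy attack selection."""
        if self.rng.random() < self.eps:
            return self.rng.randrange(self.n_actions)
        values = self.q[state]
        return max(range(self.n_actions), key=values.__getitem__)

    def update(self, state, action, reward, next_state):
        """Standard one-step Q-learning update."""
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])
```

Alternating updates between such an adversary and the defending RL agent is the essence of adversarial training for control.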
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - Continual Competitive Memory: A Neural System for Online Task-Free
Lifelong Learning [91.3755431537592]
We propose a novel form of unsupervised learning, continual competitive memory (CCM).
The resulting neural system is shown to offer an effective approach for combating catastrophic forgetting in online continual classification problems.
We demonstrate that the proposed CCM system not only outperforms other competitive learning neural models but also yields performance that is competitive with several modern, state-of-the-art lifelong learning approaches.
arXiv Detail & Related papers (2021-06-24T20:12:17Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates guaranteeing that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
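The smoothing step underlying such certificates can be sketched in a few lines. This is only the Monte Carlo estimation part, under assumed names; deriving the actual reward lower bound is the paper's contribution and is not reproduced here.

```python
import random
import statistics


def smoothed_policy(policy, obs, sigma=0.1, n_samples=100, seed=0):
    """Illustrative Monte Carlo smoothing of a scalar-output policy:
    average the base policy's output over Gaussian-perturbed copies of
    the observation. Certificates then bound how much this average can
    change under a norm-bounded perturbation of `obs`."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        noisy = [x + rng.gauss(0.0, sigma) for x in obs]
        outputs.append(policy(noisy))
    return statistics.mean(outputs)
```

The smoothed output varies slowly in the input, which is what makes a provable robustness threshold possible.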
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Automated Adversary Emulation for Cyber-Physical Systems via
Reinforcement Learning [4.763175424744536]
We develop an automated, domain-aware approach to adversary emulation for cyber-physical systems.
We formulate a Markov Decision Process (MDP) model to determine an optimal attack sequence over a hybrid attack graph.
We apply model-based and model-free reinforcement learning (RL) methods to solve the discrete-continuous MDP in a tractable fashion.
arXiv Detail & Related papers (2020-11-09T18:44:29Z) - On the Generalization Properties of Adversarial Training [21.79888306754263]
This paper studies the generalization performance of a generic adversarial training algorithm.
A series of numerical studies are conducted to demonstrate how the smoothness and L1 penalization help improve the adversarial robustness of models.
arXiv Detail & Related papers (2020-08-15T02:32:09Z) - Falsification-Based Robust Adversarial Reinforcement Learning [13.467693018395863]
Falsification-based RARL (FRARL) is the first generic framework that integrates temporal-logic falsification into adversarial learning to improve policy robustness.
Our experimental results demonstrate that policies trained with a falsification-based adversary generalize better and show less violation of the safety specification in test scenarios.
arXiv Detail & Related papers (2020-07-01T18:32:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.