Advancing Attack-Resilient Scheduling of Integrated Energy Systems with
Demand Response via Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2311.17941v1
- Date: Tue, 28 Nov 2023 23:29:36 GMT
- Authors: Yang Li, Wenjie Ma, Yuanzheng Li, Sen Li, Zhe Chen
- Abstract summary: This paper proposes an innovative model-free resilience scheduling method based on state-adversarial deep reinforcement learning (DRL) for integrated demand response (IDR)-enabled IES.
We show that our method is capable of adequately addressing the uncertainties resulting from RES and loads, mitigating the impact of cyber-attacks on the scheduling strategy, and ensuring a stable demand supply for various energy sources.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimally scheduling multi-energy flow is an effective method to utilize
renewable energy sources (RES) and improve the stability and economy of
integrated energy systems (IES). However, stable demand-supply operation of an
IES is challenged by uncertainties arising from RES and loads, as well as by
the growing impact of cyber-attacks that accompanies the adoption of advanced
information and communication technologies. To address these challenges, this paper proposes an
innovative model-free resilience scheduling method based on state-adversarial
deep reinforcement learning (DRL) for integrated demand response (IDR)-enabled
IES. The proposed method designs an IDR program to explore the interaction
ability of electricity-gas-heat flexible loads. Additionally, a
state-adversarial Markov decision process (SA-MDP) model characterizes the
energy scheduling problem of IES under cyber-attack. The state-adversarial soft
actor-critic (SA-SAC) algorithm is proposed to mitigate the impact of
cyber-attacks on the scheduling strategy. Simulation results demonstrate that
our method is capable of adequately addressing the uncertainties resulting from
RES and loads, mitigating the impact of cyber-attacks on the scheduling
strategy, and ensuring a stable demand supply for various energy sources.
Moreover, the proposed method demonstrates resilience against cyber-attacks:
compared to the original soft actor-critic (SAC) algorithm, it achieves a 10%
improvement in economic performance under cyber-attack scenarios.
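The state-adversarial idea behind SA-MDP/SA-SAC can be illustrated with a minimal, self-contained sketch: an adversary searches a bounded neighborhood of the true observation for the perturbation that most degrades the agent's value, and the policy is then trained on such worst-case observations. The toy value function, the two-dimensional state, and all names below are illustrative assumptions, not the paper's actual SA-SAC implementation.

```python
# Hedged sketch of a state adversary: grid-search an l-infinity ball
# around the true observation for the point that minimizes the agent's
# value estimate. A real SA-SAC agent would use the critic network and
# gradient-based perturbations instead of this toy setup.
from itertools import product

def policy_value(state):
    """Toy stand-in for the critic's value estimate of a state.
    Value peaks when both state components are near 1.0."""
    return -((state[0] - 1.0) ** 2 + (state[1] - 1.0) ** 2)

def worst_case_perturbation(state, epsilon, steps=5):
    """Search the l-inf ball of radius epsilon around `state` for the
    perturbed observation that minimizes the agent's value."""
    offsets = [-epsilon + 2 * epsilon * i / (steps - 1) for i in range(steps)]
    candidates = [
        tuple(s + d for s, d in zip(state, delta))
        for delta in product(offsets, repeat=len(state))
    ]
    return min(candidates, key=policy_value)

true_state = (1.0, 1.0)
attacked = worst_case_perturbation(true_state, epsilon=0.1)
print(attacked)  # a corner of the l-inf ball, where the value is lowest
```

Training the scheduling policy on observations perturbed this way is what makes it robust to state (sensor or measurement) attacks.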
Related papers
- TTP-Based Cyber Resilience Index: A Probabilistic Quantitative Approach to Measure Defence Effectiveness Against Cyber Attacks [0.36832029288386137]
This paper introduces the Cyber Resilience Index (CRI), a TTP-based probabilistic approach to quantifying an organisation's defence effectiveness against cyber-attacks (campaigns).
We present a mathematical model that translates complex threat intelligence into an actionable, unified metric similar to a stock market index, that executives can understand and interact with while teams can act upon.
arXiv Detail & Related papers (2024-06-27T17:51:48Z)
- GAN-GRID: A Novel Generative Attack on Smart Grid Stability Prediction [53.2306792009435]
We propose GAN-GRID, a novel adversarial attack targeting the stability prediction system of a smart grid, tailored to real-world constraints.
Our findings reveal that an adversary armed solely with the stability model's output, devoid of data or model knowledge, can craft data classified as stable with an Attack Success Rate (ASR) of 0.99.
arXiv Detail & Related papers (2024-05-20T14:43:46Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Optimal Planning of Hybrid Energy Storage Systems using Curtailed Renewable Energy through Deep Reinforcement Learning [0.0]
We propose a sophisticated deep reinforcement learning (DRL) methodology with a policy-based algorithm to plan energy storage systems (ESS).
A quantitative performance comparison proved that the DRL agent outperforms the scenario-based optimization (SO) algorithm.
The corresponding results confirmed that the DRL agent learns in a manner similar to a human expert, suggesting that the proposed methodology can be applied reliably.
arXiv Detail & Related papers (2022-12-12T02:24:50Z)
- Risk-Aware Control and Optimization for High-Renewable Power Grids [11.352041887858322]
The RAMC project investigates how to move from this deterministic setting into a risk-aware framework.
This paper reviews how RAMC approaches risk-aware market clearing and presents some of its innovations in uncertainty quantification, optimization, and machine learning.
arXiv Detail & Related papers (2022-04-02T22:58:08Z)
- Learning Optimization Proxies for Large-Scale Security-Constrained Economic Dispatch [11.475805963049808]
Security-Constrained Economic Dispatch (SCED) is a fundamental optimization model for Transmission System Operators (TSOs).
This paper proposes to learn an optimization proxy for SCED, i.e., a Machine Learning (ML) model that can predict an optimal solution for SCED in milliseconds.
Numerical experiments on the French transmission system demonstrate the approach's ability to produce solutions within a time frame compatible with real-time operations.
arXiv Detail & Related papers (2021-12-27T00:44:06Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose using adversarial training to increase the robustness of RL agents against attacks and to avoid infeasible operational decisions.
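The adversarial-training recipe summarized in this entry (learn an attack on the current policy, then harden the policy against it) can be sketched in a toy setting. The 1-D setpoint-tracking task, the linear policy, and all names below are hypothetical illustrations, not the paper's actual environment or algorithm.

```python
# Hedged sketch of adversarial training: alternate between an adversary
# step (a bounded observation shift that maximizes the control cost) and
# a defender step (update the policy on the attacked observation).

def act(theta, obs):
    """Toy linear policy: push the observed state toward the setpoint 1.0."""
    return theta * (1.0 - obs)

def cost(obs, action):
    """Squared distance of the next state from the setpoint."""
    return (obs + action - 1.0) ** 2

def best_attack(theta, obs, eps):
    """Adversary step: bounded observation shift that maximizes the cost."""
    return max((obs - eps, obs, obs + eps),
               key=lambda o: cost(o, act(theta, o)))

def adversarial_train(theta, obs, eps, lr=0.5, iters=50):
    """Defender step: update the policy parameter on attacked observations
    via a finite-difference gradient (a stand-in for a real RL update)."""
    for _ in range(iters):
        o_adv = best_attack(theta, obs, eps)
        grad = (cost(o_adv, act(theta + 1e-4, o_adv))
                - cost(o_adv, act(theta - 1e-4, o_adv))) / 2e-4
        theta -= lr * grad
    return theta

theta = adversarial_train(theta=0.0, obs=0.5, eps=0.1)
# theta converges near 1.0: the policy tracks the setpoint even when
# the adversary shifts the observation it acts on.
```

Because the policy is optimized against the worst perturbation inside the budget, it remains feasible under any attack within that budget, which is the robustness property the entry describes.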
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
- RIS Enhanced Massive Non-orthogonal Multiple Access Networks: Deployment and Passive Beamforming Design [116.88396201197533]
A novel framework is proposed for the deployment and passive beamforming design of a reconfigurable intelligent surface (RIS).
The problem of joint deployment, phase shift design, as well as power allocation is formulated for maximizing the energy efficiency.
A novel long short-term memory (LSTM) based echo state network (ESN) algorithm is proposed to predict users' tele-traffic demand by leveraging a real dataset.
A decaying double deep Q-network (D3QN) based position-acquisition and phase-control algorithm is proposed to solve the joint problem of deployment and design of the RIS.
arXiv Detail & Related papers (2020-01-28T14:37:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.