Risk-Aware Control and Optimization for High-Renewable Power Grids
- URL: http://arxiv.org/abs/2204.00950v1
- Date: Sat, 2 Apr 2022 22:58:08 GMT
- Title: Risk-Aware Control and Optimization for High-Renewable Power Grids
- Authors: Neil Barry, Minas Chatzos, Wenbo Chen, Dahye Han, Chaofan Huang,
Roshan Joseph, Michael Klamkin, Seonho Park, Mathieu Tanneau, Pascal Van
Hentenryck, Shangkun Wang, Hanyu Zhang and Haoruo Zhao
- Abstract summary: The RAMC project investigates how to move from the existing deterministic setting into a risk-aware framework.
This paper reviews how RAMC approaches risk-aware market clearing and presents some of its innovations in uncertainty quantification, optimization, and machine learning.
- Score: 11.352041887858322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The transition of the electrical power grid from fossil fuels to renewable
sources of energy raises fundamental challenges to the market-clearing
algorithms that drive its operations. Indeed, the increased stochasticity in
load and the volatility of renewable energy sources have led to significant
increases in prediction errors, affecting the reliability and efficiency of
existing deterministic optimization models. The RAMC project was initiated to
investigate how to move from this deterministic setting into a risk-aware
framework where uncertainty is quantified explicitly and incorporated in the
market-clearing optimizations. Risk-aware market-clearing raises challenges on
its own, primarily from a computational standpoint. This paper reviews how RAMC
approaches risk-aware market clearing and presents some of its innovations in
uncertainty quantification, optimization, and machine learning. Experimental
results on real networks are presented.
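To make the risk-aware setting concrete, the following is a minimal, generic sketch of a chance-constrained economic dispatch with reserves; it is illustrative only and does not reproduce the specific market-clearing formulations developed in the RAMC project.

    \begin{aligned}
    \min_{p,\,r}\quad & \textstyle\sum_{g} \big( c_g(p_g) + c^r_g(r_g) \big) \\
    \text{s.t.}\quad  & \textstyle\sum_{g} p_g = \mathbb{E}\big[\tilde{d}\big], \\
                      & \mathbb{P}\Big( \textstyle\sum_{g} r_g \ \ge\ \tilde{d} - \mathbb{E}\big[\tilde{d}\big] \Big) \ \ge\ 1 - \epsilon, \\
                      & |f_\ell(p)| \le \bar{f}_\ell \quad \forall \ell, \qquad
                        p_g^{\min} \le p_g \le p_g^{\max}, \quad 0 \le r_g \le \bar{r}_g \quad \forall g,
    \end{aligned}

where \tilde{d} is the uncertain system net load (demand minus renewable generation), r_g are reserves, f_\ell are line flows, and \epsilon is the violation probability tolerated by the risk policy.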
Related papers
- GAN-GRID: A Novel Generative Attack on Smart Grid Stability Prediction [53.2306792009435]
We propose GAN-GRID, a novel adversarial attack targeting the stability prediction system of a smart grid, tailored to real-world constraints.
Our findings reveal that an adversary armed solely with the stability model's output, devoid of data or model knowledge, can craft data classified as stable with an Attack Success Rate (ASR) of 0.99.
arXiv Detail & Related papers (2024-05-20T14:43:46Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC), that can be applied to either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
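For context, a simplified generic form of an uncertainty Bellman equation (not the exact equation derived in the paper above) propagates local epistemic uncertainty through the usual Bellman recursion:

    U^\pi(s,a) \;=\; u(s,a) \;+\; \gamma^2 \, \mathbb{E}_{s' \sim P(\cdot \mid s,a),\; a' \sim \pi(\cdot \mid s')} \big[ U^\pi(s',a') \big],

where u(s,a) is the one-step uncertainty induced by the posterior over the model and rewards. Earlier formulations show that the fixed point of such a recursion upper-bounds the posterior variance of Q^\pi(s,a); the paper above derives a version whose solution converges to that variance exactly.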
- Enhancing Cyber-Resilience in Integrated Energy System Scheduling with Demand Response Using Deep Reinforcement Learning [11.223780653355437]
This paper proposes an innovative model-free resilience scheduling method based on state-adversarial deep reinforcement learning (DRL).
The proposed method designs an IDR program to explore the interaction ability of electricity-gas-heat flexible loads.
The state-adversarial soft actor-critic (SA-SAC) algorithm is proposed to mitigate the impact of cyber-attacks on the scheduling strategy.
arXiv Detail & Related papers (2023-11-28T23:29:36Z)
- A Stochastic Online Forecast-and-Optimize Framework for Real-Time Energy Dispatch in Virtual Power Plants under Uncertainty [18.485617498705736]
We propose a real-time uncertainty-aware energy dispatch framework, which is composed of two key elements.
The proposed framework can rapidly adapt to the real-time data distribution and target uncertainties caused by data drift, model discrepancy, and environment perturbations in the control process.
The framework won the championship in the CityLearn Challenge 2022, which provided an influential opportunity to investigate the potential of AI applications in the energy domain.
arXiv Detail & Related papers (2023-09-15T00:04:00Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Learning Optimization Proxies for Large-Scale Security-Constrained Economic Dispatch [11.475805963049808]
Security-Constrained Economic Dispatch (SCED) is a fundamental optimization model for Transmission System Operators (TSOs).
This paper proposes to learn an optimization proxy for SCED, i.e., a Machine Learning (ML) model that can predict an optimal solution for SCED in milliseconds.
Numerical experiments on the French transmission system demonstrate the approach's ability to produce accurate SCED approximations within a time frame that is compatible with real-time operations.
arXiv Detail & Related papers (2021-12-27T00:44:06Z)
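As a rough illustration of the optimization-proxy idea above, the sketch below trains a small feed-forward network to map a load vector to generator set-points; the dimensions, architecture, loss, and synthetic data are placeholders and do not reproduce the paper's actual model or training pipeline.

    import torch
    import torch.nn as nn

    # Hypothetical sizes: 100 load buses (inputs), 30 generators (outputs).
    N_LOADS, N_GENS = 100, 30

    class SCEDProxy(nn.Module):
        """Toy proxy: predicts normalized generator set-points from a load vector."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_LOADS, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, N_GENS), nn.Sigmoid(),  # outputs in [0, 1]; rescale to [p_min, p_max]
            )

        def forward(self, load):
            return self.net(load)

    # Placeholder training pairs; in practice these come from solving many SCED
    # instances offline with an optimization solver.
    loads = torch.rand(1024, N_LOADS)
    optimal_dispatch = torch.rand(1024, N_GENS)

    model = SCEDProxy()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(loads), optimal_dispatch)
        loss.backward()
        optimizer.step()

Once trained, a single forward pass yields a dispatch estimate in milliseconds, which is what makes such proxies attractive for real-time operations.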
- A Probabilistic Forecast-Driven Strategy for a Risk-Aware Participation in the Capacity Firming Market [30.828362290032935]
This paper addresses the energy management of a grid-connected renewable generation plant and a battery energy storage device in the capacity firming market.
A recently developed class of deep learning models known as normalizing flows is used to generate quantile forecasts of renewable generation.
arXiv Detail & Related papers (2021-05-28T13:13:07Z)
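To show how a generative forecaster such as a normalizing flow can be turned into quantile forecasts, the sketch below draws scenarios from a stand-in sampler and takes empirical quantiles; the flow itself is assumed to be already trained and is replaced here by a placeholder.

    import numpy as np

    def sample_renewable_scenarios(n_scenarios: int, horizon: int) -> np.ndarray:
        """Stand-in for sampling generation trajectories from a trained normalizing
        flow conditioned on weather forecasts (placeholder: lognormal noise)."""
        rng = np.random.default_rng(seed=0)
        return rng.lognormal(mean=0.0, sigma=0.3, size=(n_scenarios, horizon))

    # Reduce sampled scenarios to the quantiles consumed by the bidding strategy.
    scenarios = sample_renewable_scenarios(n_scenarios=500, horizon=24)
    quantile_forecasts = np.quantile(scenarios, q=[0.1, 0.5, 0.9], axis=0)  # shape (3, 24)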
- A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning based energy market for a prosumer dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.