Applications of Reinforcement Learning in Deregulated Power Market: A
Comprehensive Review
- URL: http://arxiv.org/abs/2205.08369v2
- Date: Fri, 12 May 2023 00:48:13 GMT
- Title: Applications of Reinforcement Learning in Deregulated Power Market: A
Comprehensive Review
- Authors: Ziqing Zhu, Ze Hu, Ka Wing Chan, Siqi Bu, Bin Zhou, Shiwei Xia
- Abstract summary: Reinforcement Learning is an emerging machine learning technique with advantages over conventional optimization tools.
This paper presents a review of RL applications in deregulated power market operation, including bidding and dispatching strategy optimization.
Some RL techniques with great potential for deployment in bidding and dispatching problems are recommended and discussed.
- Score: 7.2090237123481575
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing penetration of renewable generation, along with the
deregulation and marketization of the power industry, is transforming power
market operation paradigms. Optimal bidding strategies and dispatching
methodologies under these new paradigms are prioritized concerns for both
market participants and power system operators, who face obstacles of
uncertainty, computational efficiency, and the need for far-sighted
decision-making. To tackle these problems, Reinforcement Learning (RL), an
emerging machine learning technique with advantages over conventional
optimization tools, is playing an increasingly significant role in both
academia and industry. This paper presents a comprehensive review of RL
applications in deregulated power market operation, including bidding and
dispatching strategy optimization, based on more than 150 carefully selected
publications. For each application, apart from a paradigmatic summary of the
generalized methodology, in-depth discussions of applicability and of obstacles
in deploying RL techniques are also provided. Finally, some RL techniques with
great potential for deployment in bidding and dispatching problems are
recommended and discussed.
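The bidding-strategy optimization surveyed in the abstract can be illustrated with a minimal tabular Q-learning sketch. The market model, prices, and parameters below are illustrative assumptions for exposition, not taken from the paper:

```python
import random

# Toy single-agent bidding environment (illustrative, not the paper's model):
# a generator with marginal cost 20 $/MWh submits a bid price each round and is
# dispatched (and paid the clearing price) only if its bid clears the market.
BIDS = [20, 25, 30, 35, 40]          # discrete candidate bid prices ($/MWh)
COST, QTY = 20, 10                   # marginal cost ($/MWh), quantity (MWh)

def market_round(bid, rng):
    clearing = rng.uniform(15, 45)   # exogenous random clearing price
    return (clearing - COST) * QTY if bid <= clearing else 0.0

def train(episodes=20000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {b: 0.0 for b in BIDS}       # stateless Q-table: expected profit per bid
    for _ in range(episodes):
        # epsilon-greedy action selection over bid prices
        bid = rng.choice(BIDS) if rng.random() < eps else max(q, key=q.get)
        reward = market_round(bid, rng)
        q[bid] += alpha * (reward - q[bid])   # bandit-style Q update
    return q

q = train()
best_bid = max(q, key=q.get)
```

The agent learns expected profit per bid purely from market feedback, with no model of the clearing process, which is the core appeal of RL in uncertain market environments.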
Related papers
- Comparison of Model Predictive Control and Proximal Policy Optimization for a 1-DOF Helicopter System [0.7499722271664147]
This study conducts a comparative analysis of Model Predictive Control (MPC) and Proximal Policy Optimization (PPO), a Deep Reinforcement Learning (DRL) algorithm, applied to a Quanser Aero 2 system.
PPO excels in rise time and adaptability, making it a promising approach for applications requiring rapid response.
arXiv Detail & Related papers (2024-08-28T08:35:34Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and
Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances using generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in
Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - Harnessing Deep Q-Learning for Enhanced Statistical Arbitrage in
High-Frequency Trading: A Comprehensive Exploration [0.0]
Reinforcement Learning (RL) is a branch of machine learning where agents learn by interacting with their environment.
This paper dives deep into the integration of RL in statistical arbitrage strategies tailored for High-Frequency Trading (HFT) scenarios.
Through extensive simulations and backtests, our research reveals that RL not only enhances the adaptability of trading strategies but also shows promise in improving profitability metrics and risk-adjusted returns.
arXiv Detail & Related papers (2023-09-13T06:15:40Z) - Domain-adapted Learning and Imitation: DRL for Power Arbitrage [1.6874375111244329]
We propose a collaborative dual-agent reinforcement learning approach for bi-level simulation and optimization of European power arbitrage trading.
We introduce two new implementations designed to incorporate domain-specific knowledge by imitating the trading behaviours of power traders.
Our study demonstrates that by leveraging domain expertise in a general learning problem, the performance can be improved substantially.
arXiv Detail & Related papers (2023-01-19T23:36:23Z) - Machine learning applications for electricity market agent-based models:
A systematic literature review [68.8204255655161]
Agent-based simulations are used to better understand the dynamics of the electricity market.
Agent-based models provide the opportunity to integrate machine learning and artificial intelligence.
We review 55 papers published between 2016 and 2021 which focus on machine learning applied to agent-based electricity market models.
arXiv Detail & Related papers (2022-06-05T14:52:26Z) - Risk-Aware Control and Optimization for High-Renewable Power Grids [11.352041887858322]
The RAMC project investigates how to move from a deterministic setting into a risk-aware framework.
This paper reviews how RAMC approaches risk-aware market clearing and presents some of its innovations in uncertainty quantification, optimization, and machine learning.
arXiv Detail & Related papers (2022-04-02T22:58:08Z) - Learning Optimization Proxies for Large-Scale Security-Constrained
Economic Dispatch [11.475805963049808]
Security-Constrained Economic Dispatch (SCED) is a fundamental optimization model for Transmission System Operators (TSOs).
This paper proposes to learn an optimization proxy for SCED, i.e., a Machine Learning (ML) model that can predict an optimal solution for SCED in milliseconds.
Numerical experiments on the French transmission system demonstrate the approach's ability to produce solutions within a time frame compatible with real-time operations.
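The optimization-proxy idea can be sketched as follows: solve many instances exactly offline, then replace the solver with a fast learned lookup online. The two-generator merit-order dispatch and the nearest-neighbour proxy below are illustrative assumptions, far simpler than the SCED model in the paper:

```python
import random

# Toy "optimization proxy" for economic dispatch (illustrative assumptions:
# two generators, linear costs, no network or security constraints).
CAP = [50.0, 80.0]        # generator capacities (MW)
COST = [10.0, 30.0]       # marginal costs ($/MWh); generator 0 is cheaper

def solve_dispatch(load):
    """Exact merit-order dispatch: fill the cheap unit first."""
    g0 = min(load, CAP[0])
    g1 = min(load - g0, CAP[1])
    return (g0, g1)

# Offline phase: sample loads and solve each instance exactly (the slow step).
rng = random.Random(1)
data = [(load, solve_dispatch(load))
        for load in (rng.uniform(0, CAP[0] + CAP[1]) for _ in range(2000))]

def proxy(load):
    """Online phase: 1-nearest-neighbour lookup instead of re-solving."""
    _, sol = min(data, key=lambda rec: abs(rec[0] - load))
    return sol

pred = proxy(65.0)
exact = solve_dispatch(65.0)
```

In the paper's setting the proxy is a trained ML model rather than a lookup table, but the offline-solve / online-predict split is the same.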
arXiv Detail & Related papers (2021-12-27T00:44:06Z) - Improving Robustness of Reinforcement Learning for Power System Control
with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
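The adversarial-training idea can be sketched on a toy tabular task: the attacker perturbs the agent's observation, and training under that perturbation hardens the policy. The corridor environment and the one-step observation attack below are illustrative stand-ins for the adversary MDP, not the paper's actual setup:

```python
import random

# Minimal sketch of adversarial training for a tabular RL agent (illustrative:
# a 1-D corridor with the goal at the right end; the "attack" shifts the
# observed position by at most one state).
N = 6                                   # states 0..5, goal at state 5

def step(s, a):                         # action a in {-1, +1}
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == N - 1 else -0.01)

def worst_case_obs(s, q):
    """Adversary: report the nearby state that minimises the agent's value."""
    cands = [max(0, s - 1), s, min(N - 1, s + 1)]
    return min(cands, key=lambda o: max(q[o].values()))

def train(adversarial, episodes=3000, alpha=0.2, gamma=0.95, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {s: {-1: 0.0, 1: 0.0} for s in range(N)}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            # During adversarial training the agent acts on attacked observations.
            obs = worst_case_obs(s, q) if adversarial else s
            if rng.random() < eps:
                a = rng.choice([-1, 1])
            else:
                a = max(q[obs], key=q[obs].get)
            s2, r = step(s, a)
            q[obs][a] += alpha * (r + gamma * max(q[s2].values()) - q[obs][a])
            s = s2
            if s == N - 1:
                break
    return q
```

Training with `adversarial=True` exposes the agent to worst-case observation shifts, so the learned policy degrades less when the same attack is applied at deployment.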
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - Universal Trading for Order Execution with Oracle Policy Distillation [99.57416828489568]
We propose a novel universal trading policy optimization framework to bridge the gap between the noisy and imperfect market states and the optimal action sequences for order execution.
We show that our framework can better guide the learning of the common policy towards practically optimal execution by an oracle teacher with perfect information.
arXiv Detail & Related papers (2021-01-28T05:52:18Z) - Demand Responsive Dynamic Pricing Framework for Prosumer Dominated
Microgrids using Multiagent Reinforcement Learning [59.28219519916883]
This paper proposes a new multiagent Reinforcement Learning-based decision-making environment for implementing a Real-Time Pricing (RTP) DR technique in a prosumer-dominated microgrid.
The proposed technique addresses several shortcomings common to traditional DR methods and provides significant economic benefits to the grid operator and prosumers.
arXiv Detail & Related papers (2020-09-23T01:44:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.