Neural Fitted Q Iteration based Optimal Bidding Strategy in Real Time
Reactive Power Market
- URL: http://arxiv.org/abs/2101.02456v1
- Date: Thu, 7 Jan 2021 09:44:00 GMT
- Title: Neural Fitted Q Iteration based Optimal Bidding Strategy in Real Time
Reactive Power Market
- Authors: Jahnvi Patel, Devika Jay, Balaraman Ravindran, K. Shanti Swarup
- Abstract summary: In real time electricity markets, the objective of generation companies while bidding is to maximize their profit.
Similar studies in reactive power markets have not been reported so far because network voltage operating conditions have a greater impact on reactive power markets than on active power markets.
The assumption of a suitable probability distribution function is unrealistic, making the strategies adopted in active power markets unsuitable for learning optimal bids in reactive power market mechanisms.
- Score: 16.323822608442836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real time electricity markets, the objective of generation companies while
bidding is to maximize their profit. The strategies for learning optimal
bidding have been formulated through game theoretical approaches and stochastic
optimization problems. Similar studies in reactive power markets have not been
reported so far because network voltage operating conditions have a greater
impact on reactive power markets than on active power markets.
Contrary to active power markets, the bids of rivals are not directly related
to fuel costs in reactive power markets. Hence, the assumption of a suitable
probability distribution function is unrealistic, making the strategies adopted
in active power markets unsuitable for learning optimal bids in reactive power
market mechanisms. Therefore, a bidding strategy is to be learnt from market
observations and experience in imperfect oligopolistic competition-based
markets. In this paper, pioneering work on learning optimal bidding strategies
from observation and experience in a three-stage reactive power market is
reported.
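The core idea of the paper, learning a bidding policy offline from a batch of observed market transitions rather than from an assumed distribution of rival bids, can be illustrated with a minimal Neural Fitted Q Iteration sketch. The state encoding, the discretized bid actions, the network size, and the `transitions` container below are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of Neural Fitted Q Iteration (NFQ) over a fixed batch of
# observed market transitions. All names and sizes here are illustrative
# assumptions, not the paper's actual formulation.

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Small MLP approximating Q(s, a) over a discretized set of bid actions."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)


def neural_fitted_q(transitions, state_dim, n_actions,
                    gamma=0.95, iterations=50, epochs=200, lr=1e-3):
    """transitions: list of (state, action_index, reward, next_state) tuples
    collected from market observations (an offline batch)."""
    q_net = QNetwork(state_dim, n_actions)
    opt = torch.optim.Adam(q_net.parameters(), lr=lr)

    s = torch.tensor([t[0] for t in transitions], dtype=torch.float32)
    a = torch.tensor([t[1] for t in transitions], dtype=torch.long)
    r = torch.tensor([t[2] for t in transitions], dtype=torch.float32)
    s_next = torch.tensor([t[3] for t in transitions], dtype=torch.float32)

    for _ in range(iterations):
        # Build fixed regression targets from the current Q estimate.
        with torch.no_grad():
            target = r + gamma * q_net(s_next).max(dim=1).values
        # Re-fit the network to the (state, action) -> target pairs.
        for _ in range(epochs):
            q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q_pred, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return q_net
```

Because the targets are recomputed only once per outer iteration and the network is re-fit on the whole batch, the procedure stays stable on small offline datasets, which is what makes fitted Q methods attractive when market interactions are expensive to collect.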
Related papers
- Temporal-Aware Deep Reinforcement Learning for Energy Storage Bidding in
Energy and Contingency Reserve Markets [13.03742132147551]
We develop a novel BESS joint bidding strategy that utilizes deep reinforcement learning (DRL) to bid in the spot and contingency frequency control ancillary services markets.
Unlike conventional "black-box" DRL models, our approach is more interpretable and provides valuable insights into the temporal bidding behavior of BESS.
arXiv Detail & Related papers (2024-02-29T12:41:54Z) - HireVAE: An Online and Adaptive Factor Model Based on Hierarchical and
Regime-Switch VAE [113.47287249524008]
How to build a factor model that can conduct stock prediction in an online and adaptive setting is still an open question.
We propose the first deep learning based online and adaptive factor model, HireVAE, at the core of which is a hierarchical latent space that embeds the relationship between the market situation and stock-wise latent factors.
Across four commonly used real stock market benchmarks, the proposed HireVAE demonstrates superior performance in terms of active returns over previous methods.
arXiv Detail & Related papers (2023-06-05T12:58:13Z) - Approximating Energy Market Clearing and Bidding With Model-Based
Reinforcement Learning [0.0]
Multi-agent reinforcement learning (MARL) is a promising new approach to predicting the expected profit-maximizing behavior of energy market participants in simulation.
We provide a model of the energy market to a basic MARL algorithm in the form of a learned OPF approximation and explicit market rules.
Our experiments demonstrate that the model reduces training time by about one order of magnitude, at the cost of slightly worse performance.
arXiv Detail & Related papers (2023-03-03T08:26:22Z) - Transferable Energy Storage Bidder [0.0]
This paper presents a novel, versatile, and transferable approach combining model-based optimization with a convolutional long short-term memory network for energy storage.
We test our proposed approach using historical prices from New York State, showing it achieves state-of-the-art results.
We also test a transfer learning approach by pre-training the bidding model using New York data and applying it to arbitrage in Queensland, Australia.
arXiv Detail & Related papers (2023-01-02T01:04:02Z) - Proximal Policy Optimization Based Reinforcement Learning for Joint
Bidding in Energy and Frequency Regulation Markets [6.175137568373435]
Energy arbitrage can be a significant source of revenue for the battery energy storage system (BESS).
It is crucial for the BESS to carefully decide how much capacity to assign to each market to maximize the total profit under uncertain market conditions.
This paper formulates the bidding problem of the BESS as a Markov Decision Process, which enables the BESS to participate in both the spot market and the FCAS market to maximize profit.
arXiv Detail & Related papers (2022-12-13T13:07:31Z) - Machine learning applications for electricity market agent-based models:
A systematic literature review [68.8204255655161]
Agent-based simulations are used to better understand the dynamics of the electricity market.
Agent-based models provide the opportunity to integrate machine learning and artificial intelligence.
We review 55 papers published between 2016 and 2021 which focus on machine learning applied to agent-based electricity market models.
arXiv Detail & Related papers (2022-06-05T14:52:26Z) - Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses precisely on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z) - A Data-Driven Convergence Bidding Strategy Based on Reverse Engineering
of Market Participants' Performance: A Case of California ISO [0.0]
Convergence bidding, a.k.a. virtual bidding, has been widely adopted in wholesale electricity markets in recent years.
It provides opportunities for market participants to arbitrage on the difference between the day-ahead market locational marginal prices and the real-time market locational marginal prices.
We learn, characterize, and evaluate different types of convergence bidding strategies that are currently used by market participants.
arXiv Detail & Related papers (2021-09-19T22:19:10Z) - Exploring market power using deep reinforcement learning for intelligent
bidding strategies [69.3939291118954]
We find that capacity has an impact on the average electricity price in a single year.
The values of ~25% and ~11% may vary between market structures and countries.
We observe that using a market cap of approximately double the average market price significantly reduces this effect and maintains a competitive market.
arXiv Detail & Related papers (2020-11-08T21:07:42Z) - Learning Strategies in Decentralized Matching Markets under Uncertain
Preferences [91.3755431537592]
We study the problem of decision-making in the setting of a scarcity of shared resources when the preferences of agents are unknown a priori.
Our approach is based on the representation of preferences in a reproducing kernel Hilbert space.
We derive optimal strategies that maximize agents' expected payoffs.
arXiv Detail & Related papers (2020-10-29T03:08:22Z) - A Deep Reinforcement Learning Framework for Continuous Intraday Market
Bidding [69.37299910149981]
A key component for the successful integration of renewable energy sources is the use of energy storage.
We propose a novel modelling framework for the strategic participation of energy storage in the European continuous intraday market.
A distributed version of the fitted Q algorithm is chosen to solve this problem due to its sample efficiency.
Results indicate that the agent converges to a policy that achieves on average higher total revenues than the benchmark strategy.
arXiv Detail & Related papers (2020-04-13T13:50:13Z)
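Several of the storage-bidding papers above cast the problem as a Markov Decision Process over market prices. The toy environment below is a hypothetical sketch of that framing for a single battery arbitraging a given price series; the capacity, power, efficiency, and price values are placeholders and do not come from any of the cited works.

```python
# Hypothetical toy MDP for battery energy arbitrage against a price series.
# All parameters (capacity, power limit, efficiency, prices) are placeholders.

import numpy as np


class BatteryArbitrageEnv:
    """State: (current price, state of charge). Action: -1 discharge, 0 idle, 1 charge."""

    def __init__(self, prices, capacity_mwh=10.0, power_mw=2.5, efficiency=0.9):
        self.prices = np.asarray(prices, dtype=float)
        self.capacity = capacity_mwh
        self.power = power_mw   # energy moved per step (1-hour steps assumed)
        self.eta = efficiency   # charging efficiency applied on energy stored
        self.reset()

    def reset(self):
        self.t = 0
        self.soc = 0.5 * self.capacity
        return np.array([self.prices[self.t], self.soc])

    def step(self, action):
        price = self.prices[self.t]
        if action == 1:     # charge: buy energy at the current price
            energy = min(self.power, self.capacity - self.soc)
            self.soc += energy * self.eta
            reward = -price * energy
        elif action == -1:  # discharge: sell energy at the current price
            energy = min(self.power, self.soc)
            self.soc -= energy
            reward = price * energy
        else:               # idle
            reward = 0.0
        self.t += 1
        done = self.t >= len(self.prices) - 1
        obs = np.array([self.prices[self.t], self.soc])
        return obs, reward, done
```

A batch of (state, action, reward, next_state) tuples generated by rolling this environment forward under any exploratory policy could, in principle, feed a fitted Q procedure like the sketch given after the abstract above.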