Deep Reinforcement Learning-Based Bidding Strategies for Prosumers Trading in Double Auction-Based Transactive Energy Market
- URL: http://arxiv.org/abs/2502.15774v1
- Date: Sun, 16 Feb 2025 21:38:21 GMT
- Title: Deep Reinforcement Learning-Based Bidding Strategies for Prosumers Trading in Double Auction-Based Transactive Energy Market
- Authors: Jun Jiang, Yuanliang Li, Luyang Hou, Mohsen Ghafouri, Peng Zhang, Jun Yan, Yuhong Liu
- Abstract summary: A community-based double auction market is considered a promising TEM that can encourage prosumers to participate and maximize social welfare. In this study, we propose a double auction-based TEM with multiple DERs-equipped prosumers to transparently and efficiently manage energy transactions. We also propose a deep reinforcement learning (DRL) model with distributed learning and execution to ensure the scalability and privacy of the market environment.
- Score: 10.071307102216371
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the large number of prosumers deploying distributed energy resources (DERs), integrating these prosumers into a transactive energy market (TEM) is a trend for the future smart grid. A community-based double auction market is considered a promising TEM that can encourage prosumers to participate and maximize social welfare. However, the traditional TEM is challenging to model explicitly due to the random bidding behavior of prosumers and uncertainties caused by the energy operation of DERs. Furthermore, although reinforcement learning algorithms provide a model-free solution to optimize prosumers' bidding strategies, their use in TEM is still challenging due to their scalability, stability, and privacy protection limitations. To address the above challenges, in this study, we design a double auction-based TEM with multiple DERs-equipped prosumers to transparently and efficiently manage energy transactions. We also propose a deep reinforcement learning (DRL) model with distributed learning and execution to ensure the scalability and privacy of the market environment. Additionally, the design of two bidding actions (i.e., bidding price and quantity) optimizes the bidding strategies for prosumers. Simulation results show that (1) the designed TEM and DRL model are robust; (2) the proposed DRL model effectively balances the energy payment and comfort satisfaction for prosumers and outperforms the state-of-the-art methods in optimizing the bidding strategies.
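A community-based double auction of the kind the abstract describes can be illustrated with a minimal clearing sketch: buy bids are sorted by descending price, sell offers by ascending price, and quantities are matched while trade remains mutually beneficial. The midpoint pricing rule below is a common textbook choice and an assumption here, not necessarily the paper's exact market design.

```python
# Minimal sketch of a community double auction clearing rule, assuming
# quantity matching and midpoint pricing; the paper's exact mechanism
# may differ.

def clear_double_auction(buy_bids, sell_offers):
    """buy_bids / sell_offers: lists of (price, quantity) tuples."""
    buyers = sorted(buy_bids, key=lambda b: -b[0])     # highest willingness first
    sellers = sorted(sell_offers, key=lambda s: s[0])  # cheapest supply first
    trades = []
    bi = si = 0
    b_filled = s_filled = 0.0
    while bi < len(buyers) and si < len(sellers):
        b_price, b_rem = buyers[bi][0], buyers[bi][1] - b_filled
        s_price, s_rem = sellers[si][0], sellers[si][1] - s_filled
        if b_price < s_price:
            break  # no more mutually beneficial trades
        q = min(b_rem, s_rem)
        trades.append(((b_price + s_price) / 2.0, q))  # midpoint clearing price
        b_filled += q
        s_filled += q
        if b_filled >= buyers[bi][1]:
            bi, b_filled = bi + 1, 0.0
        if s_filled >= sellers[si][1]:
            si, s_filled = si + 1, 0.0
    return trades
```

With bids [(10, 5), (6, 3)] against offers [(4, 4), (8, 2)], the first buyer clears 4 units at 7.0 and 1 unit at 9.0, then the second buyer is priced out.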
Related papers
- Vision-Language Navigation with Energy-Based Policy [66.04379819772764]
Vision-language navigation (VLN) requires an agent to execute actions following human instructions.
We propose an Energy-based Navigation Policy (ENP) to model the joint state-action distribution.
ENP achieves promising performances on R2R, REVERIE, RxR, and R2R-CE.
arXiv Detail & Related papers (2024-10-18T08:01:36Z) - Reinforcement Learning Based Bidding Framework with High-dimensional Bids in Power Markets [3.8066343577384796]
We propose a framework to fully utilize HDBs for RL-based bidding methods.
First, we employ a special type of neural network called Neural Network Supply Functions (NNSFs) to generate HDBs in the form of N price-power pairs.
Second, we embed the NNSF into a Markov Decision Process (MDP) to make it compatible with most existing RL methods.
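The NNSF idea of emitting N price-power pairs can be sketched with a toy output head: positive increments (via softplus) accumulated with a cumulative sum guarantee monotone non-decreasing price and power coordinates, i.e., a valid supply curve. The network weights and dimensions below are random placeholders, not the paper's architecture.

```python
import numpy as np

# Hypothetical sketch of an NNSF-style output head: a small network maps a
# market state to N (price, power) pairs that form a monotone supply curve.
# All weights and sizes are illustrative placeholders.

rng = np.random.default_rng(0)
N, state_dim, hidden = 5, 4, 16
W1, b1 = rng.normal(size=(hidden, state_dim)), np.zeros(hidden)
W2, b2 = rng.normal(size=(2 * N, hidden)), np.zeros(2 * N)

def softplus(x):
    return np.log1p(np.exp(x))  # strictly positive increments

def nnsf_bid(state):
    h = np.tanh(W1 @ state + b1)
    raw = W2 @ h + b2
    # cumulative sums of positive increments enforce monotonicity
    prices = np.cumsum(softplus(raw[:N]))
    powers = np.cumsum(softplus(raw[N:]))
    return list(zip(prices, powers))

bid = nnsf_bid(rng.normal(size=state_dim))
```

Because every increment is strictly positive, the resulting curve is monotone by construction, which is what makes such a head compatible with a standard MDP action space.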
arXiv Detail & Related papers (2024-10-15T01:39:28Z) - CompeteSMoE -- Effective Training of Sparse Mixture of Experts via Competition [52.2034494666179]
Sparse mixture of experts (SMoE) offers an appealing solution to scale up the model complexity beyond the means of increasing the network's depth or width.
We propose a competition mechanism to address this fundamental challenge of representation collapse.
By routing inputs only to experts with the highest neural response, we show that, under mild assumptions, competition enjoys the same convergence rate as the optimal estimator.
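Competition-based routing, as summarized above, can be sketched simply: each input is passed through every expert, and only the experts with the strongest response are kept. The response measure (output norm) and softmax combination below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch of competition routing: route each input to the top-k
# experts by the magnitude of their own activation, instead of a separately
# learned gate. Expert weights are random placeholders.

rng = np.random.default_rng(1)
n_experts, d_model, top_k = 4, 8, 2
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def compete_route(x, k=top_k):
    outputs = [np.tanh(E @ x) for E in experts]
    responses = np.array([np.linalg.norm(o) for o in outputs])  # "neural response"
    winners = np.argsort(responses)[-k:]                        # strongest experts win
    weights = np.exp(responses[winners])
    weights /= weights.sum()                                    # softmax over winners
    combined = sum(w * outputs[i] for w, i in zip(weights, winners))
    return combined, winners

y, winners = compete_route(rng.normal(size=d_model))
```

Since routing is decided by the experts' own activations, there is no separate gating network whose representations can collapse.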
arXiv Detail & Related papers (2024-02-04T15:17:09Z) - Interpretable Deep Reinforcement Learning for Optimizing Heterogeneous Energy Storage Systems [11.03157076666012]
Energy storage systems (ESS) are pivotal components in the energy market, serving as both energy suppliers and consumers.
To enhance ESS flexibility within the energy market, a heterogeneous photovoltaic-ESS (PV-ESS) is proposed.
We develop a comprehensive cost function that takes into account degradation, capital, and operation/maintenance costs to reflect real-world scenarios.
arXiv Detail & Related papers (2023-10-20T02:26:17Z) - Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Prospect Theory-inspired Automated P2P Energy Trading with Q-learning-based Dynamic Pricing [2.2463154358632473]
In this paper, we design an automated P2P energy market that takes user perception into account.
We introduce a risk-sensitive Q-learning mechanism named Q-b Pricing and Risk-sensitivity (PQR), which learns the optimal price for sellers considering their perceived utility.
Results based on real traces of energy consumption and production, as well as realistic prospect theory functions, show that our approach achieves a 26% higher perceived value for buyers.
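A risk-sensitive Q-learning pricing loop of the kind the summary describes can be sketched as tabular Q-learning over discrete prices, with profits passed through a prospect-theory-style value function so losses weigh more than gains. The utility parameters, demand model, and cost below are illustrative assumptions, not PQR's actual specification.

```python
import random

# Hedged sketch: tabular Q-learning over discrete seller prices, where the
# reward is the prosumer's *perceived* utility of profit (prospect-theory
# shaped). All numeric parameters are stand-ins for illustration.

PRICES = [0.10, 0.15, 0.20, 0.25]
ALPHA, GAMMA, EPS, LOSS_AVERSION = 0.1, 0.9, 0.1, 2.25

def perceived_value(profit, reference=0.0):
    gain = profit - reference
    # concave for gains, steeper (loss-averse) for losses
    return gain ** 0.88 if gain >= 0 else -LOSS_AVERSION * (-gain) ** 0.88

def q_learning_pricing(episodes=2000, seed=0):
    random.seed(seed)
    Q = {p: 0.0 for p in PRICES}
    for _ in range(episodes):
        # epsilon-greedy price selection
        price = random.choice(PRICES) if random.random() < EPS else max(Q, key=Q.get)
        demand = max(0.0, 10.0 - 30.0 * price + random.gauss(0, 0.5))  # toy demand
        reward = perceived_value(price * demand - 0.5)  # 0.5 = stand-in cost
        Q[price] += ALPHA * (reward + GAMMA * max(Q.values()) - Q[price])
    return Q

Q = q_learning_pricing()
```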
arXiv Detail & Related papers (2022-08-26T16:45:40Z) - Data-Driven Online Interactive Bidding Strategy for Demand Response [0.30586855806896046]
Demand response (DR) provides peak-shaving services and enhances the efficiency of renewable energy utilization, with a short response period and low cost.
Various categories of DR are established, e.g. automated DR, incentive DR, emergency DR, and demand bidding.
This paper determines the bidding and purchasing strategies simultaneously, employing smart meter data and functions.
The results show that, across diverse situations, the proposed model earns the optimal profit via offline/online learning of the bidding rules and robustly makes proper bids.
arXiv Detail & Related papers (2022-02-09T02:44:20Z) - Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - A Learning-based Optimal Market Bidding Strategy for Price-Maker Energy Storage [3.0839245814393728]
We implement an online Supervised Actor-Critic (SAC) algorithm supervised with a model-based controller -- Model Predictive Control (MPC).
The energy storage agent is trained with this algorithm to optimally bid while learning and adjusting to its impact on the market clearing prices.
Our contribution, thus, is an online and safe SAC algorithm that outperforms the current model-based state-of-the-art.
arXiv Detail & Related papers (2021-06-04T10:22:58Z) - Demand Responsive Dynamic Pricing Framework for Prosumer Dominated Microgrids using Multiagent Reinforcement Learning [59.28219519916883]
This paper proposes a new multiagent Reinforcement Learning based decision-making environment for implementing a Real-Time Pricing (RTP) DR technique in a prosumer dominated microgrid.
The proposed technique addresses several shortcomings common to traditional DR methods and provides significant economic benefits to the grid operator and prosumers.
arXiv Detail & Related papers (2020-09-23T01:44:57Z) - A Deep Reinforcement Learning Framework for Continuous Intraday Market Bidding [69.37299910149981]
A key component for the successful renewable energy sources integration is the usage of energy storage.
We propose a novel modelling framework for the strategic participation of energy storage in the European continuous intraday market.
A distributed version of the fitted Q algorithm is chosen to solve this problem due to its sample efficiency.
Results indicate that the agent converges to a policy that achieves on average higher total revenues than the benchmark strategy.
arXiv Detail & Related papers (2020-04-13T13:50:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.