Demand Responsive Dynamic Pricing Framework for Prosumer Dominated
Microgrids using Multiagent Reinforcement Learning
- URL: http://arxiv.org/abs/2009.10890v1
- Date: Wed, 23 Sep 2020 01:44:57 GMT
- Title: Demand Responsive Dynamic Pricing Framework for Prosumer Dominated
Microgrids using Multiagent Reinforcement Learning
- Authors: Amin Shojaeighadikolaei, Arman Ghasemi, Kailani R. Jones, Alexandru G.
Bardas, Morteza Hashemi, Reza Ahmadi
- Abstract summary: This paper proposes a new multiagent Reinforcement Learning based decision-making environment for implementing a Real-Time Pricing (RTP) DR technique in a prosumer dominated microgrid.
The proposed technique addresses several shortcomings common to traditional DR methods and provides significant economic benefits to the grid operator and prosumers.
- Score: 59.28219519916883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Demand Response (DR) has a widely recognized potential for improving grid
stability and reliability while reducing customers' energy bills. However,
conventional DR techniques come with several shortcomings, such as an inability to
handle operational uncertainties and the customer disutility they incur, which impede
their widespread adoption in real-world applications. This paper proposes a
new multiagent Reinforcement Learning (RL) based decision-making environment
for implementing a Real-Time Pricing (RTP) DR technique in a prosumer dominated
microgrid. The proposed technique addresses several shortcomings common to
traditional DR methods and provides significant economic benefits to the grid
operator and prosumers. To demonstrate its efficacy, the proposed DR method is
compared to a baseline traditional operation scenario in a small-scale
microgrid system. Finally, investigations on the use of prosumers' energy
storage capacity in this microgrid highlight the advantages of the proposed
method in establishing a balanced market setup.
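To make the pricing loop described in the abstract concrete, the following is a minimal sketch of the general shape of such an environment: a grid operator posts a real-time price each step and prosumer agents decide whether to sell, store, or buy energy. The agent heuristics, thresholds, horizon, and reward signal below are illustrative assumptions, not the authors' implementation.

```python
"""Minimal sketch of a real-time-pricing (RTP) demand-response loop in a
prosumer-dominated microgrid.  All agent policies, thresholds, and parameters
are illustrative assumptions, not the authors' implementation."""

import numpy as np

rng = np.random.default_rng(0)

N_PROSUMERS = 3      # hypothetical number of prosumer agents
HORIZON = 24         # one simulated day, hourly steps
BATTERY_CAP = 5.0    # assumed storage capacity per prosumer (kWh)


class Prosumer:
    """A prosumer with PV generation, inflexible demand, and a small battery."""

    def __init__(self):
        self.soc = 0.5 * BATTERY_CAP  # battery state of charge (kWh)

    def act(self, price, pv, demand):
        """Toy heuristic: sell PV surplus when the price is high, store it
        otherwise.  A learning agent would replace this with a trained policy."""
        surplus = pv - demand
        if surplus > 0:
            if price > 0.15:                        # assumed sell threshold ($/kWh)
                return surplus, 0.0                 # (energy sold, energy bought)
            stored = min(surplus, BATTERY_CAP - self.soc)
            self.soc += stored
            return surplus - stored, 0.0
        deficit = -surplus
        discharged = min(deficit, self.soc)
        self.soc -= discharged
        return 0.0, deficit - discharged            # buy whatever the battery cannot cover


def operator_price(net_load):
    """Toy RTP rule: the operator raises the price as aggregate net load grows."""
    return 0.10 + 0.02 * max(net_load, 0.0)


prosumers = [Prosumer() for _ in range(N_PROSUMERS)]
net_load = 0.0
for t in range(HORIZON):
    price = operator_price(net_load)
    pv = rng.uniform(0.0, 2.0, N_PROSUMERS)        # PV output this hour (random stand-in)
    demand = rng.uniform(0.5, 1.5, N_PROSUMERS)    # demand this hour (random stand-in)
    sold = bought = 0.0
    for i, prosumer in enumerate(prosumers):
        s, b = prosumer.act(price, pv[i], demand[i])
        sold += s
        bought += b
    net_load = bought - sold                       # observed by the operator next step
    cashflow = price * (sold - bought)             # aggregate prosumer cashflow (a reward candidate)
    print(f"t={t:02d} price={price:.3f} net_load={net_load:+.2f} cashflow={cashflow:+.2f}")
```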
Related papers
- Interpretable Deep Reinforcement Learning for Optimizing Heterogeneous
Energy Storage Systems [11.03157076666012]
Energy storage systems (ESS) are a pivotal component of the energy market, serving as both energy suppliers and consumers.
To enhance ESS flexibility within the energy market, a heterogeneous photovoltaic-ESS (PV-ESS) is proposed.
We develop a comprehensive cost function that takes into account degradation, capital, and operation/maintenance costs to reflect real-world scenarios (a minimal sketch of such a composite cost is given after this list).
arXiv Detail & Related papers (2023-10-20T02:26:17Z) - Multiagent Reinforcement Learning with an Attention Mechanism for
Improving Energy Efficiency in LoRa Networks [52.96907334080273]
As the network scale increases, the energy efficiency of LoRa networks decreases sharply due to severe packet collisions.
We propose a transmission parameter allocation algorithm based on multiagent reinforcement learning (MALoRa).
Simulation results demonstrate that MALoRa significantly improves system energy efficiency (EE) compared with baseline algorithms.
arXiv Detail & Related papers (2023-09-16T11:37:23Z) - Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - ROI Constrained Bidding via Curriculum-Guided Bayesian Reinforcement
Learning [34.82004227655201]
We focus on ROI-Constrained Bidding in non-stationary markets.
Based on a Partially Observable Constrained Markov Decision Process, we propose the first hard barrier solution to accommodate non-monotonic constraints.
Our method employs a parameter-free indicator-augmented reward function within a Curriculum-Guided Bayesian Reinforcement Learning framework.
arXiv Detail & Related papers (2022-06-10T17:30:12Z) - A Multi-Agent Deep Reinforcement Learning Approach for a Distributed
Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning based energy market for a prosumer dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z) - Delayed Q-update: A novel credit assignment technique for deriving an
optimal operation policy for the Grid-Connected Microgrid [3.3754780158324564]
We propose an approach for deriving a desirable microgrid operation policy using a novel credit assignment technique, delayed Q-update.
The technique offers novel features, such as the ability to handle the delayed-effect property of the microgrid.
It supports the search for a near-optimal operation policy in a sophisticated, controlled microgrid environment.
arXiv Detail & Related papers (2020-06-30T10:30:15Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks (a minimal sketch of a shared actor-critic network is given after this list).
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable
Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
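For the heterogeneous PV-ESS entry above, a composite cost of the kind described (degradation plus capital plus operation/maintenance) might look roughly like the sketch below. The coefficient values, amortization scheme, and function name are placeholders for illustration, not the paper's formulation.

```python
"""Illustrative composite ESS cost combining the three terms named in the
PV-ESS abstract (degradation, capital, O&M).  All coefficients are assumed."""

def ess_step_cost(throughput_kwh: float,
                  capacity_kwh: float,
                  deg_cost_per_kwh: float = 0.05,       # assumed cycling degradation cost ($/kWh)
                  capital_cost_per_kwh: float = 300.0,  # assumed capital cost ($/kWh installed)
                  lifetime_steps: int = 10 * 365 * 24,  # capital amortised over ~10 years of hourly steps
                  om_cost_per_step: float = 0.01) -> float:
    """Cost attributed to one dispatch step of the storage system."""
    degradation = deg_cost_per_kwh * throughput_kwh
    amortised_capital = capital_cost_per_kwh * capacity_kwh / lifetime_steps
    operation_maintenance = om_cost_per_step
    return degradation + amortised_capital + operation_maintenance

# Example: cycling 2 kWh through a 10 kWh battery during one hourly step.
print(ess_step_cost(throughput_kwh=2.0, capacity_kwh=10.0))
```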
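The risk-aware energy scheduling entry above mentions an A3C-style advantage actor-critic with shared neural networks. Below is a minimal PyTorch sketch of a shared-trunk actor-critic architecture; the layer sizes, observation dimension, and discrete action space are assumptions, not that paper's model.

```python
"""Minimal shared-trunk actor-critic network, in the spirit of the
"A3C with shared neural networks" idea mentioned above."""

import torch
import torch.nn as nn


class SharedActorCritic(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        # Trunk shared by the policy (actor) and value (critic) heads.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)  # action logits
        self.value_head = nn.Linear(hidden, 1)           # state value V(s)

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return self.policy_head(h), self.value_head(h).squeeze(-1)


# Example: one observation of (hypothetical) dimension 8, four discrete actions.
net = SharedActorCritic(obs_dim=8, n_actions=4)
logits, value = net(torch.randn(1, 8))
action = torch.distributions.Categorical(logits=logits).sample()
print(action.item(), value.item())
```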