Renewable energy integration and microgrid energy trading using
multi-agent deep reinforcement learning
- URL: http://arxiv.org/abs/2111.10898v1
- Date: Sun, 21 Nov 2021 21:11:00 GMT
- Title: Renewable energy integration and microgrid energy trading using
multi-agent deep reinforcement learning
- Authors: Daniel J. B. Harrold, Jun Cao, Zhong Fan
- Abstract summary: Multi-agent reinforcement learning is used to control a hybrid energy storage system.
Agents learn to control three different types of energy storage system suited for short, medium, and long-term storage.
Being able to trade with the other microgrids, rather than just selling back to the utility grid, was found to greatly increase the grid's savings.
- Score: 2.0427610089943387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, multi-agent reinforcement learning is used to control a hybrid
energy storage system working collaboratively to reduce the energy costs of a
microgrid through maximising the value of renewable energy and trading. The
agents must learn to control three different types of energy storage system
suited for short, medium, and long-term storage under fluctuating demand,
dynamic wholesale energy prices, and unpredictable renewable energy generation.
Two case studies are considered: the first looking at how the energy storage
systems can better integrate renewable energy generation under dynamic pricing,
and the second with how those same agents can be used alongside an aggregator
agent to sell energy to self-interested external microgrids looking to reduce
their own energy bills. This work found that the centralised learning with
decentralised execution of the multi-agent deep deterministic policy gradient
and its state-of-the-art variants allowed the multi-agent methods to perform
significantly better than control by a single global agent. The multi-agent
approach with separate reward functions was also found to perform much better
than a single control agent. Being able to trade with the other microgrids,
rather than only selling back to the utility grid, likewise greatly increased
the grid's savings.
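The centralised learning with decentralised execution pattern behind MADDPG can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the observation and action sizes, the linear networks, and the random inputs are all hypothetical stand-ins, and a real MADDPG setup would add replay buffers, target networks, and gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 3   # e.g. short-, medium-, and long-term storage agents
OBS_DIM = 4    # hypothetical local observation (demand, price, state of charge, generation)
ACT_DIM = 1    # charge/discharge rate

class Actor:
    """Decentralised actor: maps an agent's LOCAL observation to an action."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))

    def act(self, obs):
        return np.tanh(self.W @ obs)  # bounded action in [-1, 1]

class CentralCritic:
    """Centralised critic: scores the JOINT observations and actions of all agents."""
    def __init__(self):
        dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.w = rng.normal(scale=0.1, size=dim)

    def q_value(self, all_obs, all_acts):
        joint = np.concatenate(list(all_obs) + list(all_acts))
        return float(self.w @ joint)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# Execution: each actor sees only its own observation...
observations = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
actions = [a.act(o) for a, o in zip(actors, observations)]

# ...while during training the critic conditions on everything (centralised learning).
q = critic.q_value(observations, actions)
print(len(actions), q)
```

Conditioning the critic on every agent's observations and actions is what stabilises learning in the otherwise non-stationary multi-agent setting; at execution time each actor needs only its own local observation.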
Related papers
- Peer-to-Peer Energy Trading of Solar and Energy Storage: A Networked Multiagent Reinforcement Learning Approach [5.671124014371425]
We propose multi-agent reinforcement learning (MARL) frameworks to help automate consumers' bidding and management of their solar PV and energy storage resources.
We show how the MARL frameworks can integrate physical network constraints to realize voltage control, hence ensuring physical feasibility of the P2P energy trading.
arXiv Detail & Related papers (2024-01-25T05:05:55Z)
- MAHTM: A Multi-Agent Framework for Hierarchical Transactive Microgrids [0.0]
This paper proposes a multi-agent reinforcement learning framework for managing energy transactions in microgrids.
It seeks to optimize the usage of available resources by minimizing the carbon footprint while benefiting all stakeholders.
arXiv Detail & Related papers (2023-03-15T08:42:48Z)
- Combating Uncertainties in Wind and Distributed PV Energy Sources Using Integrated Reinforcement Learning and Time-Series Forecasting [2.774390661064003]
The unpredictability of renewable energy generation poses challenges for electricity providers and utility companies.
We propose a novel framework with two objectives: (i) combating uncertainty of renewable energy in smart grid by leveraging time-series forecasting with Long-Short Term Memory (LSTM) solutions, and (ii) establishing distributed and dynamic decision-making framework with multi-agent reinforcement learning using Deep Deterministic Policy Gradient (DDPG) algorithm.
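The two-part idea in this framework, forecast the uncertain renewable output and feed the forecast into the RL agent's state, can be sketched as follows. This is a hedged illustration only: the moving-average forecaster is a deliberately simple stand-in for the paper's LSTM, and the generation trace is synthetic.

```python
import numpy as np

# Synthetic renewable generation trace (the actual framework uses real wind/PV data).
rng = np.random.default_rng(1)
generation = np.abs(np.sin(np.linspace(0, 8, 200)) + 0.2 * rng.normal(size=200))

def forecast_next(history, window=24):
    """Stand-in for an LSTM forecaster: a moving average over the last
    `window` steps. The real framework trains a recurrent model here."""
    return float(np.mean(history[-window:]))

# The forecast is appended to the DDPG agent's state vector, so the policy
# absorbs less raw uncertainty about future generation.
pred = forecast_next(generation[:100])
err = abs(pred - generation[100])
print(pred, err)
```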
arXiv Detail & Related papers (2023-02-27T19:12:50Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- Energy Pricing in P2P Energy Systems Using Reinforcement Learning [36.244907785240876]
The increase in renewable energy on the consumer side gives rise to new dynamics in the energy grids.
In such a scenario, the nature of distributed renewable energy generators and energy consumption increases the complexity of defining fair prices for buying and selling energy.
We introduce a reinforcement learning framework to help solve this issue by training an agent to set the prices that maximize the profit of all components in the microgrid.
arXiv Detail & Related papers (2022-10-24T19:21:10Z)
- A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning based energy market for a prosumer dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z)
- Demand Responsive Dynamic Pricing Framework for Prosumer Dominated Microgrids using Multiagent Reinforcement Learning [59.28219519916883]
This paper proposes a new multiagent Reinforcement Learning based decision-making environment for implementing a Real-Time Pricing (RTP) DR technique in a prosumer dominated microgrid.
The proposed technique addresses several shortcomings common to traditional DR methods and provides significant economic benefits to the grid operator and prosumers.
arXiv Detail & Related papers (2020-09-23T01:44:57Z)
- A Hierarchical Approach to Multi-Energy Demand Response: From Electricity to Multi-Energy Applications [1.5084441395740482]
This paper looks into an opportunity to control energy consumption of an aggregation of many residential, commercial and industrial consumers.
This ensemble control becomes a modern demand response contributor to the set of modeling tools for multi-energy infrastructure systems.
arXiv Detail & Related papers (2020-05-05T17:17:51Z)
- Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game.
arXiv Detail & Related papers (2020-05-05T07:32:40Z)
- Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.