Peer-to-Peer Energy Trading of Solar and Energy Storage: A Networked Multiagent Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2401.13947v3
- Date: Wed, 09 Oct 2024 04:57:47 GMT
- Title: Peer-to-Peer Energy Trading of Solar and Energy Storage: A Networked Multiagent Reinforcement Learning Approach
- Authors: Chen Feng, Andrew L. Liu
- Abstract summary: We propose multi-agent reinforcement learning (MARL) frameworks to help automate consumers' bidding and management of their solar PV and energy storage resources.
We show how the MARL frameworks can integrate physical network constraints to realize voltage control, hence ensuring physical feasibility of the P2P energy trading.
- Score: 5.671124014371425
- Abstract: Utilizing distributed renewable and energy storage resources in local distribution networks via peer-to-peer (P2P) energy trading has long been touted as a solution to improve energy systems' resilience and sustainability. Consumers and prosumers (those who have energy generation resources), however, do not have the expertise to engage in repeated P2P trading, and the zero-marginal costs of renewables present challenges in determining fair market prices. To address these issues, we propose multi-agent reinforcement learning (MARL) frameworks to help automate consumers' bidding and management of their solar PV and energy storage resources, under a specific P2P clearing mechanism that utilizes the so-called supply-demand ratio. In addition, we show how the MARL frameworks can integrate physical network constraints to realize voltage control, hence ensuring physical feasibility of the P2P energy trading and paving the way for real-world implementation.
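The supply-demand ratio (SDR) clearing mechanism mentioned in the abstract can be illustrated with a minimal sketch. The function name, the default prices, and the linear interpolation rule below are assumptions for illustration only; the paper's exact SDR pricing formula may differ:

```python
def clear_p2p_market(total_supply, total_demand,
                     feed_in_tariff=0.04, retail_price=0.15):
    """Illustrative SDR-based P2P market clearing (hedged sketch).

    The internal P2P sell price is interpolated between the feed-in
    tariff (local oversupply) and the utility retail price (scarcity),
    based on the supply-demand ratio SDR = supply / demand. Buyers pay
    a blend of the P2P price and the retail price for residual grid
    imports. Prices are in currency units per kWh.
    """
    if total_demand <= 0:
        # No local demand: sellers can only receive the feed-in tariff.
        return feed_in_tariff, feed_in_tariff
    sdr = total_supply / total_demand
    if sdr >= 1.0:
        # Excess local supply: internal price falls to the feed-in tariff.
        sell_price = feed_in_tariff
    else:
        # Scarce supply: sell price rises toward retail as SDR -> 0.
        sell_price = retail_price - (retail_price - feed_in_tariff) * sdr
    # A fraction min(SDR, 1) of demand is met by P2P energy at sell_price;
    # the remainder is imported from the grid at the retail price.
    buy_price = sell_price * min(sdr, 1.0) + retail_price * max(1.0 - sdr, 0.0)
    return sell_price, buy_price
```

Under this rule the sell price is continuous in the SDR and bounded between the feed-in tariff and the retail price, which gives prosumers a stationary price signal that learning agents can bid against.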
Related papers
- Reinforcement Learning Enabled Peer-to-Peer Energy Trading for Dairy Farms [1.2289361708127877]
This study aims to decrease dairy farms' dependence on traditional electricity grids by enabling the sale of surplus renewable energy in Peer-to-Peer markets.
The Multi-Agent Peer-to-Peer Dairy Farm Energy Simulator (MAPDES) has been developed, providing a platform to experiment with Reinforcement Learning techniques.
The simulations demonstrate significant cost savings, including a 43% reduction in electricity expenses, a 42% decrease in peak demand, and a 1.91% increase in energy sales.
arXiv Detail & Related papers (2024-05-21T12:19:17Z) - RAI4IoE: Responsible AI for Enabling the Internet of Energy [40.87183313830612]
This paper proposes an Equitable and Responsible AI framework with enabling techniques and algorithms for the Internet of Energy (IoE).
The vision of our project is to ensure equitable participation of the community members and responsible use of their data in IoE so that it could reap the benefits of advances in AI to provide safe, reliable and sustainable energy services.
arXiv Detail & Related papers (2023-09-20T23:45:54Z) - MAHTM: A Multi-Agent Framework for Hierarchical Transactive Microgrids [0.0]
This paper proposes a multi-agent reinforcement learning framework for managing energy transactions in microgrids.
It seeks to optimize the usage of available resources by minimizing the carbon footprint while benefiting all stakeholders.
arXiv Detail & Related papers (2023-03-15T08:42:48Z) - Combating Uncertainties in Wind and Distributed PV Energy Sources Using Integrated Reinforcement Learning and Time-Series Forecasting [2.774390661064003]
The unpredictability of renewable energy generation poses challenges for electricity providers and utility companies.
We propose a novel framework with two objectives: (i) combating uncertainty of renewable energy in smart grid by leveraging time-series forecasting with Long-Short Term Memory (LSTM) solutions, and (ii) establishing distributed and dynamic decision-making framework with multi-agent reinforcement learning using Deep Deterministic Policy Gradient (DDPG) algorithm.
arXiv Detail & Related papers (2023-02-27T19:12:50Z) - Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Prospect Theory-inspired Automated P2P Energy Trading with Q-learning-based Dynamic Pricing [2.2463154358632473]
In this paper, we design an automated P2P energy market that takes user perception into account.
We introduce a risk-sensitive Q-learning mechanism named Q-b Pricing and Risk-sensitivity (PQR), which learns the optimal price for sellers considering their perceived utility.
Results based on real traces of energy consumption and production, as well as realistic prospect theory functions, show that our approach achieves a 26% higher perceived value for buyers.
arXiv Detail & Related papers (2022-08-26T16:45:40Z) - Renewable energy integration and microgrid energy trading using multi-agent deep reinforcement learning [2.0427610089943387]
Multi-agent reinforcement learning is used to control a hybrid energy storage system.
Agents learn to control three different types of energy storage systems suited for short-, medium-, and long-term storage.
Being able to trade with the other microgrids, rather than just selling back to the utility grid, was found to greatly increase the grid's savings.
arXiv Detail & Related papers (2021-11-21T21:11:00Z) - A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning-based energy market for a prosumer-dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z) - Towards a Peer-to-Peer Energy Market: an Overview [68.8204255655161]
This work focuses on the electric power market, comparing the status quo with the recent trend towards the increase in distributed self-generation capabilities by prosumers.
We introduce a potential multi-layered architecture for a Peer-to-Peer (P2P) energy market, discussing the fundamental aspects of local production and local consumption as part of a microgrid.
To give a full picture to the reader, we also scrutinise relevant elements of energy trading, such as Smart Contract and grid stability.
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-20T04:58:07Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model reduces non-renewable energy usage by up to 11% and energy costs by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.