Reinforcement Learning Enabled Peer-to-Peer Energy Trading for Dairy Farms
- URL: http://arxiv.org/abs/2405.12716v1
- Date: Tue, 21 May 2024 12:19:17 GMT
- Title: Reinforcement Learning Enabled Peer-to-Peer Energy Trading for Dairy Farms
- Authors: Mian Ibad Ali Shah, Enda Barrett, Karl Mason
- Abstract summary: This study aims to decrease dairy farms' dependence on traditional electricity grids by enabling the sale of surplus renewable energy in Peer-to-Peer markets.
The Multi-Agent Peer-to-Peer Dairy Farm Energy Simulator (MAPDES) has been developed, providing a platform to experiment with Reinforcement Learning techniques.
The simulations demonstrate significant cost savings, including a 43% reduction in electricity expenses, a 42% decrease in peak demand, and a 1.91% increase in energy sales.
- Score: 1.2289361708127877
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Farm businesses are increasingly adopting renewables to enhance energy efficiency and reduce reliance on fossil fuels and the grid. This shift aims to decrease dairy farms' dependence on traditional electricity grids by enabling the sale of surplus renewable energy in Peer-to-Peer markets. However, the dynamic nature of farm communities poses challenges, requiring specialized algorithms for P2P energy trading. To address this, the Multi-Agent Peer-to-Peer Dairy Farm Energy Simulator (MAPDES) has been developed, providing a platform to experiment with Reinforcement Learning techniques. The simulations demonstrate significant cost savings, including a 43% reduction in electricity expenses, a 42% decrease in peak demand, and a 1.91% increase in energy sales compared to baseline scenarios lacking peer-to-peer energy trading or renewable energy sources.
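The paper does not include source code, but the market mechanism it describes can be pictured with a short sketch. The Python snippet below is a minimal, illustrative model of one trading interval in a multi-agent P2P setting: farms with surplus renewable generation sell to farms in deficit at an assumed mid-market rate, and any remainder is settled with the grid. The class names, prices, and clearing rule are assumptions for illustration, not MAPDES's published design; an RL agent in such a simulator would learn when to sell, store, or self-consume on top of a clearing step like this.
```python
from dataclasses import dataclass

# Illustrative prices; the mid-market-rate P2P price is an assumed simplification.
GRID_BUY = 0.30   # EUR/kWh paid when importing from the grid (assumed)
GRID_SELL = 0.10  # EUR/kWh feed-in tariff when exporting to the grid (assumed)
P2P_PRICE = (GRID_BUY + GRID_SELL) / 2


@dataclass
class Farm:
    name: str
    generation_kwh: float  # renewable output this interval
    demand_kwh: float      # electricity demand this interval

    @property
    def net_kwh(self) -> float:
        return self.generation_kwh - self.demand_kwh


def clear_interval(farms):
    """Match surplus sellers with deficit buyers at the P2P price; settle the
    remainder with the grid. Returns per-farm cost (negative = revenue)."""
    surplus = sum(max(f.net_kwh, 0.0) for f in farms)
    deficit = sum(max(-f.net_kwh, 0.0) for f in farms)
    traded = min(surplus, deficit)  # energy actually exchanged peer-to-peer
    costs = {}
    for f in farms:
        if f.net_kwh >= 0:  # seller: share of P2P sales, rest exported to grid
            share = traded * f.net_kwh / surplus if surplus else 0.0
            costs[f.name] = -(share * P2P_PRICE + (f.net_kwh - share) * GRID_SELL)
        else:               # buyer: share of P2P purchases, rest imported from grid
            need = -f.net_kwh
            share = traded * need / deficit if deficit else 0.0
            costs[f.name] = share * P2P_PRICE + (need - share) * GRID_BUY
    return costs


if __name__ == "__main__":
    farms = [Farm("farm_a", 12.0, 7.0), Farm("farm_b", 3.0, 9.0), Farm("farm_c", 5.0, 5.0)]
    print(clear_interval(farms))  # per-farm cost in EUR; negative values are revenue
```
A mid-market-rate price is a common simplification in P2P trading studies; richer mechanisms such as auctions or bilateral contracts would slot into the same clearing step.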
Related papers
- A Deep Reinforcement Learning Approach to Battery Management in Dairy Farming via Proximal Policy Optimization [1.2289361708127877]
This research investigates the application of Proximal Policy Optimization to enhance dairy farming battery management.
We evaluate the algorithm's effectiveness based on its ability to reduce reliance on the electricity grid.
arXiv Detail & Related papers (2024-07-01T12:46:09Z) - A Reinforcement Learning Approach to Dairy Farm Battery Management using Q Learning [3.1498833540989413]
This study proposes a Q-learning-based algorithm for scheduling battery charging and discharging in a dairy farm setting.
The proposed algorithm reduces the cost of electricity imported from the grid by 13.41% and peak demand by 2%, with cost savings rising to 24.49% when wind generation is utilized (a minimal code sketch of such a scheduler follows the related-papers list).
arXiv Detail & Related papers (2024-03-14T15:42:26Z) - Peer-to-Peer Energy Trading of Solar and Energy Storage: A Networked Multiagent Reinforcement Learning Approach [5.671124014371425]
We propose multi-agent reinforcement learning (MARL) frameworks to help automate consumers' bidding and management of their solar PV and energy storage resources.
We show how the MARL frameworks can integrate physical network constraints to realize voltage control, hence ensuring physical feasibility of the P2P energy trading.
arXiv Detail & Related papers (2024-01-25T05:05:55Z) - A Multi-Agent Systems Approach for Peer-to-Peer Energy Trading in Dairy Farming [3.441021278275805]
We propose the Multi-Agent Peer-to-Peer Dairy Farm Energy Simulator (MAPDES) to enable dairy farms to participate in peer-to-peer markets.
Our strategy reduces electricity costs and peak demand by approximately 30% and 24% respectively, while increasing energy sales by 37% compared to the baseline scenario.
arXiv Detail & Related papers (2023-08-21T13:22:20Z) - Deep Reinforcement Learning for Wind and Energy Storage Coordination in Wholesale Energy and Ancillary Service Markets [5.1888966391612605]
Wind curtailment can be reduced using battery energy storage systems (BESS) as onsite backup sources.
We propose a novel deep reinforcement learning-based approach that decouples the system's market participation into two related Markov decision processes.
Our results show that joint-market bidding can significantly improve the financial performance of wind-battery systems.
arXiv Detail & Related papers (2022-12-27T05:51:54Z) - Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Exploring market power using deep reinforcement learning for intelligent bidding strategies [69.3939291118954]
We find that capacity has an impact on the average electricity price in a single year.
The values of ~25% and ~11% may vary between market structures and countries.
We observe that a market cap of approximately double the average market price significantly reduces this effect and maintains a competitive market.
arXiv Detail & Related papers (2020-11-08T21:07:42Z) - A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning-based energy market for a prosumer-dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z) - Demand Responsive Dynamic Pricing Framework for Prosumer Dominated Microgrids using Multiagent Reinforcement Learning [59.28219519916883]
This paper proposes a new multiagent Reinforcement Learning based decision-making environment for implementing a Real-Time Pricing (RTP) DR technique in a prosumer dominated microgrid.
The proposed technique addresses several shortcomings common to traditional DR methods and provides significant economic benefits to the grid operator and prosumers.
arXiv Detail & Related papers (2020-09-23T01:44:57Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
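The Q-learning battery paper listed above (2024-03-14) frames charging and discharging as a sequential decision problem. Below is a minimal, hedged Python sketch of that idea: a tabular Q-learning agent chooses charge, discharge, or idle each hour to minimize the cost of grid imports under a toy load and price profile. The state discretization, battery parameters, and profiles are assumptions made for illustration and are not taken from the paper.
```python
import random
from collections import defaultdict

# Assumed problem setup: a small farm battery, hourly decisions, tabular Q-learning.
ACTIONS = ["charge", "discharge", "idle"]
CAPACITY_KWH = 10.0          # assumed battery capacity
STEP_KWH = 2.0               # assumed (dis)charge energy per hour
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})


def step(soc_kwh, action, load_kwh, price):
    """Apply one action; return new state of charge and reward (negative import cost)."""
    if action == "charge" and soc_kwh < CAPACITY_KWH:
        soc_kwh = min(CAPACITY_KWH, soc_kwh + STEP_KWH)
        grid_import = load_kwh + STEP_KWH
    elif action == "discharge" and soc_kwh > 0:
        soc_kwh = max(0.0, soc_kwh - STEP_KWH)
        grid_import = max(0.0, load_kwh - STEP_KWH)
    else:
        grid_import = load_kwh
    return soc_kwh, -grid_import * price


def choose(state):
    """Epsilon-greedy action selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)


# Toy training loop over an illustrative 24-hour load and price profile.
loads = [1.0] * 8 + [2.5] * 10 + [4.0] * 6        # kWh per hour (made up)
prices = [0.10] * 8 + [0.20] * 10 + [0.35] * 6    # EUR/kWh (made up)
for episode in range(200):
    soc = CAPACITY_KWH / 2
    for hour in range(24):
        state = (hour, int(soc // STEP_KWH))
        action = choose(state)
        soc, reward = step(soc, action, loads[hour], prices[hour])
        next_state = ((hour + 1) % 24, int(soc // STEP_KWH))
        best_next = max(q_table[next_state].values())
        q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])
```
The learned policy tends toward charging during the cheap early hours and discharging during the expensive evening peak, which is the behavior the cited papers exploit to cut import costs and peak demand.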