Decentralized Coordination of Distributed Energy Resources through Local Energy Markets and Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2404.13142v1
- Date: Fri, 19 Apr 2024 19:03:33 GMT
- Title: Decentralized Coordination of Distributed Energy Resources through Local Energy Markets and Deep Reinforcement Learning
- Authors: Daniel May, Matthew Taylor, Petr Musilek
- Abstract summary: Transactive energy, implemented through local energy markets, has recently garnered attention as a promising solution to the grid challenges.
This study addresses the gap by training a set of deep reinforcement learning agents to automate end-user participation in ALEX.
The study unveils a clear correlation between bill reduction and reduced net load variability in this setup.
- Score: 1.8434042562191815
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As the energy landscape evolves toward sustainability, the accelerating integration of distributed energy resources poses challenges to the operability and reliability of the electricity grid. One significant aspect of this issue is the notable increase in net load variability at the grid edge. Transactive energy, implemented through local energy markets, has recently garnered attention as a promising solution to address the grid challenges in the form of decentralized, indirect demand response on a community level. Given the nature of these challenges, model-free control approaches, such as deep reinforcement learning, show promise for the decentralized automation of participation within this context. Existing studies at the intersection of transactive energy and model-free control primarily focus on socioeconomic and self-consumption metrics, overlooking the crucial goal of reducing community-level net load variability. This study addresses this gap by training a set of deep reinforcement learning agents to automate end-user participation in ALEX, an economy-driven local energy market. In this setting, agents do not share information and only prioritize individual bill optimization. The study unveils a clear correlation between bill reduction and reduced net load variability in this setup. The impact on net load variability is assessed over various time horizons using metrics such as ramping rate, daily and monthly load factor, as well as daily average and total peak export and import on an open-source dataset. Agents are then benchmarked against several baselines, with their performance levels showing promising results, approaching those of a near-optimal dynamic programming benchmark.
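The abstract assesses net load variability using metrics such as ramping rate and load factor. As a minimal sketch (not code from the paper), the following shows one common way to compute these two metrics on a hypothetical hourly community net-load series; the function names, the example trace, and the exact metric definitions (mean absolute hour-to-hour change for ramping rate, average-over-peak for load factor) are assumptions for illustration.

```python
# Illustrative sketch: two net-load-variability metrics named in the abstract.
# Definitions here are common conventions, not necessarily the paper's exact ones.

def ramping_rate(net_load):
    """Mean absolute change between consecutive net-load readings (kW)."""
    return sum(abs(b - a) for a, b in zip(net_load, net_load[1:])) / (len(net_load) - 1)

def load_factor(net_load):
    """Average load divided by peak load; values closer to 1 mean a flatter profile."""
    return (sum(net_load) / len(net_load)) / max(net_load)

# Hypothetical 6-hour community net-load trace in kW
trace = [2.0, 3.5, 5.0, 4.0, 6.0, 3.0]
print(ramping_rate(trace))  # mean |delta| across the 5 hourly transitions
print(load_factor(trace))   # average load / peak load
```

A flatter net-load profile pushes the load factor toward 1 and the ramping rate toward 0, which is why bill-optimizing agents that implicitly flatten the profile also reduce these variability measures.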
Related papers
- Deep Reinforcement Learning for Real-Time Green Energy Integration in Data Centers [0.0]
This paper explores the implementation of a Deep Reinforcement Learning (DRL)-optimized energy management system for e-commerce data centers.
The proposed system dynamically manages the integration of renewable energy sources, energy storage, and grid power, adapting to fluctuating energy availability in real time.
arXiv Detail & Related papers (2025-07-24T00:59:56Z) - Multi-Agent Reinforcement Learning for Sample-Efficient Deep Neural Network Mapping [54.65536245955678]
We present a decentralized multi-agent reinforcement learning (MARL) framework designed to overcome the challenge of sample inefficiency.
We introduce an agent clustering algorithm that assigns similar mapping parameters to the same agents based on correlation analysis.
Experimental results show our MARL approach improves sample efficiency by 30-300x over standard single-agent RL.
arXiv Detail & Related papers (2025-07-22T05:51:07Z) - Centralized vs. Decentralized Multi-Agent Reinforcement Learning for Enhanced Control of Electric Vehicle Charging Networks [1.9188272016043582]
We introduce a novel approach for distributed and cooperative charging strategy using a Multi-Agent Reinforcement Learning (MARL) framework.
Our method is built upon the Deep Deterministic Policy Gradient (DDPG) algorithm for a group of EVs in a residential community.
Our results indicate that, despite higher policy variances and training complexity, the CTDE-DDPG framework significantly improves charging efficiency by reducing total variation by approximately 36% and charging cost by around 9.1% on average.
arXiv Detail & Related papers (2024-04-18T21:50:03Z) - Deep Reinforcement Learning for Community Battery Scheduling under Uncertainties of Load, PV Generation, and Energy Prices [5.694872363688119]
This paper presents a deep reinforcement learning (RL) strategy to schedule a community battery system in the presence of uncertainties.
We position the community battery to play a versatile role, in integrating local PV energy, reducing peak load, and exploiting energy price fluctuations for arbitrage.
arXiv Detail & Related papers (2023-12-04T13:45:17Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - Optimal Planning of Hybrid Energy Storage Systems using Curtailed Renewable Energy through Deep Reinforcement Learning [0.0]
We propose a sophisticated deep reinforcement learning (DRL) methodology with a policy-based algorithm to plan energy storage systems (ESS).
A quantitative performance comparison showed that the DRL agent outperforms the scenario-based optimization (SO) algorithm.
The results also confirmed that the DRL agent learns behavior similar to that of a human expert, suggesting the proposed methodology can be applied reliably.
arXiv Detail & Related papers (2022-12-12T02:24:50Z) - Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning based energy market for a prosumer dominated microgrid.
The proposed market model facilitates a real-time, demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z) - Continuous Multiagent Control using Collective Behavior Entropy for Large-Scale Home Energy Management [36.82414045535202]
We propose a collective MA-DRL algorithm with continuous action space to provide fine-grained control of a large-scale microgrid.
Our approach significantly outperforms state-of-the-art methods in power cost reduction and daily peak-load optimization.
arXiv Detail & Related papers (2020-05-14T16:07:55Z) - Demand-Side Scheduling Based on Multi-Agent Deep Actor-Critic Learning for Smart Grids [56.35173057183362]
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that is able to schedule home appliances online.
The goal is to minimize the overall cost under a real-time pricing scheme.
We propose the formulation of a smart grid environment as a Markov game.
arXiv Detail & Related papers (2020-05-05T07:32:40Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based asynchronous advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable
Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.