Design and Planning of Flexible Mobile Micro-Grids Using Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2212.04136v1
- Date: Thu, 8 Dec 2022 08:30:50 GMT
- Title: Design and Planning of Flexible Mobile Micro-Grids Using Deep
Reinforcement Learning
- Authors: Cesare Caputo (Imperial College London), Michel-Alexandre Cardin
(Imperial College London), Pudong Ge (Imperial College London), Fei Teng
(Imperial College London), Anna Korre (Imperial College London), Ehecatl
Antonio del Rio Chanona (Imperial College London)
- Abstract summary: The design and planning strategy of a mobile multi-energy supply system for a nomadic community is investigated.
Deep Reinforcement Learning is applied to the resulting design and planning problem.
The results on a case study for ger communities in Mongolia suggest that mobile nomadic energy systems can be both technically and economically feasible.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Ongoing risks from climate change have impacted the livelihood of global
nomadic communities, and are likely to lead to increased migratory movements in
coming years. As a result, mobility considerations are becoming increasingly
important in energy systems planning, particularly to achieve energy access in
developing countries. Advanced Plug and Play control strategies have been
recently developed with such a decentralized framework in mind, more easily
allowing for the interconnection of nomadic communities, both to each other and
to the main grid. In light of the above, the design and planning strategy of a
mobile multi-energy supply system for a nomadic community is investigated in
this work. Motivated by the scale and dimensionality of the associated
uncertainties, impacting all major design and decision variables over the
30-year planning horizon, Deep Reinforcement Learning (DRL) is applied to the
design and planning problem. DRL-based solutions are benchmarked
against several rigid baseline design options to compare expected performance
under uncertainty. The results on a case study for ger communities in Mongolia
suggest that mobile nomadic energy systems can be both technically and
economically feasible, particularly when considering flexibility, although the
degree of spatial dispersion among households is an important limiting factor.
Key economic, sustainability and resilience indicators such as Cost, Equivalent
Emissions and Total Unmet Load are measured, suggesting potential improvements
compared to available baselines of up to 25%, 67% and 76%, respectively.
Finally, the decomposition of values of flexibility and plug and play operation
is presented using a variation of real options theory, with important
implications for both nomadic communities and policymakers focused on enabling
their energy access.
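
To make the problem setting more concrete, the sketch below casts the 30-year mobile micro-grid design and planning task as a simple sequential decision process of the kind a DRL agent can be trained on. It is only an illustrative toy under assumed names: the environment class, state and action variables, cost and emission weights, and dynamics are hypothetical and are not the model used in the paper.

```python
# Illustrative (hypothetical) framing of the design/planning task as a
# gym-style sequential decision process; not the paper's actual model.
import numpy as np


class MobileMicrogridEnv:
    """Yearly design decisions for a mobile multi-energy supply system.

    State : [year, installed PV (kW), storage (kWh), household dispersion index]
    Action: 0 = do nothing, 1 = add PV, 2 = add storage, 3 = regroup households
    Reward: negative weighted sum of cost, equivalent emissions and unmet load.
    """

    HORIZON_YEARS = 30

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        self.year = 0
        self.pv_kw = 5.0
        self.storage_kwh = 10.0
        self.dispersion = float(self.rng.uniform(0.2, 0.8))
        return self._obs()

    def _obs(self):
        return np.array([self.year, self.pv_kw, self.storage_kwh, self.dispersion])

    def step(self, action):
        if action == 1:
            self.pv_kw += 2.0
        elif action == 2:
            self.storage_kwh += 5.0
        elif action == 3:
            self.dispersion = max(0.1, self.dispersion - 0.1)

        demand = self.rng.normal(20.0, 4.0)                 # uncertain annual demand (illustrative)
        supply = 1.2 * self.pv_kw * (1.0 - 0.3 * self.dispersion)
        unmet = max(0.0, demand - supply - 0.1 * self.storage_kwh)

        capex = {0: 0.0, 1: 1.5, 2: 1.0, 3: 0.5}[action]    # illustrative capital cost of the action
        emissions = 0.4 * unmet                             # back-up generation proxy
        reward = -(capex + 0.2 * unmet + 0.3 * emissions)

        self.year += 1
        done = self.year >= self.HORIZON_YEARS
        return self._obs(), reward, done, {"unmet_load": unmet}


# A random-policy rollout; a trained DRL agent would replace the random action choice.
env = MobileMicrogridEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    obs, reward, done, info = env.step(env.rng.integers(0, 4))
    total_reward += reward
```

Under this kind of framing, the value of flexibility obtained through the real-options lens can be read as the difference in expected discounted performance between the adaptive (DRL) policy and a rigid baseline design, roughly V_flex ≈ E[NPV_flexible] − E[NPV_rigid].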
Related papers
- CityLearn v2: Energy-flexible, resilient, occupant-centric, and carbon-aware management of grid-interactive communities [8.658740257657564]
CityLearn provides an environment for benchmarking simple and advanced distributed energy resource control algorithms.
This work details the v2 environment design and provides application examples that utilize reinforcement learning to manage battery energy storage system charging/discharging cycles, vehicle-to-grid control, and thermal comfort during heat pump power modulation.
arXiv Detail & Related papers (2024-05-02T16:31:09Z) - Decentralized Coordination of Distributed Energy Resources through Local Energy Markets and Deep Reinforcement Learning [1.8434042562191815]
Transactive energy, implemented through local energy markets, has recently garnered attention as a promising solution to grid challenges.
This study addresses the gap by training a set of deep reinforcement learning agents to automate end-user participation in ALEX.
The study unveils a clear correlation between bill reduction and reduced net load variability in this setup.
arXiv Detail & Related papers (2024-04-19T19:03:33Z) - EnergAIze: Multi Agent Deep Deterministic Policy Gradient for Vehicle to Grid Energy Management [0.0]
This paper introduces EnergAIze, a Multi-Agent Reinforcement Learning (MARL) energy management framework.
It enables user-centric and multi-objective energy management by allowing each prosumer to select from a range of personal management objectives.
The efficacy of EnergAIze was evaluated through case studies employing the CityLearn simulation framework.
arXiv Detail & Related papers (2024-04-02T23:16:17Z) - Explainable Reinforcement Learning-based Home Energy Management Systems using Differentiable Decision Trees [4.573008040057806]
The residential sector is another major and largely untapped source of flexibility, driven by the increased adoption of solar PV, home batteries, and EVs.
We introduce a reinforcement learning-based approach using differentiable decision trees.
This approach integrates the scalability of data-driven reinforcement learning with the explainability of (differentiable) decision trees.
As a proof-of-concept, we analyze our method using a home energy management problem, comparing its performance with a commercially available rule-based baseline and standard neural network-based RL controllers (a schematic sketch of a differentiable soft decision tree is given after this list).
arXiv Detail & Related papers (2024-03-18T16:40:41Z) - A Safe Genetic Algorithm Approach for Energy Efficient Federated
Learning in Wireless Communication Networks [53.561797148529664]
Federated Learning (FL) has emerged as a decentralized technique where, contrary to traditional centralized approaches, devices perform model training in a collaborative manner.
Despite the existing efforts made in FL, its environmental impact is still under investigation, since several critical challenges regarding its applicability to wireless networks have been identified.
The current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization.
arXiv Detail & Related papers (2023-06-25T13:10:38Z) - Distributed Energy Management and Demand Response in Smart Grids: A
Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Empowering Prosumer Communities in Smart Grid with Wireless
Communications and Federated Edge Learning [5.289693272967054]
The exponential growth of distributed energy resources is enabling the transformation of traditional consumers in the smart grid into prosumers.
We propose a multi-level pro-decision framework for prosumer communities to achieve collective goals.
In addition to preserving prosumers' privacy, we show through evaluations that training prediction models using Federated Learning yields high accuracy for different energy resources.
arXiv Detail & Related papers (2021-04-07T14:57:57Z) - Investigating Underlying Drivers of Variability in Residential Energy
Usage Patterns with Daily Load Shape Clustering of Smart Meter Data [53.51471969978107]
Large-scale deployment of smart meters has motivated increasing studies to explore disaggregated daily load patterns.
This paper aims to shed light on the mechanisms by which electricity consumption patterns exhibit variability.
arXiv Detail & Related papers (2021-02-16T16:56:27Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A
Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable
Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z) - Data-driven control of micro-climate in buildings: an event-triggered
reinforcement learning approach [56.22460188003505]
We formulate the micro-climate control problem based on semi-Markov decision processes.
We propose two learning algorithms for event-triggered control of micro-climate in buildings.
We show the efficacy of our proposed approach via designing a smart learning thermostat.
arXiv Detail & Related papers (2020-01-28T18:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.