EnergAIze: Multi Agent Deep Deterministic Policy Gradient for Vehicle to Grid Energy Management
- URL: http://arxiv.org/abs/2404.02361v2
- Date: Tue, 9 Apr 2024 16:32:22 GMT
- Title: EnergAIze: Multi Agent Deep Deterministic Policy Gradient for Vehicle to Grid Energy Management
- Authors: Tiago Fonseca, Luis Ferreira, Bernardo Cabral, Ricardo Severino, Isabel Praca
- Abstract summary: This paper introduces EnergAIze, a Multi-Agent Reinforcement Learning (MARL) energy management framework.
It enables user-centric and multi-objective energy management by allowing each prosumer to select from a range of personal management objectives.
The efficacy of EnergAIze was evaluated through case studies employing the CityLearn simulation framework.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the increasing roles of Renewable Energy Sources (RES) and Electric Vehicles (EVs). While signaling a new era of sustainable energy, these also introduce complex challenges, including the need to balance supply and demand and to smooth peak consumption amidst rising EV adoption rates. Addressing these challenges requires innovative solutions such as Demand Response (DR), energy flexibility management, Renewable Energy Communities (RECs), and, more specifically for EVs, Vehicle-to-Grid (V2G). However, existing V2G approaches often fall short in real-world adaptability, global REC optimization with other flexible assets, scalability, and user engagement. To bridge this gap, this paper introduces EnergAIze, a Multi-Agent Reinforcement Learning (MARL) energy management framework leveraging the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. EnergAIze enables user-centric and multi-objective energy management by allowing each prosumer to select from a range of personal management objectives, thus encouraging engagement. Additionally, it provides data protection and ownership through decentralized computing, where each prosumer can situate an energy management optimization node directly at their own dwelling. The local node not only manages local energy assets but also fosters REC-wide optimization. The efficacy of EnergAIze was evaluated through case studies employing the CityLearn simulation framework. These simulations were instrumental in demonstrating EnergAIze's adeptness at implementing V2G technology within a REC alongside other energy assets. The results show reductions in peak loads, ramping, carbon emissions, and electricity costs at the REC level while optimizing for individual prosumers' objectives.
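The core of MADDPG, as used by EnergAIze, is a decentralized actor per agent (here, per prosumer node) paired with a centralized critic that, during training, scores the joint observation-action vector of all agents. The following NumPy sketch shows only that structural split; the agent count, dimensions, and linear/tanh policies are illustrative assumptions, not the paper's implementation, and the weights are untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes: 3 prosumer nodes, 4 local observations, 1 continuous action each.
N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 1

# Decentralized actors: each agent maps ONLY its own observation to an action.
actor_W = [rng.normal(size=(ACT_DIM, OBS_DIM)) * 0.1 for _ in range(N_AGENTS)]

def act(i, obs):
    """Deterministic policy of agent i; tanh keeps the action in [-1, 1]."""
    return np.tanh(actor_W[i] @ obs)

# Centralized critic: sees every agent's observation and action during training.
JOINT_DIM = N_AGENTS * (OBS_DIM + ACT_DIM)
critic_W = rng.normal(size=(JOINT_DIM,)) * 0.1

def q_value(all_obs, all_acts):
    """Q estimate for the joint state-action; a linear stand-in for a network."""
    joint = np.concatenate([np.concatenate([o, a]) for o, a in zip(all_obs, all_acts)])
    return float(critic_W @ joint)

obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = [act(i, obs[i]) for i in range(N_AGENTS)]
q = q_value(obs, acts)
```

At execution time only the actors are needed, which is what allows each prosumer's node to run its policy locally while the critic is used for REC-wide training.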
Related papers
- CityLearn v2: Energy-flexible, resilient, occupant-centric, and carbon-aware management of grid-interactive communities [8.658740257657564]
CityLearn provides an environment for benchmarking simple and advanced distributed energy resource control algorithms.
This work details the v2 environment design and provides application examples that utilize reinforcement learning to manage battery energy storage system charging/discharging cycles, vehicle-to-grid control, and thermal comfort during heat pump power modulation.
arXiv Detail & Related papers (2024-05-02T16:31:09Z) - An Online Hierarchical Energy Management System for Energy Communities, Complying with the Current Technical Legislation Framework [4.4269011841945085]
In 2018, the European Union (EU) defined the Renewable Energy Community (REC) as a local electrical grid whose participants share their self-produced renewable energy.
Since a REC is technically an SG, the strategies above can be applied, and specifically, practical Energy Management Systems (EMSs) are required.
In this work, an online Hierarchical EMS (HEMS) is synthesized for REC cost minimization to evaluate its superiority over a local self-consumption approach.
arXiv Detail & Related papers (2024-01-22T15:29:54Z) - Deep Reinforcement Learning-Based Battery Conditioning Hierarchical V2G Coordination for Multi-Stakeholder Benefits [3.4529246211079645]
This study proposes a multi-stakeholder hierarchical V2G coordination based on deep reinforcement learning (DRL) and the Proof of Stake algorithm.
The multi-stakeholders include the power grid, EV aggregators (EVAs), and users, and the proposed strategy can achieve multi-stakeholder benefits.
arXiv Detail & Related papers (2023-08-01T01:19:56Z) - Combating Uncertainties in Wind and Distributed PV Energy Sources Using Integrated Reinforcement Learning and Time-Series Forecasting [2.774390661064003]
The unpredictability of renewable energy generation poses challenges for electricity providers and utility companies.
We propose a novel framework with two objectives: (i) combating uncertainty of renewable energy in smart grid by leveraging time-series forecasting with Long-Short Term Memory (LSTM) solutions, and (ii) establishing distributed and dynamic decision-making framework with multi-agent reinforcement learning using Deep Deterministic Policy Gradient (DDPG) algorithm.
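The forecasting half of that framework hinges on running an LSTM over a recent window of generation data and reading out a one-step-ahead prediction. A minimal NumPy sketch of a single LSTM cell doing exactly that is below; the hidden size, the toy PV-like series, and the random (untrained) weights are all assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8  # hidden size (assumed)

# Randomly initialised LSTM cell weights; a real forecaster would train these.
Wx = rng.normal(size=(4 * H, 1)) * 0.1   # input weights for i, f, g, o gates
Wh = rng.normal(size=(4 * H, H)) * 0.1   # recurrent weights
b = np.zeros(4 * H)
w_out = rng.normal(size=H) * 0.1         # linear read-out to a scalar forecast

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forecast(series):
    """Run the LSTM cell over the series; return a one-step-ahead prediction."""
    h, c = np.zeros(H), np.zeros(H)
    for x in series:
        z = Wx @ np.array([x]) + Wh @ h + b
        i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # update cell state
        h = sigmoid(o) * np.tanh(c)                   # update hidden state
    return float(w_out @ h)

# Toy 24-step sinusoidal profile standing in for a daily PV curve (assumed data).
series = np.sin(np.linspace(0, 2 * np.pi, 24))
forecast = lstm_forecast(series)
```

In the proposed framework such a forecast would then be fed to the DDPG-based decision layer as part of each agent's observation.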
arXiv Detail & Related papers (2023-02-27T19:12:50Z) - Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z) - A Multi-Agent Deep Reinforcement Learning Approach for a Distributed Energy Marketplace in Smart Grids [58.666456917115056]
This paper presents a Reinforcement Learning based energy market for a prosumer dominated microgrid.
The proposed market model facilitates a real-time and demand-dependent dynamic pricing environment, which reduces grid costs and improves the economic benefits for prosumers.
arXiv Detail & Related papers (2020-09-23T02:17:51Z) - Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach [82.6692222294594]
We study a risk-aware energy scheduling problem for a microgrid-powered MEC network.
We derive the solution by applying a multi-agent deep reinforcement learning (MADRL)-based advantage actor-critic (A3C) algorithm with shared neural networks.
arXiv Detail & Related papers (2020-02-21T02:14:38Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z) - NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce this cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.