Control of Renewable Energy Communities using AI and Real-World Data
- URL: http://arxiv.org/abs/2505.17321v1
- Date: Thu, 22 May 2025 22:20:09 GMT
- Title: Control of Renewable Energy Communities using AI and Real-World Data
- Authors: Tiago Fonseca, Clarisse Sousa, Ricardo Venâncio, Pedro Pires, Ricardo Severino, Paulo Rodrigues, Pedro Paiva, Luis Lino Ferreira
- Abstract summary: This paper introduces a framework designed explicitly to handle these complexities and bridge the simulation-to-reality gap. It incorporates EnergAIze, a MADDPG-based multi-agent control strategy, and specifically addresses challenges related to real-world data collection, system integration, and user behavior modeling.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The electrification of transportation and the increased adoption of decentralized renewable energy generation have added complexity to managing Renewable Energy Communities (RECs). Integrating Electric Vehicle (EV) charging with building energy systems like heating, ventilation, air conditioning (HVAC), photovoltaic (PV) generation, and battery storage presents significant opportunities but also practical challenges. Reinforcement learning (RL), particularly MultiAgent Deep Deterministic Policy Gradient (MADDPG) algorithms, have shown promising results in simulation, outperforming heuristic control strategies. However, translating these successes into real-world deployments faces substantial challenges, including incomplete and noisy data, integration of heterogeneous subsystems, synchronization issues, unpredictable occupant behavior, and missing critical EV state-of-charge (SoC) information. This paper introduces a framework designed explicitly to handle these complexities and bridge the simulation to-reality gap. The framework incorporates EnergAIze, a MADDPG-based multi-agent control strategy, and specifically addresses challenges related to real-world data collection, system integration, and user behavior modeling. Preliminary results collected from a real-world operational REC with four residential buildings demonstrate the practical feasibility of our approach, achieving an average 9% reduction in daily peak demand and a 5% decrease in energy costs through optimized load scheduling and EV charging behaviors. These outcomes underscore the framework's effectiveness, advancing the practical deployment of intelligent energy management solutions in RECs.
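The abstract reports a 9% peak-demand reduction from coordinated load scheduling and EV charging across four buildings. The following is a minimal illustrative sketch of that coordination effect only; it is not the paper's MADDPG algorithm (which uses learned actors and a centralized critic), and all building loads, horizons, and function names here are hypothetical.

```python
# Illustrative sketch: coordinated vs. uncoordinated EV charging in a small
# renewable energy community (REC). All numbers are hypothetical; a MADDPG
# controller would learn this shifting behavior rather than use a greedy rule.

def peak_demand(loads):
    """Community peak (kW): max over hours of the summed per-building loads."""
    return max(sum(step) for step in zip(*loads))

def flatten_schedule(demand, capacity):
    """Greedy actor: place `capacity` kWh of flexible load (e.g. EV charging)
    into the lowest-demand hours, at most 1 kW per hour slot."""
    order = sorted(range(len(demand)), key=lambda h: demand[h])
    schedule = [0.0] * len(demand)
    remaining = capacity
    for h in order:
        put = min(1.0, remaining)
        schedule[h] = put
        remaining -= put
        if remaining <= 0:
            break
    return schedule

# Four buildings, fixed base load over a 6-hour horizon (kW), with an evening
# peak in hours 2-3, plus 2 kWh of flexible EV charging each to place.
base = [
    [1.0, 1.2, 2.5, 2.4, 1.1, 0.9],
    [0.8, 1.0, 2.2, 2.6, 1.0, 0.8],
    [1.1, 0.9, 2.4, 2.3, 1.2, 1.0],
    [0.9, 1.1, 2.3, 2.5, 0.9, 1.1],
]

# Uncoordinated policy: every building charges during the evening peak.
naive = [[b[h] + (1.0 if h in (2, 3) else 0.0) for h in range(6)] for b in base]

# Coordinated policy: each actor shifts its flexible load toward low-demand
# hours of the community aggregate, mimicking what a shared critic rewards.
aggregate = [sum(b[h] for b in base) for h in range(6)]
coordinated = []
for b in base:
    shift = flatten_schedule(aggregate, capacity=2.0)
    coordinated.append([b[h] + shift[h] for h in range(6)])

print(peak_demand(naive))        # peak with uncoordinated charging
print(peak_demand(coordinated))  # lower peak after load shifting
```

In this toy instance the coordinated schedule moves all charging into the off-peak shoulders, cutting the community peak well beyond the paper's reported 9%; real buildings have far noisier loads and SoC constraints, which is precisely the gap the framework targets.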
Related papers
- Data-Driven Policy Mapping for Safe RL-based Energy Management Systems [6.185645393091031]
We present a three-step reinforcement learning-based Building Energy Management System (BEMS) that combines clustering, forecasting, and constrained policy learning. Evaluated on real-world data, our approach reduces operating costs by up to 15% for certain building types. Overall, this framework delivers scalable, robust, and cost-effective building energy management.
arXiv Detail & Related papers (2025-06-19T14:29:48Z) - Joint Resource Management for Energy-efficient UAV-assisted SWIPT-MEC: A Deep Reinforcement Learning Approach [50.52139512096988]
6G Internet of Things (IoT) networks face challenges in remote areas and disaster scenarios where ground infrastructure is unavailable. This paper proposes a novel unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) system enhanced by directional antennas to provide both computational and energy support for ground edge terminals.
arXiv Detail & Related papers (2025-05-06T06:46:19Z) - H-FLTN: A Privacy-Preserving Hierarchical Framework for Electric Vehicle Spatio-Temporal Charge Prediction [8.183121832206556]
Electric Vehicles (EVs) pose critical challenges for energy providers, particularly in predicting charging time (temporal prediction). This paper introduces the Hierarchical Learning Transformer Network framework to address these challenges. Its integration into real-world smart city infrastructure enhances energy demand forecasting, resource allocation, and grid stability.
arXiv Detail & Related papers (2025-02-25T23:20:53Z) - Reinforcement Learning-based Approach for Vehicle-to-Building Charging with Heterogeneous Agents and Long Term Rewards [3.867907469895697]
We introduce a novel RL framework that combines the Deep Deterministic Policy Gradient approach with action masking and efficient MILP-driven policy guidance. Our approach balances the exploration of continuous action spaces to meet user charging demands. Our results show that the proposed approach is one of the first scalable and general approaches to solving the V2B energy management challenge.
arXiv Detail & Related papers (2025-02-24T19:24:41Z) - EnergAIze: Multi Agent Deep Deterministic Policy Gradient for Vehicle to Grid Energy Management [0.0]
This paper introduces EnergAIze, a Multi-Agent Reinforcement Learning (MARL) energy management framework.
It enables user-centric and multi-objective energy management by allowing each prosumer to select from a range of personal management objectives.
The efficacy of EnergAIze was evaluated through case studies employing the CityLearn simulation framework.
arXiv Detail & Related papers (2024-04-02T23:16:17Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori knowledge, a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z) - Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z) - NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.