Reinforcement Learning for Efficient Design and Control Co-optimisation of Energy Systems
- URL: http://arxiv.org/abs/2406.19825v1
- Date: Fri, 28 Jun 2024 11:01:02 GMT
- Title: Reinforcement Learning for Efficient Design and Control Co-optimisation of Energy Systems
- Authors: Marine Cauz, Adrien Bolland, Nicolas Wyrsch, Christophe Ballif
- Abstract summary: This study introduces a novel reinforcement learning (RL) framework tailored for the co-optimisation of design and control in energy systems.
By leveraging RL's model-free capabilities, the framework eliminates the need for explicit system modelling.
This contribution paves the way for advanced RL applications in energy management, leading to more efficient and effective use of renewable energy sources.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ongoing energy transition drives the development of decentralised renewable energy sources, which are heterogeneous and weather-dependent, complicating their integration into energy systems. This study tackles this issue by introducing a novel reinforcement learning (RL) framework tailored for the co-optimisation of design and control in energy systems. Traditionally, the integration of renewable sources in the energy sector has relied on complex mathematical modelling and sequential processes. By leveraging RL's model-free capabilities, the framework eliminates the need for explicit system modelling. By optimising both control and design policies jointly, the framework enhances the integration of renewable sources and improves system efficiency. This contribution paves the way for advanced RL applications in energy management, leading to more efficient and effective use of renewable energy sources.
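The paper does not include an implementation, and the abstract only sketches the idea of optimising design and control jointly rather than sequentially. The snippet below is a minimal, hypothetical illustration of that idea: a design variable (a toy battery capacity) and a linear control policy are updated together with a one-sample score-function (REINFORCE-style) gradient on an invented PV-plus-battery day. The PV profile, tariff, cost terms, policy form, and hyperparameters are all assumptions made for illustration, not the authors' method.
```python
# Minimal sketch (not the authors' code): jointly optimising a design variable
# (battery capacity) and a control policy with a score-function gradient.
# Environment details (PV profile, prices, costs) are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
T = 24                                                      # hours in one episode
pv = 5.0 * np.clip(np.sin(np.linspace(0, np.pi, T)), 0, None)           # toy PV output [kW]
load = np.full(T, 3.0)                                      # flat demand [kW]
price = np.where((np.arange(T) >= 18) & (np.arange(T) < 22), 0.4, 0.2)  # grid tariff

def episode_return(capacity, theta):
    """Simulate one day; the controller is a linear policy on (SoC, net load)."""
    soc, cost = 0.5 * capacity, 0.0
    for t in range(T):
        net = load[t] - pv[t]                               # residual demand [kW]
        features = np.array([soc / max(capacity, 1e-6), net, 1.0])
        u = np.tanh(theta @ features)                       # action in [-1, 1]
        power = np.clip(2.0 * u, -(capacity - soc), soc)    # +discharge / -charge
        soc -= power
        cost += price[t] * max(net - power, 0.0)            # pay for grid imports
    return -(cost + 0.05 * capacity)                        # add amortised sizing cost

# Design distribution (capacity ~ N(mu, sigma_d)) and control parameters theta
mu, theta = 5.0, np.zeros(3)
sigma_d, sigma_c = 1.0, 0.1
for _ in range(3000):
    cap = max(rng.normal(mu, sigma_d), 0.1)                 # sample a candidate design
    th = theta + sigma_c * rng.normal(size=3)               # perturb the controller
    adv = episode_return(cap, th) - episode_return(mu, theta)   # simple baseline
    mu += 0.05 * adv * (cap - mu) / sigma_d**2              # update the design...
    theta += 0.01 * adv * (th - theta) / sigma_c**2         # ...and the control policy
print(f"co-optimised capacity ~ {mu:.2f} kWh, policy weights {theta}")
```
Because the same episode return drives both updates, a larger (more expensive) design is only favoured if the learned control policy can actually exploit it, which is the intuition behind co-optimising design and control rather than sizing the system first and tuning the controller afterwards.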
Related papers
- Data-driven modeling and supervisory control system optimization for plug-in hybrid electric vehicles [16.348774515562678]
Learning-based intelligent energy management systems for plug-in hybrid electric vehicles (PHEVs) are crucial for achieving efficient energy utilization.
Their real-world application faces system reliability challenges, which prevent widespread acceptance by original equipment manufacturers (OEMs).
This paper proposes a real-vehicle application-oriented control framework, combining horizon-extended reinforcement learning (RL)-based energy management with the equivalent consumption minimization strategy (ECMS) to enhance practical applicability.
arXiv Detail & Related papers (2024-06-13T13:04:42Z) - Empowering Distributed Solutions in Renewable Energy Systems and Grid Optimization [3.8979646385036175]
Machine learning (ML) advancements play a crucial role in empowering renewable energy sources and improving grid management.
The incorporation of big data and ML into smart grids offers several advantages, including heightened energy efficiency.
However, challenges like handling large data volumes, ensuring cybersecurity, and obtaining specialized expertise must be addressed.
arXiv Detail & Related papers (2023-10-24T02:45:16Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - Deep Reinforcement Learning-driven Cross-Community Energy Interaction Optimal Scheduling [15.410849325499017]
This paper proposes a comprehensive scheduling model that utilizes a multi-agent deep reinforcement learning algorithm to learn load characteristics of different communities.
It leads to a reduction in wind curtailment rate from 16.3% to 0% and lowers the overall operating cost by 5445.6 Yuan.
arXiv Detail & Related papers (2023-08-24T04:42:18Z) - On Feature Diversity in Energy-based Models [98.78384185493624]
An energy-based model (EBM) is typically formed of inner-model(s) that learn a combination of the different features to generate an energy mapping for each input configuration.
We extend the probably approximately correct (PAC) theory of EBMs and analyze the effect of redundancy reduction on the performance of EBMs.
arXiv Detail & Related papers (2023-06-02T12:30:42Z) - Combating Uncertainties in Wind and Distributed PV Energy Sources Using Integrated Reinforcement Learning and Time-Series Forecasting [2.774390661064003]
The unpredictability of renewable energy generation poses challenges for electricity providers and utility companies.
We propose a novel framework with two objectives: (i) combating the uncertainty of renewable energy in the smart grid by leveraging time-series forecasting with Long Short-Term Memory (LSTM) solutions, and (ii) establishing a distributed and dynamic decision-making framework with multi-agent reinforcement learning using the Deep Deterministic Policy Gradient (DDPG) algorithm.
arXiv Detail & Related papers (2023-02-27T19:12:50Z) - GP CC-OPF: Gaussian Process based optimization tool for Chance-Constrained Optimal Power Flow [54.94701604030199]
The Gaussian Process (GP) based Chance-Constrained Optimal Power Flow (CC-OPF) is an open-source Python tool for the economic dispatch (ED) problem in power grids.
The developed tool presents a novel data-driven approach based on the CC-OPF model for solving the large regression problem with a trade-off between complexity and accuracy.
arXiv Detail & Related papers (2023-02-16T17:59:06Z) - Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
arXiv Detail & Related papers (2022-11-29T01:18:58Z) - Battery and Hydrogen Energy Storage Control in a Smart Energy Network with Flexible Energy Demand using Deep Reinforcement Learning [2.5666730153464465]
We introduce a hybrid energy storage system composed of battery and hydrogen energy storage.
We propose a deep reinforcement learning-based control strategy to optimise the scheduling of the hybrid energy storage system and energy demand in real-time.
arXiv Detail & Related papers (2022-08-26T16:47:48Z) - Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori--a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z) - Optimization-Inspired Learning with Architecture Augmentations and Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.