TreeC: a method to generate interpretable energy management systems
using a metaheuristic algorithm
- URL: http://arxiv.org/abs/2304.08310v1
- Date: Mon, 17 Apr 2023 14:27:19 GMT
- Title: TreeC: a method to generate interpretable energy management systems
using a metaheuristic algorithm
- Authors: Julian Ruddick, Luis Ramirez Camargo, Muhammad Andy Putratama, Maarten
Messagie, Thierry Coosemans
- Abstract summary: Energy management systems (EMS) have classically been implemented based on rule-based control (RBC) and model predictive control (MPC) methods.
Recent research is investigating reinforcement learning (RL) as a promising new approach.
This paper introduces TreeC, a machine learning method that generates an interpretable EMS modeled as a decision tree.
- Score: 0.9449650062296824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Energy management systems (EMS) have classically been implemented
based on rule-based control (RBC) and model predictive control (MPC) methods.
Recent research is investigating reinforcement learning (RL) as a promising
new approach. This paper introduces TreeC, a machine learning method that uses
the metaheuristic algorithm covariance matrix adaptation evolution strategy
(CMA-ES) to generate an interpretable EMS modeled as a decision tree. The
method learns the decision strategy of the EMS from historical data, unlike
RBC and MPC approaches, which are typically considered non-adaptive solutions.
Because the decision strategy is modeled as a decision tree, it is
interpretable, in contrast to RL, which mainly relies on black-box models
(e.g., neural networks). The TreeC method is compared to RBC, MPC, and RL
strategies in two case studies taken from the literature: (1) an electric grid
case and (2) a household heating case. The results show that TreeC performs
close to MPC with a perfect forecast in both cases, matches RL in the electric
grid case, and outperforms RL in the household heating case. TreeC
demonstrates a performant and fully interpretable application of machine
learning to energy management systems.
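To make the method concrete, here is a minimal sketch of the idea of tuning a
fixed-shape decision tree with CMA-ES on historical data. It is an
assumption-laden illustration, not the authors' implementation: the heap-style
tree encoding, the toy data, and the squared-error objective are all
hypothetical, and a real EMS objective would score energy cost or comfort over
the historical horizon.

```python
# Sketch: evolve a depth-2 decision-tree policy with CMA-ES (pip install cma).
# Everything below (encoding, features, objective) is illustrative only.
import numpy as np
import cma

DEPTH = 2                       # 3 internal nodes, 4 leaves (heap layout)
N_NODES, N_LEAVES = 2**DEPTH - 1, 2**DEPTH
N_FEATURES = 3                  # hypothetical inputs, e.g. price, SoC, demand

def tree_action(params, x):
    """Route observation x through the tree encoded in a flat vector."""
    feats = params[:N_NODES]            # which feature each node tests
    thresh = params[N_NODES:2*N_NODES]  # split threshold per node
    leaves = params[2*N_NODES:]         # one control action per leaf
    node = 0
    for _ in range(DEPTH):
        f = int(abs(feats[node])) % N_FEATURES
        node = 2*node + (1 if x[f] > thresh[node] else 2)
    return leaves[node - N_NODES]

def cost(params, history):
    """Toy surrogate: squared deviation from a reference action sequence."""
    return sum((tree_action(params, x) - a)**2 for x, a in history)

rng = np.random.default_rng(0)  # stand-in for real historical data
history = [(rng.normal(size=N_FEATURES), rng.normal()) for _ in range(50)]

es = cma.CMAEvolutionStrategy(np.zeros(2*N_NODES + N_LEAVES), 0.5,
                              {'maxiter': 100, 'verbose': -9})
while not es.stop():
    cands = es.ask()
    es.tell(cands, [cost(p, history) for p in cands])
print("best objective:", es.result.fbest)
```

In TreeC itself, the evaluated cost comes from simulating the candidate EMS on
the historical episodes; the surrogate above merely exercises the optimization
loop.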
Related papers
- Comparison of Model Predictive Control and Proximal Policy Optimization for a 1-DOF Helicopter System [0.7499722271664147]
This study conducts a comparative analysis of Model Predictive Control (MPC) and Proximal Policy Optimization (PPO), a Deep Reinforcement Learning (DRL) algorithm, applied to a Quanser Aero 2 system.
PPO excels in rise time and adaptability, making it a promising approach for applications that require rapid response.
arXiv Detail & Related papers (2024-08-28T08:35:34Z)
- Empirical Analysis of AI-based Energy Management in Electric Vehicles: A Case Study on Reinforcement Learning [9.65075615023066]
Reinforcement learning-based (RL-based) energy management strategies (EMS) are considered a promising solution for the energy management of electric vehicles with multiple power sources.
This paper presents an empirical analysis of RL-based EMS in a Plug-in Hybrid Electric Vehicle (PHEV) and a Fuel Cell Electric Vehicle (FCEV).
arXiv Detail & Related papers (2022-12-18T20:12:20Z)
- Progress and summary of reinforcement learning on energy management of MPS-EV [4.0629930354376755]
The energy management strategy (EMS) is a critical technology for multi-power-source electric vehicles (MPS-EVs) to maximize efficiency, fuel economy, and range.
This paper presents an in-depth analysis of the current research on RL-based EMS and summarizes the design elements of RL-based EMS.
arXiv Detail & Related papers (2022-11-08T04:49:32Z)
- GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond [101.5329678997916]
We study sample efficient reinforcement learning (RL) under the general framework of interactive decision making.
We propose a novel complexity measure, generalized eluder coefficient (GEC), which characterizes the fundamental tradeoff between exploration and exploitation.
We show that RL problems with low GEC form a remarkably rich class, which subsumes low Bellman eluder dimension problems, bilinear class, low witness rank problems, PO-bilinear class, and generalized regular PSR.
arXiv Detail & Related papers (2022-11-03T16:42:40Z)
- Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning [92.18524491615548]
Contrastive self-supervised learning has been successfully integrated into the practice of (deep) reinforcement learning (RL).
We study how RL can be empowered by contrastive learning in a class of Markov decision processes (MDPs) and Markov games (MGs) with low-rank transitions.
Under the online setting, we propose novel upper confidence bound (UCB)-type algorithms that incorporate such a contrastive loss with online RL algorithms for MDPs or MGs.
arXiv Detail & Related papers (2022-07-29T17:29:08Z)
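As a generic illustration of the contrastive ingredient mentioned in the entry
above, the snippet below sketches an InfoNCE-style loss over paired
embeddings. It is a hedged, generic example; the paper's actual encoders, UCB
bonus construction, and guarantees are not reproduced here, and all names are
hypothetical.

```python
# Generic InfoNCE-style contrastive loss: matching (anchor, positive) pairs
# should score higher than all mismatched pairs in the batch.
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                   # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_soft = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_soft))               # diagonal = true pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                         # toy embeddings
print(info_nce(z + 0.01 * rng.normal(size=z.shape), z))  # aligned -> low loss
```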
- Comparative analysis of machine learning methods for active flow control [60.53767050487434]
Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques.
arXiv Detail & Related papers (2022-02-23T18:11:19Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Does Explicit Prediction Matter in Energy Management Based on Deep Reinforcement Learning? [2.82357668338266]
We present the standard DRL-based energy management scheme with and without prediction.
The simulation results demonstrate that the energy management scheme without prediction is superior to the scheme with prediction.
This work intends to rectify the misuse of DRL methods in the field of energy management.
arXiv Detail & Related papers (2021-08-11T08:52:42Z)
- Model-predictive control and reinforcement learning in multi-energy system case studies [0.2810625954925815]
We present an on- and off-policy multi-objective reinforcement learning (RL) approach against a linear model-predictive-control (LMPC) benchmark.
We show that a twin delayed deep deterministic policy gradient (TD3) RL agent offers the potential to match and outperform the perfect-foresight LMPC benchmark (101.5%).
In a more complex MES configuration, the RL agent's performance is generally lower (94.6%), yet still better than that of the realistic LMPC (88.9%).
arXiv Detail & Related papers (2021-04-20T06:51:50Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information-theoretic MPC and entropy-regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
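For context on the entropy-regularized RL side of that connection, the toy
snippet below iterates the "soft" Bellman backup on a random tabular MDP. The
MDP, the inverse temperature, and the fixed-point iteration are illustrative
assumptions, not the paper's algorithm.

```python
# Entropy-regularized ("soft") value iteration on a random tabular MDP:
# V(s) = (1/beta) * log sum_a exp(beta * Q(s, a));  Q = R + gamma * P V.
import numpy as np

n_states, n_actions, gamma, beta = 4, 2, 0.9, 1.0   # beta: inverse temperature
rng = np.random.default_rng(1)
R = rng.normal(size=(n_states, n_actions))          # toy reward table
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']

Q = np.zeros((n_states, n_actions))
for _ in range(200):                                # contraction -> converges
    V = np.log(np.exp(beta * Q).sum(axis=1)) / beta # soft max over actions
    Q = R + gamma * P @ V                           # soft Bellman backup
print("soft-optimal Q:\n", np.round(Q, 3))
```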
This list is automatically generated from the titles and abstracts of the papers on this site.