TreeC: a method to generate interpretable energy management systems
using a metaheuristic algorithm
- URL: http://arxiv.org/abs/2304.08310v1
- Date: Mon, 17 Apr 2023 14:27:19 GMT
- Title: TreeC: a method to generate interpretable energy management systems
using a metaheuristic algorithm
- Authors: Julian Ruddick, Luis Ramirez Camargo, Muhammad Andy Putratama, Maarten
Messagie, Thierry Coosemans
- Abstract summary: Energy management systems (EMS) have classically been implemented based on rule-based control (RBC) and model predictive control (MPC) methods.
Recent research is investigating reinforcement learning (RL) as a promising new approach.
This paper introduces TreeC, a machine learning method that generates an interpretable EMS modeled as a decision tree.
- Score: 0.9449650062296824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Energy management systems (EMS) have classically been implemented based on
rule-based control (RBC) and model predictive control (MPC) methods. Recent
research is investigating reinforcement learning (RL) as a promising new
approach. This paper introduces TreeC, a machine learning method that uses the
metaheuristic algorithm covariance matrix adaptation evolution strategy
(CMA-ES) to generate an interpretable EMS modeled as a decision tree. The
method learns the decision strategy of the EMS from historical data, in
contrast to RBC and MPC approaches, which are typically considered
non-adaptive solutions. Because the decision strategy of the EMS is modeled as
a decision tree, it is interpretable, unlike RL, which mainly relies on
black-box models (e.g. neural networks). The TreeC method is compared to RBC,
MPC and RL strategies in two case studies taken from the literature: (1) an
electric grid case and (2) a household heating case. The results show that
TreeC performs close to MPC with perfect forecast in both cases, matches RL in
the electric grid case, and outperforms RL in the household heating case.
TreeC thus demonstrates that machine learning can be applied to energy
management systems in a way that is both high-performing and fully
interpretable.
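The abstract describes TreeC as a CMA-ES search over the parameters of a fixed-structure decision tree, chosen so that the resulting policy minimizes an operating cost on historical data. The sketch below illustrates that idea only under stated assumptions: it uses the pycma package ("pip install cma") and a toy battery-dispatch model with hypothetical prices and loads; the tree encoding, cost function, and data are illustrative and are not the paper's actual setup.

```python
# Minimal sketch, not the authors' implementation: CMA-ES tunes a small
# decision-tree EMS on (synthetic) historical data. All model details below
# are illustrative assumptions.
import numpy as np
import cma  # pycma package, assumed available


def tree_policy(params, price, soc):
    """Depth-2 decision tree: thresholds on price and state of charge (soc),
    four leaf actions in [-1, 1] (negative = discharge, positive = charge)."""
    t_price, t_soc, a0, a1, a2, a3 = params
    if price < t_price:
        leaf = a0 if soc < t_soc else a1
    else:
        leaf = a2 if soc < t_soc else a3
    return float(np.clip(leaf, -1.0, 1.0))


def cost_on_history(params, prices, loads, capacity=10.0, power=3.0):
    """Roll the tree policy over historical hourly data; return the energy bill."""
    soc, bill = 0.5, 0.0
    for price, load in zip(prices, loads):
        action = tree_policy(params, price, soc)          # fraction of rated power
        energy = np.clip(action * power,                  # keep soc within [0, 1]
                         -soc * capacity, (1.0 - soc) * capacity)
        soc += energy / capacity
        bill += price * max(load + energy, 0.0)           # no export remuneration
    return bill


# Hypothetical data standing in for the historical measurements used in the paper.
rng = np.random.default_rng(0)
prices = 0.10 + 0.10 * rng.random(720)   # EUR/kWh, one month of hourly prices
loads = 1.0 + rng.random(720)            # kWh consumed per hour

# CMA-ES searches the 6-dimensional parameter vector (2 thresholds + 4 leaves).
es = cma.CMAEvolutionStrategy([0.15, 0.5, 0.0, 0.0, 0.0, 0.0], 0.3,
                              {"maxiter": 60, "verbose": -9})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [cost_on_history(p, prices, loads) for p in candidates])

print("optimized tree parameters:", np.round(es.result.xbest, 3))
```

Because the optimized object is just six numbers defining two thresholds and four leaf actions, the resulting controller can be read directly as if/else rules, which is the interpretability argument made in the abstract.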
Related papers
- Real-world validation of safe reinforcement learning, model predictive control and decision tree-based home energy management systems [0.8480931990442769]
This paper presents the real-world validation of machine learning based energy management approaches.
The experiments were conducted on the electrical installation of 4 reproductions of residential houses.
arXiv Detail & Related papers (2024-08-14T10:12:15Z)
- Robust Model Based Reinforcement Learning Using $\mathcal{L}_1$ Adaptive Control [4.88489286130994]
We introduce a control-theoretic augmentation scheme for Model-Based Reinforcement Learning (MBRL) algorithms.
MBRL algorithms learn a model of the transition function using data and use it to design a control input.
Our approach generates a series of approximate control-affine models of the learned transition function according to the proposed switching law.
arXiv Detail & Related papers (2024-03-21T22:15:09Z)
- Go Beyond Black-box Policies: Rethinking the Design of Learning Agent for Interpretable and Verifiable HVAC Control [3.326392645107372]
We overcome the bottleneck by redesigning HVAC controllers using decision trees extracted from thermal dynamics models and historical data.
Our method saves 68.4% more energy and increases human comfort gain by 14.8% compared to the state-of-the-art method.
arXiv Detail & Related papers (2024-02-29T22:42:23Z)
- WARM: On the Benefits of Weight Averaged Reward Models [63.08179139233774]
We propose Weight Averaged Reward Models (WARM) to mitigate reward hacking.
Experiments on summarization tasks, using best-of-N and RL methods, show that WARM improves the overall quality and alignment of LLM predictions.
arXiv Detail & Related papers (2024-01-22T18:27:08Z)
- A Comparison of Model-Free and Model Predictive Control for Price Responsive Water Heaters [7.579687492224987]
We present a comparison of two model-free control algorithms, evolution strategies (ES) and proximal policy optimization (PPO), with receding horizon model predictive control (MPC).
Four MPC variants are considered: a one-shot controller with perfect forecasting yielding optimal control; a limited-horizon controller with perfect forecasting; a mean-forecast controller; and a two-stage programming controller using historical scenarios.
We show that both ES and PPO learn good general purpose policies that outperform mean forecast and two-stage MPC controllers in terms of average cost and are more than two orders of magnitude faster at computing actions.
arXiv Detail & Related papers (2021-11-08T18:06:43Z)
- Does Explicit Prediction Matter in Energy Management Based on Deep Reinforcement Learning? [2.82357668338266]
We present the standard DRL-based energy management scheme with and without prediction.
The simulation results demonstrate that the energy management scheme without prediction is superior to the scheme with prediction.
This work intends to rectify the misuse of DRL methods in the field of energy management.
arXiv Detail & Related papers (2021-08-11T08:52:42Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can well adapt to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Controlling Rayleigh-Bénard convection via Reinforcement Learning [62.997667081978825]
The identification of effective control strategies to suppress or enhance the convective heat exchange under fixed external thermal gradients is an outstanding fundamental and technological issue.
In this work, we explore a novel approach, based on a state-of-the-art Reinforcement Learning (RL) algorithm.
We show that our RL-based control is able to stabilize the conductive regime and delay the onset of convection to a higher Rayleigh number.
arXiv Detail & Related papers (2020-03-31T16:39:25Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce this cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.