Explainable Reinforcement Learning-based Home Energy Management Systems using Differentiable Decision Trees
- URL: http://arxiv.org/abs/2403.11947v1
- Date: Mon, 18 Mar 2024 16:40:41 GMT
- Title: Explainable Reinforcement Learning-based Home Energy Management Systems using Differentiable Decision Trees
- Authors: Gargya Gokhale, Bert Claessens, Chris Develder
- Abstract summary: The residential sector is another major and largely untapped source of flexibility, driven by the increased adoption of solar PV, home batteries, and EVs.
We introduce a reinforcement learning-based approach using differentiable decision trees.
This approach integrates the scalability of data-driven reinforcement learning with the explainability of (differentiable) decision trees.
- As a proof-of-concept, we analyze our method using a home energy management problem, comparing its performance with a commercially available rule-based baseline and standard neural network-based RL controllers.
- Score: 4.573008040057806
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the ongoing energy transition, demand-side flexibility has become an important aspect of the modern power grid for providing grid support and allowing further integration of sustainable energy sources. Besides traditional sources, the residential sector is another major and largely untapped source of flexibility, driven by the increased adoption of solar PV, home batteries, and EVs. However, unlocking this residential flexibility is challenging as it requires a control framework that can effectively manage household energy consumption and maintain user comfort while being readily scalable across different, diverse houses. We aim to address this challenging problem and introduce a reinforcement learning-based approach using differentiable decision trees. This approach integrates the scalability of data-driven reinforcement learning with the explainability of (differentiable) decision trees. This leads to a controller that can be easily adapted across different houses and provides a simple control policy that can be explained to end-users, further improving user acceptance. As a proof-of-concept, we analyze our method using a home energy management problem, comparing its performance with a commercially available rule-based baseline and standard neural network-based RL controllers. Through this preliminary study, we show that the performance of our proposed method is comparable to standard RL-based controllers, outperforming baseline controllers by ~20% in terms of daily cost savings while being straightforward to explain.
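To make the core idea concrete, below is a minimal sketch of a differentiable (soft) decision tree policy of the kind the abstract describes: internal nodes apply sigmoid-gated linear splits and leaves hold action logits, so the whole tree can be trained with standard gradient-based RL objectives. The depth, feature set, and action space are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a differentiable (soft) decision tree policy: sigmoid gating
# at internal nodes, action logits at leaves. Illustrative only, not the paper's
# exact architecture.
import torch
import torch.nn as nn


class SoftDecisionTreePolicy(nn.Module):
    def __init__(self, state_dim: int, n_actions: int, depth: int = 2):
        super().__init__()
        self.depth = depth
        n_internal = 2 ** depth - 1          # internal (decision) nodes
        n_leaves = 2 ** depth                # leaf nodes
        # Each internal node learns a linear split over the state features.
        self.splits = nn.Linear(state_dim, n_internal)
        # Each leaf holds a vector of action logits.
        self.leaf_logits = nn.Parameter(torch.zeros(n_leaves, n_actions))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Probability of routing "right" at every internal node.
        gate = torch.sigmoid(self.splits(state))          # (batch, n_internal)
        # Probability of reaching each leaf = product of gate decisions on its path.
        leaf_prob = torch.ones(state.shape[0], 1, device=state.device)
        idx = 0
        for _ in range(self.depth):
            g = gate[:, idx:idx + leaf_prob.shape[1]]     # gates of this tree level
            leaf_prob = torch.cat([leaf_prob * (1 - g), leaf_prob * g], dim=1)
            idx += g.shape[1]
        # Mix leaf action logits by path probabilities -> fully differentiable policy.
        return leaf_prob @ self.leaf_logits               # (batch, n_actions)


# Example: 4 state features (e.g. price, PV, battery SoC, time), 3 battery actions.
policy = SoftDecisionTreePolicy(state_dim=4, n_actions=3)
action_probs = torch.softmax(policy(torch.randn(8, 4)), dim=-1)
```

Because every node is a simple linear split, a trained tree of this form can be read off as a small set of human-interpretable rules, which is what supports the explainability claim.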
Related papers
- CityLearn v2: Energy-flexible, resilient, occupant-centric, and carbon-aware management of grid-interactive communities [8.658740257657564]
CityLearn provides an environment for benchmarking simple and advanced distributed energy resource control algorithms.
This work details the v2 environment design and provides application examples that utilize reinforcement learning to manage battery energy storage system charging/discharging cycles, vehicle-to-grid control, and thermal comfort during heat pump power modulation.
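As a rough illustration of the kind of battery-control benchmarking loop such an environment supports, the sketch below uses a toy Gym-style environment with a time-of-use price and a simple rule-based policy; the environment, prices, and policy are placeholders, not CityLearn's actual API.

```python
# Toy Gym-style battery-control loop, illustrative of DER benchmarking setups.
import numpy as np


class ToyBatteryEnv:
    """One building with a 10 kWh battery, 1 kW base load, and time-of-use prices."""

    def __init__(self, horizon: int = 24):
        self.horizon = horizon
        # Cheap off-peak price, expensive evening peak (hours 17-23), in eur/kWh.
        self.price = np.where(np.arange(horizon) >= 17, 0.40, 0.15)

    def reset(self):
        self.t, self.soc = 0, 0.5                        # hour index, state of charge
        return np.array([self.t, self.soc, self.price[self.t]])

    def step(self, action: float):
        # action in [-1, 1]: negative = discharge, positive = charge.
        new_soc = float(np.clip(self.soc + 0.2 * action, 0.0, 1.0))
        battery_kw = (new_soc - self.soc) * 10.0         # 10 kWh battery, 1 h steps
        self.soc = new_soc
        net_load = 1.0 + battery_kw                      # base load plus battery power
        reward = -self.price[self.t] * max(net_load, 0.0)   # pay only for grid import
        self.t += 1
        done = self.t >= self.horizon
        idx = self.t % self.horizon
        return np.array([idx, self.soc, self.price[idx]]), reward, done, {}


env = ToyBatteryEnv()
obs, done, total_cost = env.reset(), False, 0.0
while not done:
    # Naive rule of thumb: charge when electricity is cheap, discharge when expensive.
    action = 1.0 if obs[2] < 0.25 else -1.0
    obs, reward, done, _ = env.step(action)
    total_cost -= reward
print(f"daily electricity cost: {total_cost:.2f} eur")
```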
arXiv Detail & Related papers (2024-05-02T16:31:09Z)
- Decentralized Coordination of Distributed Energy Resources through Local Energy Markets and Deep Reinforcement Learning [1.8434042562191815]
Transactive energy, implemented through local energy markets, has recently garnered attention as a promising solution to these grid challenges.
This study addresses this gap by training a set of deep reinforcement learning agents to automate end-user participation in ALEX.
The study unveils a clear correlation between bill reduction and reduced net load variability in this setup.
arXiv Detail & Related papers (2024-04-19T19:03:33Z)
- Distill2Explain: Differentiable decision trees for explainable reinforcement learning in energy application controllers [5.311053322050159]
The residential sector is an important (potential) source of energy flexibility.
A promising control framework for such a task is data-driven control, specifically model-free reinforcement learning (RL).
RL agents learn a good control policy by interacting with their environment, purely from data and with minimal human intervention.
We propose a novel method to obtain explainable RL policies by using differentiable decision trees.
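A minimal sketch of the distillation step this suggests is shown below: a trained neural "teacher" policy is imitated by a differentiable decision tree "student" via a KL loss. It reuses the SoftDecisionTreePolicy class sketched earlier; the teacher network and the randomly sampled training states are stand-ins for a real trained policy and replay buffer.

```python
# Sketch of policy distillation into a soft decision tree, assuming a trained
# neural teacher policy. Reuses SoftDecisionTreePolicy from the sketch above.
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, n_actions = 4, 3
teacher = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
student = SoftDecisionTreePolicy(state_dim, n_actions)
opt = torch.optim.Adam(student.parameters(), lr=1e-2)

for step in range(500):
    states = torch.randn(64, state_dim)                # in practice: replay-buffer states
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(states), dim=-1)
    student_logp = F.log_softmax(student(states), dim=-1)
    # KL(teacher || student) as the imitation objective.
    loss = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
```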
arXiv Detail & Related papers (2024-03-18T16:09:49Z)
- Real-World Implementation of Reinforcement Learning Based Energy Coordination for a Cluster of Households [3.901860248668672]
We present a real-life pilot study examining the effectiveness of reinforcement learning (RL) in coordinating the power consumption of 8 residential buildings to jointly track a target power signal.
Our results demonstrate satisfactory power tracking and confirm the effectiveness of the RL-based ranks, which are learnt in a purely data-driven manner.
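The coordination idea can be pictured as a simple rank-based dispatch step: households are switched on in order of a (learned) priority score until the aggregate power approaches the target. The scores and power values below are made up; in the pilot, the ranking itself is what the RL agents learn from data.

```python
# Illustrative rank-based dispatch step for tracking a target power signal.
from typing import List


def dispatch(priorities: List[float], powers_kw: List[float], target_kw: float) -> List[bool]:
    """Return an on/off decision per household for the current control step."""
    order = sorted(range(len(priorities)), key=lambda i: priorities[i], reverse=True)
    on = [False] * len(priorities)
    total = 0.0
    for i in order:
        if total + powers_kw[i] <= target_kw:
            on[i] = True
            total += powers_kw[i]
    return on


# 8 households, each with a 3 kW heater, tracking a 12 kW target.
decisions = dispatch(priorities=[0.9, 0.1, 0.4, 0.8, 0.3, 0.7, 0.2, 0.6],
                     powers_kw=[3.0] * 8, target_kw=12.0)
print(decisions)  # the four highest-priority households are switched on
```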
arXiv Detail & Related papers (2023-10-29T21:10:38Z)
- Adaptive Resource Allocation for Virtualized Base Stations in O-RAN with Online Learning [60.17407932691429]
Open Radio Access Network systems, with their virtualized base stations (vBSs), offer operators the benefits of increased flexibility, reduced costs, vendor diversity, and interoperability.
We propose an online learning algorithm that balances the effective throughput and vBS energy consumption, even under unforeseeable and "challenging" environments.
We prove the proposed solutions achieve sub-linear regret, providing zero average optimality gap even in challenging environments.
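As a rough sketch of online learning with sub-linear regret in this setting, the snippet below runs an EXP3-style exponential-weights learner over a handful of candidate vBS configurations, with a toy reward that trades throughput against energy; the configurations, reward model, and parameters are assumptions, and the paper's actual algorithm and guarantees are more involved.

```python
# EXP3-style exponential-weights learner over discrete vBS configurations.
import numpy as np

rng = np.random.default_rng(0)
n_arms, horizon, gamma, lam = 4, 2000, 0.07, 0.5
weights = np.ones(n_arms)


def reward(arm: int, t: int) -> float:
    """Toy reward in [0, 1]: throughput minus weighted energy for config `arm`."""
    throughput = [0.3, 0.5, 0.7, 0.9][arm] + 0.05 * np.sin(t / 50)   # mildly non-stationary
    energy = [0.1, 0.3, 0.6, 1.0][arm]
    return float(np.clip(throughput - lam * energy, 0.0, 1.0))


for t in range(horizon):
    probs = (1 - gamma) * weights / weights.sum() + gamma / n_arms
    arm = rng.choice(n_arms, p=probs)
    r = reward(arm, t)
    # Importance-weighted update keeps the estimate unbiased despite bandit feedback.
    weights[arm] *= np.exp(gamma * r / (probs[arm] * n_arms))

print("preferred configuration:", int(np.argmax(weights)))
```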
arXiv Detail & Related papers (2023-09-04T17:30:21Z)
- Non-Intrusive Electric Load Monitoring Approach Based on Current Feature Visualization for Smart Energy Management [51.89904044860731]
We employ AI-based computer vision techniques to design a non-intrusive load monitoring method for smart electric energy management.
We propose to recognize all electric loads from color feature images using a U-shape deep neural network with multi-scale feature extraction and attention mechanism.
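A minimal sketch of such a U-shaped classifier with a skip connection and a simple channel-attention block is given below; the layer sizes, the attention variant, and the number of load classes are assumptions rather than the paper's exact architecture.

```python
# Tiny U-shaped classifier with one skip connection and channel attention.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)   # per-channel weights
        return x * w


class TinyUNetClassifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.attn = ChannelAttention(32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):
        e1 = self.enc1(x)                                    # full-resolution features
        e2 = self.attn(self.enc2(e1))                        # downsampled + channel attention
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))    # skip connection
        return self.head(d)                                  # logits over load classes


logits = TinyUNetClassifier()(torch.randn(2, 3, 64, 64))     # e.g. 64x64 color feature images
print(logits.shape)                                          # torch.Size([2, 10])
```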
arXiv Detail & Related papers (2023-08-08T04:52:19Z)
- MERLIN: Multi-agent offline and transfer learning for occupant-centric energy flexible operation of grid-interactive communities using smart meter data and CityLearn [0.0]
Decarbonization of buildings presents new challenges for the reliability of the electrical grid.
We propose the MERLIN framework and use a digital twin of a real-world grid-interactive residential community in CityLearn.
We show that independent RL controllers for batteries improve building- and district-level performance compared to a reference controller by tailoring their policies to individual buildings.
arXiv Detail & Related papers (2022-12-31T21:37:14Z)
- Distributed Energy Management and Demand Response in Smart Grids: A Multi-Agent Deep Reinforcement Learning Framework [53.97223237572147]
This paper presents a multi-agent Deep Reinforcement Learning (DRL) framework for autonomous control and integration of renewable energy resources into smart power grid systems.
In particular, the proposed framework jointly considers demand response (DR) and distributed energy management (DEM) for residential end-users.
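One way to picture the joint DR/DEM objective is a per-household reward that combines electricity cost with a penalty for exceeding a demand-response cap, which each agent could then optimize; the weighting and the DR signal format in the sketch below are assumptions, not the paper's formulation.

```python
# Hypothetical per-household reward coupling energy cost (DEM) with a
# demand-response (DR) overshoot penalty.

def household_reward(consumption_kw: float, price: float,
                     dr_target_kw: float, dr_weight: float = 2.0) -> float:
    """Negative cost minus a penalty for exceeding the DR-requested consumption cap."""
    cost = price * consumption_kw
    dr_penalty = max(consumption_kw - dr_target_kw, 0.0)   # only penalize overshoot
    return -(cost + dr_weight * dr_penalty)


# During a DR event the cap is tight, so shifting load away is rewarded.
print(household_reward(consumption_kw=3.0, price=0.40, dr_target_kw=2.0))   # -3.2
print(household_reward(consumption_kw=1.5, price=0.40, dr_target_kw=2.0))   # -0.6
```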
arXiv Detail & Related papers (2022-11-29T01:18:58Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
The combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution for improving energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and the energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
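The general recipe behind such neural-network-based building control can be sketched as: fit a network to the building's thermal response, then optimize the control sequence by gradient descent through the frozen model. The model structure, horizon, and comfort setpoint below are illustrative assumptions, not NeurOpt's exact method.

```python
# (1) A learned one-step dynamics model; (2) gradient-based optimization of the
# heating schedule through the frozen model.
import torch
import torch.nn as nn

# (1) One-step model: (indoor temp, outdoor temp, heat power) -> next indoor temp.
model = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
# ... assume `model` has been trained on historical building data ...
for p in model.parameters():
    p.requires_grad_(False)

# (2) Optimize a 12-step heating schedule to stay near 21 C at minimal energy.
horizon, comfort_c, energy_weight = 12, 21.0, 0.05
u = torch.zeros(horizon, requires_grad=True)        # heating power per step (kW)
opt = torch.optim.Adam([u], lr=0.05)

for _ in range(200):
    temp = torch.tensor([18.0])                     # current indoor temperature
    cost = torch.tensor(0.0)
    for t in range(horizon):
        x = torch.cat([temp, torch.tensor([5.0]), u[t:t + 1]])   # 5 C outdoors
        temp = model(x)                             # predicted next temperature
        cost = cost + (temp - comfort_c) ** 2 + energy_weight * u[t] ** 2
    opt.zero_grad()
    cost.sum().backward()
    opt.step()

print("optimized heating schedule (kW):", u.detach())
```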
arXiv Detail & Related papers (2020-01-22T00:51:03Z)