One for Many: Transfer Learning for Building HVAC Control
- URL: http://arxiv.org/abs/2008.03625v2
- Date: Tue, 20 Oct 2020 01:00:22 GMT
- Title: One for Many: Transfer Learning for Building HVAC Control
- Authors: Shichao Xu, Yixuan Wang, Yanzhi Wang, Zheng O'Neill, Qi Zhu
- Abstract summary: We present a novel transfer learning based approach to overcome this challenge.
Our approach can effectively transfer a DRL-based HVAC controller trained for the source building to a controller for the target building with minimal effort and improved performance.
- Score: 24.78264822089494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The design of building heating, ventilation, and air conditioning (HVAC)
systems is critically important, as HVAC accounts for around half of building
energy consumption and directly affects occupant comfort, productivity, and
health. Traditional HVAC control methods are typically based on creating
explicit physical models for building thermal dynamics, which often require
significant effort to develop, yet still struggle to achieve sufficient
accuracy and efficiency for runtime building control and scalability for field
implementations. Recently, deep reinforcement learning (DRL) has emerged as a
promising data-driven method that provides good control performance without
analyzing physical models at runtime. However, a major challenge to DRL (and
many other data-driven learning methods) is the long training time it takes to
reach the desired performance. In this work, we present a novel transfer
learning based approach to overcome this challenge. Our approach can
effectively transfer a DRL-based HVAC controller trained for the source
building to a controller for the target building with minimal effort and
improved performance, by decomposing the design of the neural network controller
into a transferable front-end network that captures building-agnostic behavior
and a back-end network that can be efficiently trained for each specific
building. We conducted experiments on a variety of transfer scenarios between
buildings with different sizes, numbers of thermal zones, materials and
layouts, air conditioner types, and ambient weather conditions. The
experimental results demonstrated the effectiveness of our approach in
significantly reducing the training time, energy cost, and temperature
violations.
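To make the decomposition concrete, the following is a minimal sketch of the front-end/back-end split described in the abstract, assuming a PyTorch implementation; the class names, per-zone feature dimension, and discrete action count are hypothetical and not taken from the authors' released code. The building-agnostic front-end is frozen and reused, while a fresh back-end is trained for each target building.

```python
# Minimal sketch of the front-end/back-end decomposition described in the
# abstract, assuming PyTorch. Names and dimensions are hypothetical.
import torch
import torch.nn as nn


class FrontEnd(nn.Module):
    """Transferable, building-agnostic feature extractor (shared across buildings)."""

    def __init__(self, per_zone_dim: int = 4, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(per_zone_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )

    def forward(self, zone_obs: torch.Tensor) -> torch.Tensor:
        # zone_obs: (batch, n_zones, per_zone_dim) -> per-zone feature vectors
        return self.net(zone_obs)


class BackEnd(nn.Module):
    """Building-specific head, trained efficiently for each target building."""

    def __init__(self, n_zones: int, hidden_dim: int = 64, n_actions: int = 5):
        super().__init__()
        self.n_zones, self.n_actions = n_zones, n_actions
        self.head = nn.Sequential(
            nn.Linear(n_zones * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_zones * n_actions),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        logits = self.head(features.flatten(start_dim=1))
        # one discrete action (e.g., a supply-air setpoint choice) per zone
        return logits.view(-1, self.n_zones, self.n_actions)


def transfer_to_target(front_end: FrontEnd, target_zones: int) -> nn.Module:
    """Reuse the source-trained front-end; attach a fresh back-end for the target building."""
    for p in front_end.parameters():
        p.requires_grad = False               # keep the building-agnostic weights fixed
    back_end = BackEnd(n_zones=target_zones)  # only this part is trained on the target
    return nn.Sequential(front_end, back_end)


if __name__ == "__main__":
    front = FrontEnd()                # assume this was trained on the source building
    policy = transfer_to_target(front, target_zones=6)
    obs = torch.randn(2, 6, 4)        # 2 samples, 6 zones, 4 features per zone
    print(policy(obs).shape)          # torch.Size([2, 6, 5]): action values per zone
```

Under such a split, adapting to a new building only requires training the small back-end head, which is the mechanism behind the reduced training time claimed above.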
Related papers
- Real-World Data and Calibrated Simulation Suite for Offline Training of Reinforcement Learning Agents to Optimize Energy and Emission in Buildings for Environmental Sustainability [2.7624021966289605]
We present the first open source interactive HVAC control dataset extracted from live sensor measurements of devices in real office buildings.
For ease of use, our RL environments are all compatible with the OpenAI gym environment standard.
arXiv Detail & Related papers (2024-10-02T06:30:07Z)
- Continual Diffuser (CoD): Mastering Continual Offline Reinforcement Learning with Experience Rehearsal [54.93261535899478]
In real-world applications, such as robotic control with reinforcement learning, tasks change over time and new tasks arise sequentially.
This situation poses the challenge of a plasticity-stability trade-off when training an agent that must adapt to task changes while retaining acquired knowledge.
We propose a rehearsal-based continual diffusion model, called Continual Diffuser (CoD), to endow the diffuser with the capabilities of quick adaptation (plasticity) and lasting retention (stability).
arXiv Detail & Related papers (2024-09-04T08:21:47Z)
- Employing Federated Learning for Training Autonomous HVAC Systems [3.4137115855910767]
Buildings account for 40% of global energy consumption.
Implementing smart, energy-efficient HVAC systems has the potential to significantly impact the course of climate change.
Model-free reinforcement learning algorithms have been shown to outperform classical controllers in terms of energy cost and consumption, as well as thermal comfort.
arXiv Detail & Related papers (2024-05-01T08:42:22Z)
- Efficient Data-Driven MPC for Demand Response of Commercial Buildings [0.0]
We propose a data-driven and mixed-integer bidding strategy for energy management in small commercial buildings.
We consider rooftop unit heating and air conditioning systems with discrete controls to accurately model the operation of most commercial buildings.
We apply our approach in several demand response (DR) settings, including time-of-use and critical rebate bidding.
arXiv Detail & Related papers (2024-01-28T20:01:44Z)
- Global Transformer Architecture for Indoor Room Temperature Forecasting [49.32130498861987]
This work presents a global Transformer architecture for indoor temperature forecasting in multi-room buildings.
It aims at optimizing energy consumption and reducing greenhouse gas emissions associated with HVAC systems.
Notably, this study is the first to apply a Transformer architecture for indoor temperature forecasting in multi-room buildings.
arXiv Detail & Related papers (2023-10-31T14:09:32Z)
- Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori knowledge of the building, a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z)
- DL-DRL: A double-level deep reinforcement learning approach for large-scale task scheduling of multi-UAV [65.07776277630228]
We propose a double-level deep reinforcement learning (DL-DRL) approach based on a divide-and-conquer framework (DCF).
In particular, we design an encoder-decoder structured policy network in our upper-level DRL model to allocate tasks to different UAVs.
We also exploit another attention based policy network in our lower-level DRL model to construct the route for each UAV, with the objective to maximize the number of executed tasks.
arXiv Detail & Related papers (2022-08-04T04:35:53Z)
- Development of a Soft Actor Critic Deep Reinforcement Learning Approach for Harnessing Energy Flexibility in a Large Office Building [0.0]
This research is concerned with the novel application and investigation of Soft Actor Critic (SAC) based Deep Reinforcement Learning (DRL).
SAC is a model-free DRL technique that is able to handle continuous action spaces (a minimal continuous-action actor sketch is given after this list).
arXiv Detail & Related papers (2021-04-25T10:33:35Z)
- Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation [91.05073136215886]
"Actor-Learner Distillation" transfers learning progress from a large capacity learner model to a small capacity actor model.
We demonstrate in several challenging memory environments that using Actor-Learner Distillation recovers the clear sample-efficiency gains of the transformer learner model.
arXiv Detail & Related papers (2021-04-04T17:56:34Z)
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
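For the Soft Actor Critic (SAC) entry above, the following is a minimal sketch of the squashed-Gaussian actor that lets SAC-style agents act in continuous spaces (e.g., continuously adjustable HVAC setpoints). It assumes PyTorch; the observation and action dimensions are hypothetical and it is unrelated to that paper's implementation.

```python
# Minimal squashed-Gaussian actor as used in SAC-style agents; an illustration
# only, with hypothetical observation/action dimensions.
import torch
import torch.nn as nn


class GaussianActor(nn.Module):
    def __init__(self, obs_dim: int = 8, act_dim: int = 2, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-5.0, 2.0)
        dist = torch.distributions.Normal(mu, log_std.exp())
        raw = dist.rsample()      # reparameterized sample for backprop
        action = torch.tanh(raw)  # squash into [-1, 1]; rescale to setpoint range outside
        # tanh change-of-variables correction, needed for SAC's entropy term
        log_prob = (dist.log_prob(raw) - torch.log(1.0 - action.pow(2) + 1e-6)).sum(-1)
        return action, log_prob


if __name__ == "__main__":
    actor = GaussianActor()
    a, logp = actor(torch.randn(4, 8))  # 4 observations -> 4 continuous actions
    print(a.shape, logp.shape)          # torch.Size([4, 2]) torch.Size([4])
```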
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.