Exploring Deep Reinforcement Learning for Holistic Smart Building Control
- URL: http://arxiv.org/abs/2301.11510v1
- Date: Fri, 27 Jan 2023 03:03:21 GMT
- Title: Exploring Deep Reinforcement Learning for Holistic Smart Building Control
- Authors: Xianzhong Ding, Alberto Cerpa and Wan Du
- Abstract summary: We develop a system called OCTOPUS that uses a data-driven approach to find the optimal control sequences for all of a building's subsystems.
OCTOPUS achieves 14.26% and 8.1% energy savings compared with a state-of-the-art rule-based method in a LEED Gold Certified building and the latest DRL-based method in the literature, respectively.
- Score: 3.463438487417909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we take a holistic approach to deal with the tradeoffs between
energy use and comfort in commercial buildings. We developed a system called
OCTOPUS, which employs a novel deep reinforcement learning (DRL) framework that
uses a data-driven approach to find the optimal control sequences for all of a
building's subsystems, including the HVAC, lighting, blind, and window systems. The
DRL architecture includes a novel reward function that allows the framework to
explore the tradeoffs between energy use and users' comfort, while at the same
time enabling the solution of the high-dimensional control problem due to the
interactions of four different building subsystems. To meet OCTOPUS's training
data requirements, we argue that calibrated simulations that match the target
building's operational points are the vehicle to generate enough data to train
our DRL framework to find the control solution for the target building. In our
work, we trained OCTOPUS with 10 years of weather data and
a building model that is implemented in the EnergyPlus building simulator,
which was calibrated using data from a real production building. Through
extensive simulations, we demonstrate that OCTOPUS achieves 14.26% and 8.1%
energy savings compared with, respectively, the state-of-the-art rule-based
method in a LEED Gold Certified building and the latest DRL-based method
available in the literature, while maintaining human comfort within the desired
range.
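To make the control problem concrete, the sketch below illustrates what a joint action over the four coordinated subsystems and a reward that trades off energy use against comfort deviations could look like. All names, weights, and comfort terms here are illustrative assumptions, not the action encoding or reward formulation actually used by OCTOPUS.

```python
# Hypothetical joint action covering the four coordinated subsystems; the
# discretization OCTOPUS actually uses is not reproduced here.
action = {
    "hvac_setpoint_c": 23.5,   # zone temperature setpoint in degrees Celsius
    "lighting_level": 0.6,     # dimming fraction in [0, 1]
    "blind_position": 0.4,     # shade position in [0, 1]
    "window_opening": 0.0,     # opening fraction in [0, 1]
}

def reward(energy_kwh, thermal_dev, visual_dev, air_quality_dev,
           w_energy=1.0, w_comfort=10.0):
    """Illustrative reward: penalize energy use and comfort-range violations.

    The *_dev arguments are per-timestep deviations of thermal, visual, and
    indoor-air-quality comfort metrics from their acceptable ranges; the
    weights are placeholders for tuning the energy/comfort tradeoff.
    """
    comfort_penalty = thermal_dev + visual_dev + air_quality_dev
    return -(w_energy * energy_kwh + w_comfort * comfort_penalty)

print(action, reward(energy_kwh=1.2, thermal_dev=0.3, visual_dev=0.0, air_quality_dev=0.1))
```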
Related papers
- Real-World Data and Calibrated Simulation Suite for Offline Training of Reinforcement Learning Agents to Optimize Energy and Emission in Buildings for Environmental Sustainability [2.7624021966289605]
We present the first open source interactive HVAC control dataset extracted from live sensor measurements of devices in real office buildings.
For ease of use, our RL environments are all compatible with the OpenAI gym environment standard.
arXiv Detail & Related papers (2024-10-02T06:30:07Z)
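Since the entry above advertises compatibility with the OpenAI gym environment standard, the usual interaction loop applies. A minimal sketch, assuming the classic (pre-0.26) gym API and a hypothetical environment id ("SmartBuilding-v0") that stands in for whatever names the suite actually registers:

```python
import gym  # assumes the classic (pre-0.26) gym reset/step API

# Hypothetical id; substitute the environment name the suite registers.
env = gym.make("SmartBuilding-v0")

obs = env.reset()
done = False
episode_return = 0.0
while not done:
    action = env.action_space.sample()      # placeholder policy: random actions
    obs, reward, done, info = env.step(action)
    episode_return += reward
print("episode return:", episode_return)
```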
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- A Lightweight Calibrated Simulation Enabling Efficient Offline Learning for Optimal Control of Real Buildings [3.2634122554914002]
We propose a novel simulation-based approach to train a Reinforcement Learning model.
Our open-source simulator is lightweight and calibrated via telemetry from the building to reach a higher level of fidelity.
This approach is an important step toward having a real-world RL control system that can be scaled to many buildings.
arXiv Detail & Related papers (2023-10-12T17:56:23Z)
- BEAR: Physics-Principled Building Environment for Control and Reinforcement Learning [9.66911049633598]
"BEAR" is a physics-principled Building Environment for Control And Reinforcement Learning.
It allows researchers to benchmark both model-based and model-free controllers using a broad collection of standard building models in Python without co-simulation using external building simulators.
We demonstrate the compatibility and performance of BEAR with different controllers, including both model predictive control (MPC) and several state-of-the-art RL methods with two case studies.
arXiv Detail & Related papers (2022-11-27T06:36:35Z)
- Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori training, a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z)
- Architecting and Visualizing Deep Reinforcement Learning Models [77.34726150561087]
Deep Reinforcement Learning (DRL) combines reinforcement learning with deep neural networks to teach agents how to act in an environment.
In this paper, we present a new Atari Pong game environment, a policy gradient based DRL model, a real-time network visualization, and an interactive display to help build intuition and awareness of the mechanics of DRL inference.
arXiv Detail & Related papers (2021-12-02T17:48:26Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the agent's expected performance by selecting promising trajectories that solve prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- Development of a Soft Actor Critic Deep Reinforcement Learning Approach for Harnessing Energy Flexibility in a Large Office Building [0.0]
This research is concerned with the novel application and investigation of 'Soft Actor Critic' (SAC) based Deep Reinforcement Learning (DRL).
SAC is a model-free DRL technique that is able to handle continuous action spaces.
arXiv Detail & Related papers (2021-04-25T10:33:35Z)
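As a pointer for the SAC entry above, the snippet below shows a generic Soft Actor Critic setup on a continuous-action task using stable-baselines3. The library choice and the Pendulum-v1 stand-in environment are assumptions for illustration, not the setup used in that paper.

```python
import gymnasium as gym
from stable_baselines3 import SAC  # model-free, off-policy, continuous-action DRL

# Pendulum-v1 is only a stand-in continuous-control task, not a building model.
env = gym.make("Pendulum-v1")

model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)          # short demo run

obs, info = env.reset()
action, _ = model.predict(obs, deterministic=True)
print("greedy action:", action)
```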
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.