Real-World Data and Calibrated Simulation Suite for Offline Training of Reinforcement Learning Agents to Optimize Energy and Emission in Buildings for Environmental Sustainability
- URL: http://arxiv.org/abs/2410.03756v1
- Date: Wed, 2 Oct 2024 06:30:07 GMT
- Title: Real-World Data and Calibrated Simulation Suite for Offline Training of Reinforcement Learning Agents to Optimize Energy and Emission in Buildings for Environmental Sustainability
- Authors: Judah Goldfeder, John Sipple
- Abstract summary: We present the first open source interactive HVAC control dataset extracted from live sensor measurements of devices in real office buildings.
For ease of use, our RL environments are all compatible with the OpenAI gym environment standard.
- Score: 2.7624021966289605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Commercial office buildings contribute 17 percent of carbon emissions in the US, according to the US Energy Information Administration (EIA), and improving their efficiency will reduce their environmental burden and operating cost. Heating, Ventilation, and Air Conditioning (HVAC) devices are a major contributor to energy consumption in these buildings. HVAC devices form a complex and interconnected thermodynamic system with the building and outside weather conditions, and current setpoint control policies are not fully optimized for minimizing energy use and carbon emissions. Given a suitable training environment, a Reinforcement Learning (RL) agent is able to improve upon these policies, but training such a model, especially in a way that scales to thousands of buildings, presents many practical challenges. Most existing work on applying RL to this important task either makes use of proprietary data, or focuses on expensive and proprietary simulations that may not be grounded in the real world. We present the Smart Buildings Control Suite, the first open source interactive HVAC control dataset extracted from live sensor measurements of devices in real office buildings. The dataset consists of two components: six years of real-world historical data from three buildings, for offline RL, and a lightweight interactive simulator for each of these buildings, calibrated using the historical data, for online and model-based RL. For ease of use, our RL environments are all compatible with the OpenAI gym environment standard. We also demonstrate a novel method of calibrating the simulator, as well as baseline results on training an RL agent on the simulator, predicting real-world data, and training an RL agent directly from data. We believe this benchmark will accelerate progress and collaboration on building optimization and environmental sustainability research.
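As a rough illustration of the advertised Gym compatibility, the sketch below runs a random-action control loop against one of the suite's environments. The environment id, the contents of the observation, and the episode length are assumptions made for illustration, not the suite's actual API; consult the released code for the real registration names and reward definition.

```python
import gym  # the suite advertises compatibility with the OpenAI gym standard

# Hypothetical environment id; the real registration name may differ.
env = gym.make("SmartBuildings-OfficeBuilding1-v0")

obs = env.reset()
total_reward = 0.0
for t in range(96):  # e.g. one day of 15-minute control steps (assumption)
    # A real agent would map zone temperatures, weather, and device telemetry
    # in `obs` to HVAC setpoint adjustments; here we simply sample randomly.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break

print(f"episode return: {total_reward:.2f}")
```

An offline RL agent would presumably be trained from the six years of logged transitions rather than by stepping the environment, but the same observation/action interface applies when evaluating on the calibrated simulator.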
Related papers
- EnergyPlus Room Simulator [0.34263545581620375]
We present the tool EnergyPlus Room Simulator, which enables the simulation of indoor climate in a specific room of a building.
It allows users to alter room models and to simulate factors such as temperature, humidity, and CO2 concentration.
The tool is intended to support scientific, building-related tasks such as occupancy detection on a room level by facilitating fast access to simulation data.
arXiv Detail & Related papers (2024-10-25T07:57:23Z) - Global Transformer Architecture for Indoor Room Temperature Forecasting [49.32130498861987]
This work presents a global Transformer architecture for indoor temperature forecasting in multi-room buildings.
It aims at optimizing energy consumption and reducing greenhouse gas emissions associated with HVAC systems.
Notably, this study is the first to apply a Transformer architecture for indoor temperature forecasting in multi-room buildings.
arXiv Detail & Related papers (2023-10-31T14:09:32Z) - A Lightweight Calibrated Simulation Enabling Efficient Offline Learning
for Optimal Control of Real Buildings [3.2634122554914002]
We propose a novel simulation-based approach to train a Reinforcement Learning model.
Our open-source simulator is lightweight and calibrated via telemetry from the building to reach a higher level of fidelity (a minimal calibration sketch appears after this list).
This approach is an important step toward having a real-world RL control system that can be scaled to many buildings.
arXiv Detail & Related papers (2023-10-12T17:56:23Z) - Exploring Deep Reinforcement Learning for Holistic Smart Building
Control [3.463438487417909]
We develop a system called OCTOPUS that uses a data-driven approach to find the optimal control sequences for all of a building's subsystems.
OCTOPUS can achieve 14.26% and 8.1% energy savings compared with a state-of-the-art rule-based method and a prior DRL-based method, respectively, in a LEED Gold Certified building.
arXiv Detail & Related papers (2023-01-27T03:03:21Z) - ClimaX: A foundation model for weather and climate [51.208269971019504]
ClimaX is a deep learning model for weather and climate science.
It can be pre-trained with a self-supervised learning objective on climate datasets.
It can be fine-tuned to address a breadth of climate and weather tasks.
arXiv Detail & Related papers (2023-01-24T23:19:01Z) - A Dynamic Feedforward Control Strategy for Energy-efficient Building
System Operation [59.56144813928478]
Most current control strategies and optimization algorithms rely on receiving information from real-time feedback.
We propose an engineer-friendly control strategy framework that simultaneously embeds dynamic prior knowledge of building system characteristics into system control.
We tested it on a heating system control case against typical control strategies, and the results show our framework offers a further energy-saving potential of 15%.
arXiv Detail & Related papers (2023-01-23T09:07:07Z) - BEAR: Physics-Principled Building Environment for Control and
Reinforcement Learning [9.66911049633598]
"BEAR" is a physics-principled Building Environment for Control And Reinforcement Learning.
It allows researchers to benchmark both model-based and model-free controllers using a broad collection of standard building models in Python without co-simulation using external building simulators.
We demonstrate the compatibility and performance of BEAR with different controllers, including model predictive control (MPC) and several state-of-the-art RL methods, in two case studies.
arXiv Detail & Related papers (2022-11-27T06:36:35Z) - Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori knowledge of the building, a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z) - Using Machine Learning at Scale in HPC Simulations with SmartSim: An
Application to Ocean Climate Modeling [52.77024349608834]
We demonstrate the first climate-scale, numerical ocean simulations improved through distributed, online inference of Deep Neural Networks (DNN) using SmartSim.
SmartSim is a library dedicated to enabling online analysis and Machine Learning (ML) for traditional HPC simulations.
arXiv Detail & Related papers (2021-04-13T19:27:28Z) - NeurOpt: Neural network based optimization for building energy
management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
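Both the main paper and the "A Lightweight Calibrated Simulation" entry above describe calibrating a lightweight simulator against building telemetry. As a minimal, purely illustrative sketch of that general idea (not the authors' method), the following fits the free parameters of a toy lumped-thermal zone model by minimizing one-step temperature prediction error on logged data; the model form, parameterization, and variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy lumped thermal zone model: the next zone temperature depends on the current
# zone temperature, outside air temperature, and HVAC heat input via two unknowns.
def simulate_step(T_zone, T_out, q_hvac, params):
    a, b = params  # a: envelope heat-exchange coefficient, b: HVAC gain (assumed form)
    return T_zone + a * (T_out - T_zone) + b * q_hvac

def calibration_loss(params, telemetry):
    # telemetry: rows of (T_zone, T_out, q_hvac) taken from building sensor logs
    pred = simulate_step(telemetry[:-1, 0], telemetry[:-1, 1], telemetry[:-1, 2], params)
    return float(np.mean((pred - telemetry[1:, 0]) ** 2))  # one-step-ahead MSE

# Fit the simulator parameters to historical data (synthetic placeholder here).
rng = np.random.default_rng(0)
telemetry = rng.normal(size=(1000, 3))
result = minimize(calibration_loss, x0=[0.1, 0.1], args=(telemetry,), method="Nelder-Mead")
print("calibrated parameters:", result.x)
```

The calibration described in these papers operates on much richer device-level telemetry and a more detailed simulator, but the basic recipe, choosing simulator parameters that minimize prediction error against recorded measurements, is similar in spirit.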
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.