BuildingGym: An open-source toolbox for AI-based building energy management using reinforcement learning
- URL: http://arxiv.org/abs/2509.11922v1
- Date: Mon, 15 Sep 2025 13:37:48 GMT
- Title: BuildingGym: An open-source toolbox for AI-based building energy management using reinforcement learning
- Authors: Xilei Dai, Ruotian Chen, Songze Guan, Wen-Tai Li, Chau Yuen
- Abstract summary: BuildingGym is a framework for training RL control strategies for common challenges in building energy management. It integrates EnergyPlus as its core simulator, making it suitable for both system-level and room-level control. The tool provides several built-in RL algorithms for control strategy training, simplifying the process for building managers to obtain optimal control strategies.
- Score: 39.90950200001865
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Reinforcement learning (RL) has proven effective for AI-based building energy management. However, there is a lack of a flexible framework for implementing RL across the varied control problems in building energy management. To address this gap, we propose BuildingGym, an open-source tool designed as a research-friendly and flexible framework for training RL control strategies for common challenges in building energy management. BuildingGym integrates EnergyPlus as its core simulator, making it suitable for both system-level and room-level control. Additionally, BuildingGym can accept external signals as control inputs instead of treating the building as a stand-alone entity, which makes it applicable to more flexible environments, e.g., smart grids and EV communities. The tool provides several built-in RL algorithms for control strategy training, simplifying the process for building managers to obtain optimal control strategies: users need only follow a few straightforward steps to configure BuildingGym for common optimization problems in the building energy management field. Moreover, AI specialists can easily implement and test state-of-the-art control algorithms within the platform. BuildingGym bridges the gap between building managers and AI specialists by allowing easy configuration and replacement of RL algorithms, simulators, and control environments or problems. With BuildingGym, we efficiently set up training tasks for both constant and dynamic cooling load management. The built-in algorithms demonstrated strong performance across both tasks, highlighting the effectiveness of BuildingGym in optimizing cooling strategies.
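The abstract describes a Gym-style workflow: an environment wraps the building simulator, and an RL agent interacts with it through observations, actions, and rewards. Since the abstract does not show BuildingGym's actual API, the sketch below is a hypothetical, self-contained stand-in: a toy cooling-load environment with a Gym-like `reset`/`step` interface and a simple proportional policy in place of a trained RL agent. All class and function names here are illustrative assumptions, not BuildingGym's real interface.

```python
import random

class CoolingLoadEnv:
    """Toy cooling-load environment with a Gym-like interface.

    NOTE: this is an illustrative stand-in for a simulator-backed
    environment (e.g., one wrapping EnergyPlus); it is NOT BuildingGym's
    actual API. State is indoor temperature (deg C); the action is a
    cooling effort; the reward penalizes deviation from a 24 C target
    plus energy use.
    """

    TARGET = 24.0  # desired indoor temperature, deg C

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.temp = 28.0

    def reset(self):
        self.temp = 28.0
        return self.temp

    def step(self, action):
        # Simple thermal dynamics: cooling lowers temperature,
        # ambient heat gain (with noise) pushes it back up.
        heat_gain = 0.5 + self.rng.uniform(-0.1, 0.1)
        self.temp += heat_gain - action
        reward = -abs(self.temp - self.TARGET) - 0.1 * action
        return self.temp, reward, False  # obs, reward, done

def run_episode(env, steps=48):
    """Roll out one episode; a proportional policy stands in for a
    trained RL agent (real usage would plug in PPO, DQN, etc.)."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(steps):
        # Cooling effort proportional to overheating, clamped to [0, 2].
        action = max(0.0, min(2.0, 0.8 * (obs - env.TARGET)))
        obs, reward, _ = env.step(action)
        total_reward += reward
    return obs, total_reward

if __name__ == "__main__":
    final_temp, ret = run_episode(CoolingLoadEnv())
    print(round(final_temp, 2), round(ret, 2))
```

The point of the sketch is the separation of concerns the abstract claims: the environment (simulator wrapper) and the policy meet only through `reset`/`step`, so either side can be swapped without touching the other.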
Related papers
- Multi-Agent Architecture in Distributed Environment Control Systems: vision, challenges, and opportunities [50.38638300332429]
We propose a multi-agent architecture for distributed control of air-cooled chiller systems in data centers. Our vision employs autonomous agents to monitor and regulate local operational parameters and optimize system-wide efficiency.
arXiv Detail & Related papers (2025-02-21T18:41:03Z) - Generalising Battery Control in Net-Zero Buildings via Personalised Federated RL [5.195669033269619]
This work studies the challenge of optimal energy management in building-based microgrids through a collaborative and privacy-preserving framework. We evaluate two common RL algorithms (PPO and TRPO) in different collaborative setups to manage distributed energy resources. Our approach emphasizes reducing energy costs and carbon emissions while ensuring privacy.
arXiv Detail & Related papers (2024-12-30T13:38:31Z) - Active Reinforcement Learning for Robust Building Control [0.0]
Reinforcement learning (RL) is a powerful tool for optimal control that has found great success in Atari games, the game of Go, robotic control, and building optimization.
Unsupervised environment design (UED) has been proposed as a solution to this problem, in which the agent trains in environments that have been specially selected to help it learn.
We show that ActivePLR is able to outperform state-of-the-art UED algorithms in minimizing energy usage while maximizing occupant comfort in the setting of building control.
arXiv Detail & Related papers (2023-12-16T02:18:45Z) - A Distributed ADMM-based Deep Learning Approach for Thermal Control in Multi-Zone Buildings under Demand Response Events [1.1126342180866646]
This research combines distributed optimization using ADMM with deep learning models to plan indoor temperature setpoints effectively.
A two-layer hierarchical structure is used, with a central building coordinator at the upper layer and local controllers at the thermal zone layer.
The proposed algorithm, called Distributed Planning Networks, is designed to be both adaptable and scalable to many types of buildings.
arXiv Detail & Related papers (2023-12-08T14:46:50Z) - Exploring Deep Reinforcement Learning for Holistic Smart Building Control [3.463438487417909]
We develop a system called OCTOPUS that uses a data-driven approach to find the optimal control sequences for all of a building's subsystems.
OCTOPUS can achieve 14.26% and 8.1% energy savings compared with the state-of-the-art rule-based method in a LEED Gold Certified building.
arXiv Detail & Related papers (2023-01-27T03:03:21Z) - A Dynamic Feedforward Control Strategy for Energy-efficient Building System Operation [59.56144813928478]
Most current control strategies and optimization algorithms rely on receiving information from real-time feedback.
We propose an engineer-friendly control strategy framework that simultaneously embeds dynamic prior knowledge from building system characteristics for system control.
We tested it on a heating system control case against typical control strategies, showing that our framework offers a further energy-saving potential of 15%.
arXiv Detail & Related papers (2023-01-23T09:07:07Z) - BEAR: Physics-Principled Building Environment for Control and Reinforcement Learning [9.66911049633598]
"BEAR" is a physics-principled Building Environment for Control And Reinforcement Learning.
It allows researchers to benchmark both model-based and model-free controllers using a broad collection of standard building models in Python, without co-simulation with external building simulators.
We demonstrate the compatibility and performance of BEAR with different controllers, including both model predictive control (MPC) and several state-of-the-art RL methods with two case studies.
arXiv Detail & Related papers (2022-11-27T06:36:35Z) - Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori knowledge, a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z) - Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose adversarial training to increase the robustness of the RL agent against attacks and to avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z) - NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.