Low Emission Building Control with Zero-Shot Reinforcement Learning
- URL: http://arxiv.org/abs/2208.06385v2
- Date: Mon, 15 Aug 2022 20:13:18 GMT
- Title: Low Emission Building Control with Zero-Shot Reinforcement Learning
- Authors: Scott R. Jeen, Alessandro Abate, Jonathan M. Cullen
- Abstract summary: Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without building-specific simulators or data a priori--a paradigm we call zero-shot building control.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heating and cooling systems in buildings account for 31% of global energy
use, much of which is regulated by Rule-Based Controllers (RBCs) that neither
maximise energy efficiency nor minimise emissions by interacting optimally with
the grid. Control via Reinforcement Learning (RL) has been shown to
significantly improve building energy efficiency, but existing solutions
require access to building-specific simulators or data that cannot be expected
for every building in the world. In response, we show it is possible to obtain
emission-reducing policies without such knowledge a priori--a paradigm we call
zero-shot building control. We combine ideas from system identification and
model-based RL to create PEARL (Probabilistic Emission-Abating Reinforcement
Learning) and show that a short period of active exploration is all that is
required to build a performant model. In experiments across three varied
building energy simulations, we show PEARL outperforms an existing RBC in one
case and popular RL baselines in all cases, reducing building emissions by as
much as 31% whilst maintaining thermal comfort. Our source code is available online
via https://enjeeneer.io/projects/pearl .
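The abstract's recipe (a short period of active exploration, system identification on the logged transitions, then model-based control that trades heating against comfort) can be illustrated with a minimal sketch. This is not PEARL itself: the one-dimensional linear building model, the comfort band, and the least-squares identification below are all illustrative assumptions, standing in for the probabilistic model and planner the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy building: indoor temperature drifts toward the outdoor temperature,
# and the heating input u (0..1) pushes it upward. Illustrative constants.
A_TRUE, B_TRUE, T_OUT = 0.9, 5.0, 10.0

def building_step(temp, u):
    return A_TRUE * temp + (1 - A_TRUE) * T_OUT + B_TRUE * u

# 1) Short active-exploration phase: apply random actions, log transitions.
temps, actions, nexts = [], [], []
t = 18.0
for _ in range(50):
    u = rng.uniform(0.0, 1.0)
    t_next = building_step(t, u)
    temps.append(t), actions.append(u), nexts.append(t_next)
    t = t_next

# 2) System identification: least-squares fit of t' = a*t + b*u + c.
X = np.column_stack([temps, actions, np.ones(len(temps))])
a, b, c = np.linalg.lstsq(X, np.array(nexts), rcond=None)[0]

# 3) Control with the learnt model: among candidate actions whose predicted
# next temperature stays in the comfort band [19, 23] degC, pick the one
# using the least heating (a crude proxy for minimising emissions).
def plan(temp, candidates=np.linspace(0.0, 1.0, 21)):
    preds = a * temp + b * candidates + c
    comfy = (preds >= 19.0) & (preds <= 23.0)
    if comfy.any():
        return float(candidates[comfy].min())
    # Fallback: steer toward the middle of the comfort band.
    return float(candidates[np.argmin(np.abs(preds - 21.0))])

t = 18.0
for _ in range(20):
    t = building_step(t, plan(t))
print(f"identified a={a:.2f}, b={b:.2f}; final temperature {t:.1f} degC")
```

With noiseless linear dynamics the fit recovers the true parameters exactly, and the planner then holds the temperature inside the comfort band with near-minimal heating; the paper's contribution is doing this safely under uncertainty, which the sketch deliberately omits.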
Related papers
- Real-World Data and Calibrated Simulation Suite for Offline Training of Reinforcement Learning Agents to Optimize Energy and Emission in Buildings for Environmental Sustainability [2.7624021966289605]
We present the first open source interactive HVAC control dataset extracted from live sensor measurements of devices in real office buildings.
For ease of use, our RL environments are all compatible with the OpenAI gym environment standard.
arXiv Detail & Related papers (2024-10-02T06:30:07Z)
- A Benchmark Environment for Offline Reinforcement Learning in Racing Games [54.83171948184851]
Offline Reinforcement Learning (ORL) is a promising approach to reduce the high sample complexity of traditional Reinforcement Learning (RL).
This paper introduces OfflineMania, a novel environment for ORL research.
It is inspired by the iconic TrackMania series and developed using the Unity 3D game engine.
arXiv Detail & Related papers (2024-07-12T16:44:03Z) - Global Transformer Architecture for Indoor Room Temperature Forecasting [49.32130498861987]
This work presents a global Transformer architecture for indoor temperature forecasting in multi-room buildings.
It aims at optimizing energy consumption and reducing greenhouse gas emissions associated with HVAC systems.
Notably, this study is the first to apply a Transformer architecture for indoor temperature forecasting in multi-room buildings.
arXiv Detail & Related papers (2023-10-31T14:09:32Z)
- Real-World Implementation of Reinforcement Learning Based Energy Coordination for a Cluster of Households [3.901860248668672]
We present a real-life pilot study that evaluates the effectiveness of reinforcement learning (RL) in coordinating the power consumption of 8 residential buildings to jointly track a target power signal.
Our results demonstrate satisfactory power tracking and the effectiveness of the RL-based ranks, which are learnt in a purely data-driven manner.
arXiv Detail & Related papers (2023-10-29T21:10:38Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- MERLIN: Multi-agent offline and transfer learning for occupant-centric energy flexible operation of grid-interactive communities using smart meter data and CityLearn [0.0]
Decarbonization of buildings presents new challenges for the reliability of the electrical grid.
We propose the MERLIN framework and use a digital twin of a real-world grid-interactive residential community in CityLearn.
We show that independent RL-controllers for batteries improve building- and district-level performance compared to a reference controller by tailoring their policies to individual buildings.
arXiv Detail & Related papers (2022-12-31T21:37:14Z)
- Zero-Shot Building Control [0.0]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
Existing solutions require pre-training in simulators that are prohibitively expensive to obtain for every building in the world.
We show it is possible to perform safe, zero-shot control of buildings by combining ideas from system identification and model-based RL.
arXiv Detail & Related papers (2022-06-28T17:56:40Z)
- Development of a Soft Actor Critic Deep Reinforcement Learning Approach for Harnessing Energy Flexibility in a Large Office Building [0.0]
This research is concerned with the novel application and investigation of Soft Actor Critic (SAC) based Deep Reinforcement Learning (DRL).
SAC is a model-free DRL technique that is able to handle continuous action spaces.
arXiv Detail & Related papers (2021-04-25T10:33:35Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.