Optimizing Industrial HVAC Systems with Hierarchical Reinforcement Learning
- URL: http://arxiv.org/abs/2209.08112v1
- Date: Fri, 16 Sep 2022 18:00:46 GMT
- Title: Optimizing Industrial HVAC Systems with Hierarchical Reinforcement Learning
- Authors: William Wong, Praneet Dutta, Octavian Voicu, Yuri Chervonyi, Cosmin Paduraru, Jerry Luo
- Abstract summary: Reinforcement learning techniques have been developed to optimize industrial cooling systems, offering substantial energy savings.
A major challenge in industrial control involves learning behaviors that are feasible in the real world due to machinery constraints.
We use hierarchical reinforcement learning with multiple agents that control subsets of actions according to their operation time scales.
- Score: 1.7489518849687256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning (RL) techniques have been developed to optimize
industrial cooling systems, offering substantial energy savings compared to
traditional heuristic policies. A major challenge in industrial control
involves learning behaviors that are feasible in the real world due to
machinery constraints. For example, certain actions can only be executed every
few hours while other actions can be taken more frequently. Without extensive
reward engineering and experimentation, an RL agent may not learn realistic
operation of machinery. To address this, we use hierarchical reinforcement
learning with multiple agents that control subsets of actions according to
their operation time scales. Our hierarchical approach achieves energy savings
over existing baselines while maintaining constraints such as operating
chillers within safe bounds in a simulated HVAC control environment.
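The multi-time-scale control structure described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class names, the action names (`valve_position`, `chiller_on`), and the random placeholder policies are all assumptions standing in for learned agents.

```python
import random

class TimescaleAgent:
    """Toy agent owning a subset of actions, re-deciding only every
    `period` steps (hypothetical class; the real agents are learned)."""
    def __init__(self, action_names, period, seed=0):
        self.action_names = action_names
        self.period = period
        self.rng = random.Random(seed)
        self.last_action = {name: 0.0 for name in action_names}

    def act(self, step, observation):
        # Resample only on this agent's own time scale; between
        # decisions the previous setpoints are held, mimicking
        # machinery that can only be re-commanded every few hours.
        if step % self.period == 0:
            self.last_action = {name: self.rng.uniform(0.0, 1.0)
                                for name in self.action_names}
        return dict(self.last_action)

def hierarchical_policy(step, observation, agents):
    """Merge sub-actions from agents that own disjoint action subsets."""
    action = {}
    for agent in agents:
        action.update(agent.act(step, observation))
    return action

# A fast agent acts every step; a slow agent only every 12 steps.
fast = TimescaleAgent(["valve_position"], period=1, seed=1)
slow = TimescaleAgent(["chiller_on"], period=12, seed=2)
actions = [hierarchical_policy(t, None, [fast, slow]) for t in range(24)]
```

Over 24 steps the slow agent changes its component at most twice while the fast agent's component changes every step; a real system would replace the random policies with trained ones and add explicit safety checks on the merged action.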
Related papers
- GreenLight-Gym: A Reinforcement Learning Benchmark Environment for Greenhouse Crop Production Control [0.0]
Reinforcement Learning (RL) is a promising approach that can learn a control policy to automate greenhouse management.
We present GreenLight-Gym, the first open-source environment designed for training and evaluating RL algorithms on the state-of-the-art greenhouse model GreenLight.
Second, we compare two reward-shaping approaches, using either a multiplicative or additive penalty, to enforce state boundaries.
Third, we evaluate RL performance on a disjoint training and testing weather dataset, demonstrating improved generalisation to unseen conditions.
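The two reward-shaping variants mentioned above can be illustrated with toy functional forms. These exact formulas are assumptions for illustration, not GreenLight-Gym's implementation; the point is only the qualitative difference between subtracting a penalty and scaling the reward down as a boundary violation grows.

```python
def additive_penalty_reward(base_reward, violation, weight=1.0):
    # Subtract a penalty proportional to the state-boundary violation.
    return base_reward - weight * violation

def multiplicative_penalty_reward(base_reward, violation, scale=1.0):
    # Scale the reward down as the violation grows; zero violation
    # leaves the base reward unchanged.
    return base_reward / (1.0 + scale * violation)

print(additive_penalty_reward(10.0, 0.0))        # -> 10.0
print(additive_penalty_reward(10.0, 2.0))        # -> 8.0
print(multiplicative_penalty_reward(10.0, 0.0))  # -> 10.0
print(multiplicative_penalty_reward(10.0, 2.0))  # ~ 3.33
```

A design note: the multiplicative form preserves the sign of the base reward and never overwhelms it, while the additive form can drive the total reward negative for large violations, which changes the trade-off the agent learns.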
arXiv Detail & Related papers (2024-10-06T18:25:23Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and improve energy efficiency.
In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that adaptive control resolution combined with value decomposition produces simple critic-only algorithms with surprisingly strong performance on continuous control tasks.
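The coarse-to-fine growth of a discrete action space can be sketched as a doubling of bins per refinement level. This is a hypothetical helper illustrating the idea only; the paper's scheme grows resolution adaptively during training rather than by a fixed schedule.

```python
def grow_resolution(lo, hi, level):
    """Discrete action set over [lo, hi] at a given refinement level:
    3 actions at level 0, 5 at level 1, 9 at level 2, and so on."""
    n = 2 ** (level + 1) + 1
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

print(grow_resolution(-1.0, 1.0, 0))  # -> [-1.0, 0.0, 1.0]
print(grow_resolution(-1.0, 1.0, 1))  # -> [-1.0, -0.5, 0.0, 0.5, 1.0]
```

Because each level's actions include the previous level's, a critic-only method can keep its value estimates meaningful as the action set grows.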
arXiv Detail & Related papers (2024-04-05T17:58:37Z)
- A Safe Reinforcement Learning Algorithm for Supervisory Control of Power Plants [7.1771300511732585]
Model-free reinforcement learning (RL) has emerged as a promising solution for control tasks.
We propose a chance-constrained RL algorithm based on Proximal Policy Optimization for supervisory control.
Our approach achieves the smallest distance of violation and violation rate in a load-follow maneuver for an advanced Nuclear Power Plant design.
arXiv Detail & Related papers (2024-01-23T17:52:49Z)
- Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe for converting static behavior datasets into policies that outperform the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
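A data-driven flavor of action quantization can be sketched with quantile-based bin centers, so that resolution concentrates where the behavior policy actually acted. This nearest-center scheme is a simplified stand-in for the paper's adaptive method, and all names here are illustrative.

```python
def quantile_bins(values, n_bins):
    """Adaptive bin centers taken at dataset quantiles, so densely
    sampled action regions get finer discretization."""
    s = sorted(values)
    # Integer arithmetic picks the element nearest quantile (i + 0.5)/n_bins.
    return [s[min((2 * i + 1) * len(s) // (2 * n_bins), len(s) - 1)]
            for i in range(n_bins)]

def quantize(value, centers):
    # Map a continuous action to the index of the nearest bin center.
    return min(range(len(centers)), key=lambda i: abs(centers[i] - value))

# Behavior data clustered near 0 with a few actions near 1.
data = [0.01, 0.02, 0.03, 0.05, 0.9, 0.95]
centers = quantile_bins(data, 3)  # -> [0.02, 0.05, 0.95]
```

With uniform bins over [0, 1], two of three centers would fall in the empty middle of the range; the quantile version instead places two centers inside the dense cluster near 0.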
arXiv Detail & Related papers (2023-10-18T06:07:10Z)
- Surrogate Empowered Sim2Real Transfer of Deep Reinforcement Learning for ORC Superheat Control [12.567922037611261]
This paper proposes a Sim2Real transfer learning-based DRL control method for ORC superheat control.
Experimental results show that the proposed method greatly improves the training speed of DRL in ORC control problems.
arXiv Detail & Related papers (2023-08-05T01:59:44Z)
- Low Emission Building Control with Zero-Shot Reinforcement Learning [70.70479436076238]
Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency.
We show it is possible to obtain emission-reducing policies without a priori data collection, a paradigm we call zero-shot building control.
arXiv Detail & Related papers (2022-08-12T17:13:25Z)
- Enforcing Policy Feasibility Constraints through Differentiable Projection for Energy Optimization [57.88118988775461]
We propose PROjected Feasibility (PROF) to enforce convex operational constraints within neural policies.
We demonstrate PROF on two applications: energy-efficient building operation and inverter control.
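The idea of projecting a neural policy's raw output onto a convex feasible set can be illustrated with the simplest such set, a box of per-dimension operating limits. PROF handles general convex constraints with a differentiable projection layer; the closed-form clip below is only the special case for box constraints, and the names are assumptions.

```python
def project_box(action, lo, hi):
    """Euclidean projection of an action vector onto the box [lo, hi]:
    each component is independently clipped to its operating limits."""
    return [min(max(a, l), h) for a, l, h in zip(action, lo, hi)]

# Raw network output may violate limits; the projection repairs it.
raw = [1.5, -0.2, 0.3]
safe = project_box(raw, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
print(safe)  # -> [1.0, 0.0, 0.3]
```

Because the projection is applied inside the policy rather than as a post-hoc clamp, gradients in the full method flow through it during training, so the network learns to propose actions that are already close to feasible.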
arXiv Detail & Related papers (2021-05-19T01:58:10Z)
- Development of a Soft Actor Critic Deep Reinforcement Learning Approach for Harnessing Energy Flexibility in a Large Office Building [0.0]
This research concerns the novel application and investigation of Soft Actor Critic (SAC) based Deep Reinforcement Learning (DRL).
SAC is a model-free DRL technique that can handle continuous action spaces.
arXiv Detail & Related papers (2021-04-25T10:33:35Z)
- Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation [91.05073136215886]
"Actor-Learner Distillation" transfers learning progress from a large capacity learner model to a small capacity actor model.
We demonstrate in several challenging memory environments that using Actor-Learner Distillation recovers the clear sample-efficiency gains of the transformer learner model.
arXiv Detail & Related papers (2021-04-04T17:56:34Z)
- A Relearning Approach to Reinforcement Learning for Control of Smart Buildings [1.8799681615947088]
This paper demonstrates that continual relearning of control policies using incremental deep reinforcement learning (RL) can improve policy learning for non-stationary processes.
We develop an incremental RL technique that simultaneously reduces building energy consumption without sacrificing overall comfort.
arXiv Detail & Related papers (2020-08-04T23:31:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.