Surrogate Empowered Sim2Real Transfer of Deep Reinforcement Learning for
ORC Superheat Control
- URL: http://arxiv.org/abs/2308.02765v1
- Date: Sat, 5 Aug 2023 01:59:44 GMT
- Title: Surrogate Empowered Sim2Real Transfer of Deep Reinforcement Learning for
ORC Superheat Control
- Authors: Runze Lin, Yangyang Luo, Xialai Wu, Junghui Chen, Biao Huang, Lei Xie,
Hongye Su
- Abstract summary: This paper proposes a Sim2Real transfer learning-based DRL control method for ORC superheat control.
Experimental results show that the proposed method greatly improves the training speed of DRL in ORC control problems.
- Score: 12.567922037611261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Organic Rankine Cycle (ORC) is widely used in industrial waste heat
recovery due to its simple structure and easy maintenance. However, in the
context of smart manufacturing in the process industry, traditional model-based
optimization control methods are unable to adapt to the varying operating
conditions of the ORC system or sudden changes in operating modes. Deep
reinforcement learning (DRL) has significant advantages in situations with
uncertainty as it directly achieves control objectives by interacting with the
environment without requiring an explicit model of the controlled plant.
Nevertheless, direct application of DRL to physical ORC systems presents
unacceptable safety risks, and its generalization performance under model-plant
mismatch is insufficient to support ORC control requirements. Therefore, this
paper proposes a Sim2Real transfer learning-based DRL control method for ORC
superheat control, which aims to provide a new simple, feasible, and
user-friendly solution for energy system optimization control. Experimental
results show that the proposed method greatly improves the training speed of
DRL in ORC control problems and solves the generalization performance issue of
the agent under multiple operating conditions through Sim2Real transfer.
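The paper itself does not include code; as a rough illustration of the Sim2Real pattern it describes (spend most of the training budget on a cheap surrogate ORC model, then briefly adapt on the plant), here is a minimal Python sketch. The environment classes, the proportional policy, and the hill-climbing "trainer" standing in for a full DRL algorithm are all assumptions, not the authors' implementation.

```python
# Hypothetical Sim2Real sketch: pretrain on a cheap surrogate, fine-tune on
# a mismatched "plant". All models and the trainer are illustrative stubs.
import numpy as np

class SurrogateORCEnv:
    """Toy surrogate: superheat deviation responds to a pump-speed action."""
    def __init__(self, gain=0.8, noise=0.0):
        self.gain, self.noise = gain, noise

    def reset(self):
        self.superheat = np.random.uniform(-5.0, 5.0)  # deviation from setpoint, K
        return self.superheat

    def step(self, action):
        self.superheat += self.gain * action + self.noise * np.random.randn()
        return self.superheat, -abs(self.superheat)    # reward: drive deviation to 0

class PlantEnv(SurrogateORCEnv):
    """Stand-in for the real plant: mismatched gain plus process noise."""
    def __init__(self):
        super().__init__(gain=0.6, noise=0.3)

def episode_return(env, theta, horizon=50):
    s, total = env.reset(), 0.0
    for _ in range(horizon):
        s, r = env.step(-theta * s)                    # proportional policy a = -theta*s
        total += r
    return total

def train(env, theta, iters, sigma=0.05):
    """Random hill-climbing on the policy gain (stands in for full DRL)."""
    best = episode_return(env, theta)
    for _ in range(iters):
        cand = theta + sigma * np.random.randn()
        score = episode_return(env, cand)
        if score > best:
            theta, best = cand, score
    return theta

theta = train(SurrogateORCEnv(), theta=0.1, iters=200)  # cheap pretraining in sim
theta = train(PlantEnv(), theta, iters=20)              # short fine-tune on "plant"
print("transferred policy gain:", round(theta, 3))
```

The point of the pattern is visible in the last two lines: almost all learning happens on the surrogate, and only a short, safer adaptation phase touches the plant.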
Related papers
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and energy consumption.
In this work, we aim to bridge the performance gap between discrete and continuous control by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that an adaptive control resolution in combination with value decomposition yields simple critic-only algorithms with surprisingly strong performance on continuous control tasks.
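As a loose illustration of the coarse-to-fine idea (not the authors' code), the sketch below grows a discrete action grid over a continuous range in stages; the doubling schedule and the warm-start comment are assumptions.

```python
# Hypothetical coarse-to-fine action discretisation: start with few discrete
# actions over a continuous range and refine the grid as training progresses.
import numpy as np

def action_grid(low, high, n):
    """n evenly spaced discrete actions covering the continuous range."""
    return np.linspace(low, high, n)

for stage, n in enumerate([3, 5, 9, 17]):       # 2**k + 1 bins per stage
    actions = action_grid(-1.0, 1.0, n)
    # ... train a DQN-style critic over `actions` at this resolution,
    # warm-starting from the previous stage's Q-values ...
    print(f"stage {stage}: {n} actions -> {np.round(actions, 2)}")
```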
arXiv Detail & Related papers (2024-04-05T17:58:37Z)
- A Safe Reinforcement Learning Algorithm for Supervisory Control of Power Plants [7.1771300511732585]
Model-free reinforcement learning (RL) has emerged as a promising solution for control tasks.
We propose a chance-constrained RL algorithm based on Proximal Policy Optimization for supervisory control.
Our approach achieves the smallest distance of violation and violation rate in a load-follow maneuver for an advanced Nuclear Power Plant design.
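The summary names a chance-constrained variant of Proximal Policy Optimization; one common way to realise a chance constraint, sketched below as an assumption rather than the paper's exact method, is to adapt a Lagrange multiplier so the empirical violation probability stays under a budget.

```python
# Hypothetical chance-constraint handling: penalise the PPO loss when the
# empirical violation probability exceeds a budget, with dual ascent on lam.
import numpy as np

def chance_constrained_loss(ppo_loss, violations, budget=0.05, lam=1.0, lam_lr=0.1):
    """One dual-ascent step on the constraint P(violation) <= budget."""
    p_violation = float(np.mean(violations))               # empirical probability
    total = ppo_loss + lam * (p_violation - budget)        # penalised objective
    lam = max(0.0, lam + lam_lr * (p_violation - budget))  # multiplier update
    return total, lam

loss, lam = chance_constrained_loss(ppo_loss=0.3,
                                    violations=[0, 0, 1, 0, 0, 0, 0, 1])
print(round(loss, 3), round(lam, 3))
```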
arXiv Detail & Related papers (2024-01-23T17:52:49Z)
- Sim-to-Real Transfer of Adaptive Control Parameters for AUV Stabilization under Current Disturbance [1.099532646524593]
This paper presents a novel approach that merges the Maximum Entropy Deep Reinforcement Learning framework with a classic model-based control architecture to formulate an adaptive controller.
Within this framework, we introduce a Sim-to-Real transfer strategy comprising the following components: a bio-inspired experience replay mechanism, an enhanced domain randomisation technique, and an evaluation protocol executed on a physical platform.
Our experimental assessments demonstrate that this method effectively learns proficient policies from suboptimal simulated models of the AUV, resulting in control performance three times higher when transferred to a real-world vehicle.
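Of the three components listed, domain randomisation is the most mechanical; a plain (not the paper's "enhanced") version is sketched below, with illustrative parameter names and ranges.

```python
# Hypothetical domain randomisation: resample disturbance and dynamics
# parameters every episode so the policy trains on a distribution of plants.
import random

def randomized_auv_params():
    return {
        "current_speed":   random.uniform(0.0, 1.0),    # m/s ocean current
        "current_heading": random.uniform(0.0, 360.0),  # degrees
        "mass_scale":      random.uniform(0.9, 1.1),    # +/-10% model mismatch
        "drag_scale":      random.uniform(0.8, 1.2),
    }

for episode in range(3):
    params = randomized_auv_params()      # a new simulated plant each episode
    # env = AUVEnv(**params); run one training episode on it ...
    print(f"episode {episode}: {params}")
```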
arXiv Detail & Related papers (2023-10-17T08:46:56Z)
- Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z)
- Optimizing Industrial HVAC Systems with Hierarchical Reinforcement Learning [1.7489518849687256]
Reinforcement learning techniques have been developed to optimize industrial cooling systems, offering substantial energy savings.
A major challenge in industrial control is learning behaviors that remain feasible under real-world machinery constraints.
We use hierarchical reinforcement learning with multiple agents that control subsets of actions according to their operation time scales.
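The time-scale decomposition can be pictured with a small stub (an illustration of the stated idea, not the paper's system): a slow agent revises a setpoint every few steps while a fast agent acts at every step.

```python
# Hypothetical two-timescale control: a slow agent picks setpoints, a fast
# agent tracks them each step. Both "policies" are illustrative stand-ins.
import random

def slow_policy(obs):                 # e.g. chilled-water temperature setpoint
    return 6.0 + 2.0 * random.random()

def fast_policy(obs, setpoint):       # e.g. valve/fan moves toward the setpoint
    return setpoint - obs             # proportional stand-in for a learned policy

slow_period, obs, setpoint = 10, 8.0, None
for t in range(30):
    if t % slow_period == 0:
        setpoint = slow_policy(obs)   # slow timescale: every 10 steps
    action = fast_policy(obs, setpoint)
    obs += 0.5 * action               # toy plant response
```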
arXiv Detail & Related papers (2022-09-16T18:00:46Z)
- Steady-State Error Compensation in Reference Tracking and Disturbance Rejection Problems for Reinforcement Learning-Based Control [0.9023847175654602]
Reinforcement learning (RL) is a promising, emerging topic in automatic control applications.
Initiative action state augmentation (IASA) for actor-critic-based RL controllers is introduced.
This augmentation does not require any expert knowledge, leaving the approach model-free.
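The IASA details are not reproduced in this summary; the generic idea behind removing steady-state error, augmenting the controller's state with an integral-of-error channel, can be sketched as follows (the class name and channel layout are assumptions).

```python
# Generic sketch (not the paper's IASA): augment the RL observation with an
# accumulated tracking error so the policy can learn integral action.
import numpy as np

class IntegralAugmentedObs:
    """Wraps raw measurements with an integral-of-error channel."""
    def __init__(self, dt=0.1):
        self.dt, self.integral = dt, 0.0

    def __call__(self, measurement, reference):
        error = reference - measurement
        self.integral += error * self.dt      # integral action channel
        return np.array([measurement, reference, error, self.integral])

aug = IntegralAugmentedObs()
obs = aug(measurement=2.0, reference=5.0)     # feed `obs` to the actor-critic
print(obs)
```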
arXiv Detail & Related papers (2022-01-31T16:29:19Z)
- Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
- A Relearning Approach to Reinforcement Learning for Control of Smart Buildings [1.8799681615947088]
This paper demonstrates that continual relearning of control policies using incremental deep reinforcement learning (RL) can improve policy learning for non-stationary processes.
We develop an incremental RL technique that simultaneously reduces building energy consumption without sacrificing overall comfort.
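A minimal reading of "continual relearning", sketched below under assumed names and schedules, is a control loop that periodically fine-tunes the current policy on a rolling window of recent data instead of training once offline.

```python
# Hypothetical incremental-relearning loop: `policy` and `env` are stubs for
# an agent and a building simulator; window and schedule sizes are guesses.
from collections import deque

def control_loop(policy, env, steps=100_000, relearn_every=1_000):
    recent = deque(maxlen=10_000)             # rolling window of transitions
    obs = env.reset()
    for step in range(steps):
        action = policy.act(obs)
        obs, reward = env.step(action)
        recent.append((obs, action, reward))
        if step % relearn_every == 0 and len(recent) >= 1_000:
            policy.update(list(recent))       # incremental update, not full retrain
```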
arXiv Detail & Related papers (2020-08-04T23:31:05Z)
- Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum resulting in an optimal, yet physically feasible, robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
- NeurOpt: Neural network based optimization for building energy management and climate control [58.06411999767069]
We propose a data-driven control algorithm based on neural networks to reduce the cost of model identification.
We validate our learning and control algorithms on a two-story building with ten independently controlled zones, located in Italy.
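The pattern behind such data-driven controllers, fitting a model of the building from logged data and then optimising inputs through it, can be sketched in a few lines; the linear "model" and all numbers below are purely illustrative.

```python
# Hypothetical data-driven control sketch: identify a model from logged data,
# then pick the cheapest input that meets a comfort constraint through it.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))          # [heating power, outside temp]
y = 15 + 8 * X[:, 0] + 5 * X[:, 1]            # logged zone temperature

w, *_ = np.linalg.lstsq(np.c_[np.ones(500), X], y, rcond=None)  # model ID

def predicted_temp(u, t_out):                 # learned surrogate of the zone
    return w[0] + w[1] * u + w[2] * t_out

candidates = np.linspace(0, 1, 101)
feasible = [u for u in candidates if predicted_temp(u, t_out=0.3) >= 21.0]
print("minimum-energy input:", min(feasible) if feasible else "none")
```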
arXiv Detail & Related papers (2020-01-22T00:51:03Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
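The connection between information-theoretic MPC and entropy-regularised RL runs through the "soft" Bellman backup; a minimal tabular version of that backup is sketched below (the temperature and toy values are assumptions).

```python
# Soft Bellman backup used in entropy-regularised Q-learning:
# target = r + gamma * t * log sum_a' exp(Q(s', a') / t)
import numpy as np

def soft_backup(q_next, reward, gamma=0.99, temperature=1.0):
    soft_value = temperature * np.log(np.sum(np.exp(q_next / temperature)))
    return reward + gamma * soft_value

q_next = np.array([1.0, 0.5, -0.2])   # Q(s', a') over next actions
print(round(soft_backup(q_next, reward=0.1), 3))
```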
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.