Transfer learning for process design with reinforcement learning
- URL: http://arxiv.org/abs/2302.03375v1
- Date: Tue, 7 Feb 2023 10:31:14 GMT
- Title: Transfer learning for process design with reinforcement learning
- Authors: Qinghe Gao, Haoyu Yang, Shachi M. Shanbhag, Artur M. Schweidtmann
- Abstract summary: We propose to utilize transfer learning for process design with RL in combination with rigorous simulation methods.
Transfer learning is an established approach from machine learning that stores knowledge gained while solving one problem and reuses this information on a different target domain.
Our results show that transfer learning enables RL to design economically feasible flowsheets with DWSIM, resulting in a flowsheet with 8% higher revenue.
- Score: 3.3084327202914476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Process design is a creative task that is currently performed manually by
engineers. Artificial intelligence provides new potential to facilitate process
design. Specifically, reinforcement learning (RL) has shown some success in
automating process design by integrating data-driven models that learn to build
process flowsheets with process simulation in an iterative design process.
However, one major challenge in the learning process is that the RL agent
demands numerous process simulations in rigorous process simulators, thereby
requiring long simulation times and expensive computational power. Therefore,
typically short-cut simulation methods are employed to accelerate the learning
process. Short-cut methods can, however, lead to inaccurate results. We thus
propose to utilize transfer learning for process design with RL in combination
with rigorous simulation methods. Transfer learning is an established approach
from machine learning that stores knowledge gained while solving one problem
and reuses this information on a different target domain. We integrate transfer
learning in our RL framework for process design and apply it to an illustrative
case study comprising equilibrium reactions, azeotropic separation, and
recycles; our method designs economically feasible flowsheets with stable
interaction with DWSIM. Our results show that transfer learning enables RL to
design economically feasible flowsheets with DWSIM, resulting in a flowsheet
with 8% higher revenue, while the learning time is reduced by a factor of two.
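As a rough, minimal sketch of the transfer-learning scheme described above (assuming toy reward functions in place of the short-cut and rigorous DWSIM simulations, and a placeholder flowsheet-state encoding; none of this is the paper's actual setup), one can pretrain a small policy on the cheap surrogate and then fine-tune the same weights against the expensive simulator:

```python
# Minimal transfer-learning sketch: pretrain on a cheap short-cut reward model,
# then fine-tune the same policy weights on a stand-in for a rigorous simulation.
# All rewards, sizes, and the single-step setting are placeholders.
import torch
import torch.nn as nn
from torch.distributions import Categorical

N_UNITS = 8          # hypothetical number of candidate unit operations
STATE_DIM = 16       # hypothetical flowsheet-state encoding size

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_UNITS))

def shortcut_reward(action):
    # Cheap surrogate: crude economic score per chosen unit (placeholder).
    return float(action) / N_UNITS

def rigorous_reward(action):
    # Expensive-simulation stand-in: shifted optimum to mimic model mismatch.
    return float((action + 1) % N_UNITS) / N_UNITS

def train(reward_fn, episodes, lr):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(episodes):
        state = torch.randn(STATE_DIM)                 # placeholder flowsheet state
        dist = Categorical(logits=policy(state))
        action = dist.sample()
        loss = -dist.log_prob(action) * reward_fn(action.item())  # REINFORCE step
        opt.zero_grad(); loss.backward(); opt.step()

# 1) Pretrain on the cheap short-cut model (many fast episodes).
train(shortcut_reward, episodes=2000, lr=1e-3)
# 2) Transfer: keep the learned weights and fine-tune on the rigorous simulator
#    with far fewer, more expensive episodes and a smaller step size.
train(rigorous_reward, episodes=200, lr=1e-4)
```

In the paper's framework the agent builds flowsheets step by step and queries DWSIM for the economic evaluation; the single-step reward here only illustrates where the pretrained weights are reused.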
Related papers
- Machine learning surrogates for efficient hydrologic modeling: Insights from stochastic simulations of managed aquifer recharge [0.0]
We propose a hybrid modeling workflow for process-based hydrologic models and machine learning surrogate models.
As a case study, we apply this workflow to simulations of variably saturated groundwater flow at a prospective managed aquifer recharge site.
Our results demonstrate that ML surrogate models can achieve under 10% mean absolute percentage error and yield order-of-magnitude runtime savings.
arXiv Detail & Related papers (2024-07-30T15:24:27Z)
- The Artificial Neural Twin -- Process Optimization and Continual Learning in Distributed Process Chains [3.79770624632814]
We propose the Artificial Neural Twin, which combines concepts from model predictive control, deep learning, and sensor networks.
Our approach introduces differentiable data fusion to estimate the state of distributed process steps.
By treating the interconnected process steps as a quasi neural network, we can backpropagate loss gradients to the process parameters for process optimization or model fine-tuning.
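A minimal sketch of that idea, assuming two invented differentiable process-step models and a placeholder quality loss (not the paper's models), shows how a downstream loss can be backpropagated through the chain to the process parameters of both steps:

```python
# Sketch: treat two interconnected process steps as a differentiable chain and
# backpropagate a downstream loss to tunable process parameters (placeholders).
import torch

# Tunable process parameters for two steps (e.g. temperature, flow setpoints).
theta1 = torch.tensor([0.5], requires_grad=True)
theta2 = torch.tensor([1.0], requires_grad=True)

def step1(feed, theta):
    # Placeholder differentiable model of the first process step.
    return feed * torch.sigmoid(theta)

def step2(intermediate, theta):
    # Placeholder differentiable model of the second process step.
    return intermediate * theta - 0.1 * theta ** 2

opt = torch.optim.SGD([theta1, theta2], lr=0.05)
target_quality = torch.tensor([0.8])

for _ in range(100):
    feed = torch.tensor([1.0])
    product = step2(step1(feed, theta1), theta2)      # forward through the chain
    loss = (product - target_quality).pow(2).mean()   # downstream quality loss
    opt.zero_grad(); loss.backward(); opt.step()      # gradients reach both steps
```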
arXiv Detail & Related papers (2024-03-27T08:34:39Z)
- Supervised Pretraining Can Learn In-Context Reinforcement Learning [96.62869749926415]
In this paper, we study the in-context learning capabilities of transformers in decision-making problems.
We introduce and study Decision-Pretrained Transformer (DPT), a supervised pretraining method where the transformer predicts an optimal action.
We find that the pretrained transformer can be used to solve a range of RL problems in-context, exhibiting both exploration online and conservatism offline.
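A minimal sketch of such supervised pretraining, assuming synthetic interaction histories and placeholder "optimal action" labels rather than the paper's data-generation procedure: a transformer reads a context plus a query state and is trained with cross-entropy to predict the optimal action.

```python
# Sketch of decision-pretraining: a transformer encoder sees a context of past
# interactions plus a query state and predicts the optimal action (toy data).
import torch
import torch.nn as nn

D, N_ACTIONS, CTX = 32, 4, 10
embed = nn.Linear(D, D)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True), num_layers=2)
head = nn.Linear(D, N_ACTIONS)
opt = torch.optim.Adam(
    list(embed.parameters()) + list(encoder.parameters()) + list(head.parameters()),
    lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    context = torch.randn(16, CTX, D)                    # batch of interaction histories
    query = torch.randn(16, 1, D)                        # query states
    optimal_action = torch.randint(0, N_ACTIONS, (16,))  # placeholder labels
    h = encoder(embed(torch.cat([context, query], dim=1)))
    logits = head(h[:, -1])                              # prediction for the query state
    loss = loss_fn(logits, optimal_action)
    opt.zero_grad(); loss.backward(); opt.step()
```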
arXiv Detail & Related papers (2023-06-26T17:58:50Z)
- Data efficient surrogate modeling for engineering design: Ensemble-free batch mode deep active learning for regression [0.6021787236982659]
We propose a simple and scalable approach for active learning that works in a student-teacher manner to train a surrogate model.
Using the proposed approach, we achieve the same level of surrogate accuracy as baselines such as DBAL and Monte Carlo sampling.
arXiv Detail & Related papers (2022-11-16T02:31:57Z)
- Simulation-Based Parallel Training [55.41644538483948]
We present our ongoing work to design a training framework that alleviates those bottlenecks.
It generates data in parallel with the training process.
We present a strategy to mitigate this bias with a memory buffer.
arXiv Detail & Related papers (2022-11-08T09:31:25Z)
- SAM-RL: Sensing-Aware Model-Based Reinforcement Learning via Differentiable Physics-Based Simulation and Rendering [49.78647219715034]
We propose a sensing-aware model-based reinforcement learning system called SAM-RL.
With the sensing-aware learning pipeline, SAM-RL allows a robot to select an informative viewpoint to monitor the task process.
We apply our framework to real world experiments for accomplishing three manipulation tasks: robotic assembly, tool manipulation, and deformable object manipulation.
arXiv Detail & Related papers (2022-10-27T05:30:43Z)
- Flowsheet synthesis through hierarchical reinforcement learning and graph neural networks [0.4588028371034406]
We propose a reinforcement learning algorithm for chemical process design based on actor-critic logic.
Our proposed algorithm represents chemical processes as graphs and uses graph convolutional neural networks to learn from process graphs.
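As a minimal sketch of learning from a process graph, the hand-rolled graph-convolution pass below operates on an invented four-unit flowsheet; adjacency, features, and dimensions are placeholders, not the paper's architecture:

```python
# Sketch: a minimal graph-convolution pass over a toy flowsheet graph
# (adjacency, node features, and dimensions are placeholders).
import torch
import torch.nn as nn

# Toy flowsheet: four unit operations connected feed -> reactor -> column -> product.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
adj_hat = adj + torch.eye(4)                      # add self-loops
deg_inv_sqrt = adj_hat.sum(dim=1).pow(-0.5)
norm_adj = deg_inv_sqrt.unsqueeze(1) * adj_hat * deg_inv_sqrt.unsqueeze(0)

x = torch.randn(4, 8)                             # placeholder unit-operation features
W1, W2 = nn.Linear(8, 16), nn.Linear(16, 16)

h = torch.relu(W1(norm_adj @ x))                  # graph convolution, layer 1
h = torch.relu(W2(norm_adj @ h))                  # graph convolution, layer 2
flowsheet_embedding = h.mean(dim=0)               # pooled input for actor/critic heads
print(flowsheet_embedding.shape)                  # torch.Size([16])
```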
arXiv Detail & Related papers (2022-07-25T10:42:15Z)
- A Workflow for Offline Model-Free Robotic Reinforcement Learning [117.07743713715291]
Offline reinforcement learning (RL) enables learning control policies by utilizing only prior experience, without any online interaction.
We develop a practical workflow for using offline RL analogous to the relatively well-understood workflows for supervised learning problems.
We demonstrate the efficacy of this workflow in producing effective policies without any online tuning.
arXiv Detail & Related papers (2021-09-22T16:03:29Z)
- Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation [91.05073136215886]
"Actor-Learner Distillation" transfers learning progress from a large capacity learner model to a small capacity actor model.
We demonstrate in several challenging memory environments that using Actor-Learner Distillation recovers the clear sample-efficiency gains of the transformer learner model.
arXiv Detail & Related papers (2021-04-04T17:56:34Z)
- RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689]
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
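A minimal sketch of such a consistency term, assuming a toy generator and Q-network rather than the RL-CycleGAN architecture: the translation is penalized whenever it changes the Q-values assigned to the scene.

```python
# Sketch of an RL-scene consistency term: penalize Q-value drift between an
# image and its translation (generator and Q-network are toy placeholders).
import torch
import torch.nn as nn

N_ACTIONS = 6
generator = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))   # sim -> "real"
q_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, N_ACTIONS))

sim_batch = torch.rand(8, 3, 32, 32)              # placeholder simulated images
translated = generator(sim_batch)

q_sim = q_net(sim_batch)
q_translated = q_net(translated)
# Consistency loss: translation should not change the Q-values of the scene.
rl_scene_consistency = nn.functional.mse_loss(q_translated, q_sim)
rl_scene_consistency.backward()                   # would be added to the GAN losses
```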
arXiv Detail & Related papers (2020-06-16T08:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.