SMPL: Simulated Industrial Manufacturing and Process Control Learning
Environments
- URL: http://arxiv.org/abs/2206.08851v1
- Date: Fri, 17 Jun 2022 15:51:35 GMT
- Title: SMPL: Simulated Industrial Manufacturing and Process Control Learning
Environments
- Authors: Mohan Zhang, Xiaozhou Wang, Benjamin Decardi-Nelson, Bo Song, An
Zhang, Jinfeng Liu, Sile Tao, Jiayi Cheng, Xiaohong Liu, DengDeng Yu, Matthew
Poon, Animesh Garg
- Abstract summary: There is little exploration of applying deep reinforcement learning to control manufacturing plants.
We develop an easy-to-use library that includes five high-fidelity simulation environments.
We benchmark online and offline, model-based and model-free reinforcement learning algorithms to provide baselines for follow-up research.
- Score: 26.451888230418746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional biological and pharmaceutical manufacturing plants are controlled
by human workers or pre-defined thresholds. Modernized factories have advanced
process control algorithms such as model predictive control (MPC). However,
there is little exploration of applying deep reinforcement learning to control
manufacturing plants. One of the reasons is the lack of high fidelity
simulations and standard APIs for benchmarking. To bridge this gap, we develop
an easy-to-use library that includes five high-fidelity simulation
environments: BeerFMTEnv, ReactorEnv, AtropineEnv, PenSimEnv and mAbEnv, which
cover a wide range of manufacturing processes. We build these environments on
published dynamics models. Furthermore, we benchmark online and offline,
model-based and model-free reinforcement learning algorithms to provide
baselines for follow-up research.
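As a quick illustration of how such a library can be exercised, the sketch below rolls a random policy through one of the environments via a Gym-style control loop. This is a minimal sketch under assumptions: the import path, class name, and default constructor are hypothetical placeholders; only the five environment names come from the abstract, and the released API may differ.

```python
# Hedged sketch: driving one SMPL environment with a Gym-style loop.
# The module path and class name below are assumptions for illustration.
import numpy as np

try:
    # Hypothetical import path; adjust to the actual SMPL package layout.
    from smpl.envs.reactorenv import ReactorEnvGym as ReactorEnv
except ImportError:
    ReactorEnv = None


def random_rollout(env, horizon=200):
    """Roll out a uniformly random policy and return the cumulative reward."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(horizon):
        action = env.action_space.sample()  # random control move
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward


if ReactorEnv is not None:
    env = ReactorEnv()  # default plant configuration assumed
    print("random-policy return:", random_rollout(env))
```

A learned controller would replace `env.action_space.sample()` with a policy's action, which is the interface the benchmarked online/offline RL algorithms would use.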
Related papers
- R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models [50.19174067263255]
We introduce prior preference learning techniques and self-revision schedules to help the agent excel in sparse-reward, continuous action, goal-based robotic control POMDP environments.
We show that our agents offer improved performance over state-of-the-art models in terms of cumulative rewards, relative stability, and success rate.
arXiv Detail & Related papers (2024-09-21T18:32:44Z)
- MFRL-BI: Design of a Model-free Reinforcement Learning Process Control Scheme by Using Bayesian Inference [5.375049126954924]
The design of a process control scheme is critical for quality assurance, reducing variations in manufacturing systems.
We propose a model-free reinforcement learning (MFRL) approach to conduct experiments and optimize control simultaneously according to real-time data.
arXiv Detail & Related papers (2023-09-17T08:18:55Z)
- Reduced Order Modeling of a MOOSE-based Advanced Manufacturing Model with Operator Learning [2.517043342442487]
Advanced Manufacturing (AM) has gained significant interest in the nuclear community for its potential application on nuclear materials.
One challenge is to obtain desired material properties by controlling the manufacturing process at runtime.
Intelligent AM based on deep reinforcement learning (DRL) relies on an automated process-level control mechanism to generate optimal design variables.
arXiv Detail & Related papers (2023-08-18T17:38:00Z)
- A Comparative Study of Machine Learning Algorithms for Anomaly Detection in Industrial Environments: Performance and Environmental Impact [62.997667081978825]
This study seeks to balance the demands of high-performance machine learning models against environmental sustainability.
Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance.
However, superior outcomes were obtained with optimised configurations, albeit with a commensurate increase in resource consumption.
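As an illustration of the kind of tree-ensemble baseline referenced above, the following minimal scikit-learn sketch trains a Random Forest on synthetic "sensor" data; the data, features, and hyperparameters are assumptions and do not reproduce the study's pipeline.

```python
# Minimal sketch of a Random Forest anomaly-detection baseline on synthetic
# sensor features; illustrative only, not the study's actual setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic features: mostly normal operation plus a small anomalous cluster.
normal = rng.normal(loc=0.0, scale=1.0, size=(950, 8))
anomalous = rng.normal(loc=3.0, scale=1.5, size=(50, 8))
X = np.vstack([normal, anomalous])
y = np.array([0] * 950 + [1] * 50)  # 1 marks an anomaly

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```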
arXiv Detail & Related papers (2023-07-01T15:18:00Z)
- A Generative Approach for Production-Aware Industrial Network Traffic Modeling [70.46446906513677]
We investigate the network traffic data generated from a laser cutting machine deployed in a Trumpf factory in Germany.
We analyze the traffic statistics, capture the dependencies between the internal states of the machine, and model the network traffic as a production state dependent process.
We compare the performance of various generative models, including the variational autoencoder (VAE), conditional variational autoencoder (CVAE), and generative adversarial network (GAN).
arXiv Detail & Related papers (2022-11-11T09:46:58Z)
- Generating Hidden Markov Models from Process Models Through Nonnegative Tensor Factorization [0.0]
We introduce a novel mathematically sound method that integrates theoretical process models with interrelated minimal Hidden Markov Models.
Our method consolidates: (a) theoretical process models, (b) HMMs, (c) coupled nonnegative matrix-tensor factorizations, and (d) custom model selection.
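As a minimal building block of the coupled nonnegative matrix-tensor factorizations listed above, the sketch below shows plain NMF with Lee-Seung multiplicative updates; it is a generic illustration, not the paper's coupled method or its model selection.

```python
# Plain nonnegative matrix factorization with multiplicative updates
# (Frobenius loss); a generic building block, not the paper's method.
import numpy as np


def nmf(X, rank, n_iter=300, eps=1e-9, seed=0):
    """Factor a nonnegative matrix X (n x m) as W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H, stays nonnegative
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W, stays nonnegative
    return W, H


# Tiny usage example on a random nonnegative matrix.
X = np.abs(np.random.default_rng(1).random((20, 12)))
W, H = nmf(X, rank=4)
print("reconstruction error:", np.linalg.norm(X - W @ H))
```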
arXiv Detail & Related papers (2022-10-03T16:19:27Z)
- Deep Learning based pipeline for anomaly detection and quality enhancement in industrial binder jetting processes [68.8204255655161]
Anomaly detection describes methods of finding abnormal states, instances or data points that differ from a normal value space.
This paper contributes to a data-centric way of approaching artificial intelligence in industrial production.
arXiv Detail & Related papers (2022-09-21T08:14:34Z)
- Towards Standardizing Reinforcement Learning Approaches for Stochastic Production Scheduling [77.34726150561087]
Reinforcement learning can be used to solve scheduling problems.
Existing studies rely on (sometimes) complex simulations for which the code is unavailable.
There is a vast array of RL designs to choose from.
Standardization of model descriptions - both the production setup and the RL design - and of the validation scheme is a prerequisite.
arXiv Detail & Related papers (2021-04-16T16:07:10Z)
- Sim-Env: Decoupling OpenAI Gym Environments from Simulation Models [0.0]
Reinforcement learning (RL) is one of the most active fields of AI research.
Development methodology still lags behind, with a severe lack of standard APIs to foster the development of RL applications.
We present a workflow and tools for the decoupled development and maintenance of multi-purpose agent-based models and derived single-purpose reinforcement learning environments.
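The sketch below illustrates that decoupling pattern in generic form: a separately maintained simulation model wrapped behind a thin, single-purpose Gym environment. `PlantModel`, its methods, and the setpoint reward are placeholder assumptions, not the Sim-Env tooling itself.

```python
# Generic decoupling sketch: a multi-purpose simulation model kept separate
# from a thin Gym wrapper that exposes one RL task (setpoint tracking).
import gym
import numpy as np
from gym import spaces


class PlantModel:
    """Stand-in for an independently maintained process/agent-based model."""

    def reset(self):
        self.level = 0.5

    def advance(self, u):
        # Toy first-order dynamics driven by the control input u in [0, 1].
        self.level += 0.1 * (u - self.level)

    def observe(self):
        return np.array([self.level], dtype=np.float32)


class PlantEnv(gym.Env):
    """Single-purpose RL view of the model: track a fixed setpoint."""

    def __init__(self, model, setpoint=0.8):
        self.model, self.setpoint = model, setpoint
        self.action_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self):
        self.model.reset()
        return self.model.observe()

    def step(self, action):
        self.model.advance(float(action[0]))
        obs = self.model.observe()
        reward = -abs(float(obs[0]) - self.setpoint)  # penalize setpoint error
        return obs, reward, False, {}
```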
arXiv Detail & Related papers (2021-02-19T09:25:21Z)
- Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration [130.89746032163106]
We propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data.
We show that the energy function and sampler can be trained efficiently via a new variational form of power iteration.
We present an energy-model-guided fuzzer for software testing that achieves performance comparable to well-engineered fuzzing engines such as libfuzzer.
arXiv Detail & Related papers (2020-11-10T19:31:29Z)
- Data-driven surrogate modelling and benchmarking for process equipment [1.8395181176356432]
A suite of computational fluid dynamics (CFD) simulations geared toward chemical process equipment modeling has been developed.
Various regression-based active learning strategies are explored with these CFD simulators in-the-loop under the constraints of a limited function evaluation budget.
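The sketch below shows one such regression-based active-learning loop in generic form: a Gaussian-process surrogate is refit after each simulator call and the next query is the candidate with the highest predictive uncertainty, under a fixed evaluation budget. The toy 1-D "simulator" and the uncertainty-sampling rule are assumptions, not the paper's CFD suite or its specific strategies.

```python
# Generic surrogate-with-simulator-in-the-loop sketch under a fixed budget:
# fit a GP surrogate, query the most uncertain candidate, repeat.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def expensive_simulator(x):
    """Stand-in for a costly CFD run; returns a scalar quantity of interest."""
    return np.sin(3.0 * x) + 0.5 * x


budget = 15                                    # total simulator calls allowed
candidates = np.linspace(0.0, 3.0, 200)[:, None]

# Seed the surrogate with a handful of initial runs.
X = np.array([[0.2], [1.5], [2.8]])
y = expensive_simulator(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
while len(X) < budget:
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]        # uncertainty sampling
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_simulator(x_next[0]))

print("queried inputs:", X.ravel())
```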
arXiv Detail & Related papers (2020-03-13T18:22:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.