A Mini Review on the utilization of Reinforcement Learning with OPC UA
- URL: http://arxiv.org/abs/2305.15113v2
- Date: Mon, 30 Oct 2023 11:52:42 GMT
- Title: A Mini Review on the utilization of Reinforcement Learning with OPC UA
- Authors: Simon Schindler, Martin Uray, Stefan Huber
- Abstract summary: Reinforcement Learning (RL) is a powerful machine learning paradigm that has been applied in various fields such as robotics, natural language processing and game playing.
The key to fully exploiting this potential is the seamless integration of RL into existing industrial systems.
This work serves to bridge this gap by providing a brief technical overview of both technologies and carrying out a semi-exhaustive literature review.
- Score: 0.9208007322096533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement Learning (RL) is a powerful machine learning paradigm that has
been applied in various fields such as robotics, natural language processing,
and game playing, achieving state-of-the-art results. Targeted at solving
sequential decision-making problems, it is by design able to learn from
experience and can therefore adapt to changing, dynamic environments. These
capabilities make it a prime candidate for controlling and optimizing complex
processes in industry. The key to fully exploiting this potential is the
seamless integration of RL into existing industrial systems. The industrial
communication standard Open Platform Communications Unified Architecture (OPC
UA) could bridge this gap. However, since RL and OPC UA come from different
fields, researchers must first bring the two technologies together. This work
addresses this need by providing a brief technical overview of both
technologies and carrying out a semi-exhaustive literature review to gain
insights into how RL and OPC UA are applied in
combination. The survey identifies three main research topics at the
intersection of RL and OPC UA. The results of the literature
review show that RL is a promising technology for the control and optimization
of industrial processes, but does not yet have the necessary standardized
interfaces to be deployed in real-world scenarios with reasonably low effort.
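The abstract's central idea, an RL agent exchanging observations and actions with an industrial process through OPC UA nodes, can be sketched as follows. This is a minimal illustration, not the paper's method: every name and node id is hypothetical, and an in-memory stub stands in for a real OPC UA client (in practice one might use a library such as `asyncua`), so the sketch runs without a server.

```python
from dataclasses import dataclass, field

@dataclass
class StubOpcUaTransport:
    """In-memory stand-in for an OPC UA client, mapping node ids to values."""
    nodes: dict = field(default_factory=lambda: {
        "ns=2;s=Temperature": 80.0,   # sensor node (hypothetical node id)
        "ns=2;s=HeaterPower": 0.0,    # actuator node (hypothetical node id)
    })

    def read(self, node_id: str) -> float:
        return self.nodes[node_id]

    def write(self, node_id: str, value: float) -> None:
        self.nodes[node_id] = value

class ProcessEnv:
    """Gym-style wrapper: observations and actions flow over the transport."""
    TARGET = 100.0  # desired temperature (illustrative)

    def __init__(self, transport):
        self.t = transport

    def reset(self) -> float:
        self.t.write("ns=2;s=Temperature", 80.0)
        return self.t.read("ns=2;s=Temperature")

    def step(self, action: float):
        # Write the action to the actuator node, then read the process state.
        self.t.write("ns=2;s=HeaterPower", action)
        temp = self.t.read("ns=2;s=Temperature")
        # Toy plant dynamics: temperature relaxes toward 80 + 30 * power.
        temp += 0.5 * (80.0 + 30.0 * action - temp)
        self.t.write("ns=2;s=Temperature", temp)
        reward = -abs(self.TARGET - temp)
        return temp, reward, False, {}

env = ProcessEnv(StubOpcUaTransport())
obs = env.reset()
for _ in range(20):
    # Stand-in for a learned policy: a clipped proportional controller.
    action = max(0.0, min(1.0, (env.TARGET - obs) / 30.0))
    obs, _, _, _ = env.step(action)
```

With this toy plant the proportional stand-in policy settles at a temperature of 90, short of the 100-degree target, since proportional-only control leaves a steady-state offset; an RL policy trained on the same interface could learn to compensate. Deploying against a real server would mean replacing the stub's `read`/`write` with actual OPC UA node accesses, which is exactly the kind of standardized interface the survey finds lacking.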
Related papers
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [85.21378553454672]
We develop a library containing a sample efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, are extremely robust even under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- LExCI: A Framework for Reinforcement Learning with Embedded Systems [1.8218298349840023]
We present a framework named LExCI, which bridges the gap between RL libraries and embedded systems.
It provides a free and open-source tool for training agents on embedded systems using the open-source library RLlib.
Its operability is demonstrated with two state-of-the-art RL algorithms and a rapid control prototyping system.
arXiv Detail & Related papers (2023-12-05T13:06:25Z)
- Deep reinforcement learning for machine scheduling: Methodology, the state-of-the-art, and future directions [2.4541568670428915]
Machine scheduling aims to optimize job assignments to machines while adhering to manufacturing rules and job specifications.
Deep Reinforcement Learning (DRL), a key component of artificial general intelligence, has shown promise in various domains like gaming and robotics.
This paper offers a comprehensive review and comparison of DRL-based approaches, highlighting their methodology, applications, advantages, and limitations.
arXiv Detail & Related papers (2023-10-04T22:45:09Z)
- An Architecture for Deploying Reinforcement Learning in Industrial Environments [3.18294468240512]
We present an OPC UA based Operational Technology (OT)-aware RL architecture.
We define an OPC UA information model allowing for a generalized plug-and-play like approach for exchanging the RL agent.
By means of solving a toy example, we show that this architecture can be used to determine the optimal policy.
arXiv Detail & Related papers (2023-06-02T10:22:01Z)
- Karolos: An Open-Source Reinforcement Learning Framework for Robot-Task Environments [0.3867363075280544]
In reinforcement learning (RL) research, simulations enable benchmark comparisons between algorithms.
In this paper, we introduce Karolos, a framework developed for robotic applications.
The code is open source and published on GitHub with the aim of promoting research of RL applications in robotics.
arXiv Detail & Related papers (2022-12-01T23:14:02Z)
- Automated Reinforcement Learning (AutoRL): A Survey and Open Problems [92.73407630874841]
Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL.
We provide a common taxonomy, discuss each area in detail and pose open problems which would be of interest to researchers going forward.
arXiv Detail & Related papers (2022-01-11T12:41:43Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks still remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- Federated Learning for Industrial Internet of Things in Future Industries [106.13524161081355]
The Industrial Internet of Things (IIoT) offers promising opportunities to transform the operation of industrial systems.
Recently, artificial intelligence (AI) has been widely utilized for realizing intelligent IIoT applications.
Federated Learning (FL) is particularly attractive for intelligent IIoT networks by coordinating multiple IIoT devices and machines to perform AI training at the network edge.
arXiv Detail & Related papers (2021-05-31T01:02:59Z)
- Robust Multi-Modal Policies for Industrial Assembly via Reinforcement Learning and Demonstrations: A Large-Scale Study [14.696027001985554]
We argue that it is the prohibitively large design space for Deep Reinforcement Learning (DRL) that is truly responsible for this lack of adoption.
This study suggests that DRL is capable of outperforming not only established engineered approaches, but the human motor system as well.
arXiv Detail & Related papers (2021-03-21T23:14:27Z)
- Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
Main aim of the AIOPS workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.