An Architecture for Deploying Reinforcement Learning in Industrial
Environments
- URL: http://arxiv.org/abs/2306.01420v1
- Date: Fri, 2 Jun 2023 10:22:01 GMT
- Title: An Architecture for Deploying Reinforcement Learning in Industrial
Environments
- Authors: Georg Schäfer, Reuf Kozlica, Stefan Wegenkittl, Stefan Huber
- Abstract summary: We present an OPC UA based Operational Technology (OT)-aware RL architecture.
We define an OPC UA information model that enables a generalized, plug-and-play-like approach for exchanging the RL agent.
By means of solving a toy example, we show that this architecture can be used to determine the optimal policy.
- Score: 3.18294468240512
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Industry 4.0 is driven by demands such as shorter time-to-market,
mass customization of products, and batch-size-one production. Reinforcement
Learning (RL), a machine learning paradigm that has shown great potential for
matching and surpassing human-level performance in numerous complex tasks, can
help meet these demands. In this paper, we present an OPC UA based,
Operational Technology (OT)-aware RL architecture that extends the standard RL
setting and combines it with the setting of digital twins. Moreover, we define
an OPC UA information model that enables a generalized, plug-and-play-like
approach for exchanging the RL agent used. Finally, we demonstrate and
evaluate the architecture with a proof of concept: by solving a toy example,
we show that the architecture can be used to determine the optimal policy
using a real control system.
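The interaction loop implied by such an OT-aware architecture can be sketched with a plain OPC UA client: the RL agent reads the plant state from nodes exposed by the control system (or its digital twin) and writes actions back. The sketch below uses the python-opcua library; the endpoint URL and node identifiers are hypothetical placeholders, not the information model defined in the paper, and timing and synchronization with the control system are glossed over.

```python
# Hedged sketch: an RL agent exchanging state, action and reward with a
# control system over OPC UA. Endpoint and node IDs are placeholders; the
# paper defines its own OPC UA information model for this exchange.
from opcua import Client

ENDPOINT = "opc.tcp://localhost:4840"   # assumed server endpoint
STATE_NODE = "ns=2;s=RL.State"          # assumed node identifiers
ACTION_NODE = "ns=2;s=RL.Action"
REWARD_NODE = "ns=2;s=RL.Reward"

def select_action(state):
    # Placeholder policy; a real agent would map the observed state
    # to an action here (and learn from the reward signal).
    return 0.0

client = Client(ENDPOINT)
client.connect()
try:
    state_node = client.get_node(STATE_NODE)
    action_node = client.get_node(ACTION_NODE)
    reward_node = client.get_node(REWARD_NODE)

    for step in range(100):                 # one illustrative episode
        state = state_node.get_value()      # observe plant / digital twin
        action = select_action(state)
        action_node.set_value(action)       # actuate via the control system
        reward = reward_node.get_value()    # feedback for the agent
finally:
    client.disconnect()
```

A plug-and-play agent exchange, as targeted by the paper's information model, would additionally standardize which nodes (state, action, reward, episode control) an agent is expected to read and write.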
Related papers
- Inference Optimization of Foundation Models on AI Accelerators [68.24450520773688]
Powerful foundation models, including large language models (LLMs), with Transformer architectures have ushered in a new era of Generative AI.
As the number of model parameters reaches hundreds of billions, their deployment incurs prohibitive inference costs and high latency in real-world scenarios.
This tutorial offers a comprehensive discussion on complementary inference optimization techniques using AI accelerators.
arXiv Detail & Related papers (2024-07-12T09:24:34Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely adopted algorithms, and we propose advanced training techniques (a minimal PPO training sketch follows this entry).
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
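As a generic point of reference for the PPO-based entry above, here is a minimal training sketch using Stable-Baselines3 with Gymnasium; the environment is a stand-in, not the paper's aquatic-navigation benchmark, and none of the paper's advanced training techniques are reproduced.

```python
# Minimal, generic PPO training sketch (Stable-Baselines3 + Gymnasium).
# The environment is a placeholder, NOT the aquatic-navigation benchmark.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")               # placeholder task
model = PPO("MlpPolicy", env, verbose=0)    # default PPO hyperparameters
model.learn(total_timesteps=10_000)         # short run, illustration only

obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)  # greedy action from the policy
```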
- LExCI: A Framework for Reinforcement Learning with Embedded Systems [1.8218298349840023]
We present a framework named LExCI, which bridges the gap between RL libraries and embedded systems.
It provides a free and open-source tool for training agents on embedded systems using the open-source library RLlib (a generic RLlib training sketch follows this entry).
Its operability is demonstrated with two state-of-the-art RL-algorithms and a rapid control prototyping system.
arXiv Detail & Related papers (2023-12-05T13:06:25Z)
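For context on the RLlib dependency named in the LExCI entry above, the following is a minimal, generic RLlib (Ray 2.x) training sketch. It shows plain RLlib usage only and is not LExCI's own interface; the environment is a placeholder.

```python
# Generic RLlib (Ray 2.x) PPO training sketch -- plain RLlib usage, not the
# LExCI framework's API; the environment is a placeholder.
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")    # placeholder environment
    .framework("torch")
)
algo = config.build()
for _ in range(3):                 # a few training iterations for illustration
    result = algo.train()
algo.stop()
```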
- Serving Deep Learning Model in Relational Databases [70.53282490832189]
Serving deep learning (DL) models on relational data has become a critical requirement across diverse commercial and scientific domains.
We highlight three pivotal paradigms: The state-of-the-art DL-centric architecture offloads DL computations to dedicated DL frameworks.
The potential UDF-centric architecture encapsulates one or more tensor computations into User Defined Functions (UDFs) within the relational database management system (RDBMS).
arXiv Detail & Related papers (2023-10-07T06:01:35Z)
- A Mini Review on the utilization of Reinforcement Learning with OPC UA [0.9208007322096533]
Reinforcement Learning (RL) is a powerful machine learning paradigm that has been applied in various fields such as robotics, natural language processing and game playing.
The key to fully exploiting this potential is the seamless integration of RL into existing industrial systems.
This work serves to bridge this gap by providing a brief technical overview of both technologies and carrying out a semi-exhaustive literature review.
arXiv Detail & Related papers (2023-05-24T13:03:48Z)
- Multi-Agent Reinforcement Learning for Microprocessor Design Space Exploration [71.95914457415624]
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high performance and energy efficiency.
We propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem.
Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines.
arXiv Detail & Related papers (2022-11-29T17:10:24Z)
- Architecting and Visualizing Deep Reinforcement Learning Models [77.34726150561087]
Deep Reinforcement Learning (DRL) combines deep neural networks with reinforcement learning to train agents that learn behavior through interaction with an environment.
In this paper, we present a new Atari Pong game environment, a policy gradient based DRL model, a real-time network visualization, and an interactive display to help build intuition and awareness of the mechanics of DRL inference.
arXiv Detail & Related papers (2021-12-02T17:48:26Z)
- RL-DARTS: Differentiable Architecture Search for Reinforcement Learning [62.95469460505922]
We introduce RL-DARTS, one of the first applications of Differentiable Architecture Search (DARTS) in reinforcement learning (RL).
By replacing the image encoder with a DARTS supernet, our search method is sample-efficient, requires minimal extra compute resources, and is also compatible with off-policy and on-policy RL algorithms, needing only minor changes in preexisting code.
We show that the supernet gradually learns better cells, leading to alternative architectures which can be highly competitive against manually designed policies, but also verify previous design choices for RL policies.
arXiv Detail & Related papers (2021-06-04T03:08:43Z)
- Integrating Distributed Architectures in Highly Modular RL Libraries [4.297070083645049]
Most popular reinforcement learning libraries advocate for highly modular agent composability.
We propose a versatile approach that allows the definition of RL agents at different scales through independent reusable components.
arXiv Detail & Related papers (2020-07-06T10:22:07Z)
- The Adversarial Resilience Learning Architecture for AI-based Modelling, Exploration, and Operation of Complex Cyber-Physical Systems [0.0]
We describe the concept of Adversarial Resilience Learning (ARL), which formulates a new approach to complex environment checking and resilient operation.
The quintessence of ARL lies in both agents exploring the system and training each other without any domain knowledge.
Here, we introduce the ARL software architecture, which allows the use of a wide range of model-free as well as model-based DRL algorithms.
arXiv Detail & Related papers (2020-05-27T19:19:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.