Karolos: An Open-Source Reinforcement Learning Framework for Robot-Task
Environments
- URL: http://arxiv.org/abs/2212.00906v1
- Date: Thu, 1 Dec 2022 23:14:02 GMT
- Title: Karolos: An Open-Source Reinforcement Learning Framework for Robot-Task
Environments
- Authors: Christian Bitter, Timo Thun, Tobias Meisen
- Abstract summary: In reinforcement learning (RL) research, simulations enable benchmarking of algorithms.
In this paper, we introduce Karolos, a framework developed for robotic applications.
The code is open source and published on GitHub with the aim of promoting research on RL applications in robotics.
- Score: 0.3867363075280544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In reinforcement learning (RL) research, simulations enable benchmarking
of algorithms, as well as prototyping and hyper-parameter tuning of
agents. To promote RL both in research and real-world applications,
frameworks are required that are, on the one hand, efficient enough to run
experiments as fast as possible. On the other hand, they must be flexible
enough to allow the integration of newly developed optimization techniques,
e.g. new RL algorithms, which are continuously put forward by an active
research community. In this paper, we introduce Karolos, an RL framework
developed for robotic applications, with a particular focus on transfer
scenarios with varying robot-task combinations, reflected in a modular
environment architecture. In addition, we provide implementations of
state-of-the-art RL algorithms along with common learning-facilitating
enhancements, as well as an architecture that parallelizes environments across
multiple processes to significantly speed up experiments. The code is open
source and published on GitHub with the aim of promoting research on RL
applications in robotics.
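The abstract's claim about parallelizing environments across multiple processes can be illustrated with a minimal sketch of the common worker-pipe pattern. This is a generic illustration, not Karolos's actual API; the class and function names below are made up for the example, and a toy counting environment stands in for a robot-task environment:

```python
import multiprocessing as mp

class CountingEnv:
    """Toy stand-in for a robot-task environment: the state is a step
    counter and an episode ends after five steps."""

    def reset(self):
        self.t = 0
        return self.t

    def step(self, action):
        self.t += 1
        return self.t, float(action), self.t >= 5  # obs, reward, done

def _worker(conn, env_fn):
    # Each process owns one environment instance and serves commands
    # sent over its end of the pipe until it is told to close.
    env = env_fn()
    while True:
        cmd, arg = conn.recv()
        if cmd == "reset":
            conn.send(env.reset())
        elif cmd == "step":
            conn.send(env.step(arg))
        elif cmd == "close":
            conn.close()
            break

class ParallelEnvs:
    """Runs one environment per process and steps them in lock-step."""

    def __init__(self, env_fns):
        # "fork" keeps the sketch short; a portable implementation
        # would also support the "spawn" start method.
        ctx = mp.get_context("fork")
        self.conns, self.procs = [], []
        for env_fn in env_fns:
            parent, child = ctx.Pipe()
            proc = ctx.Process(target=_worker, args=(child, env_fn),
                               daemon=True)
            proc.start()
            self.conns.append(parent)
            self.procs.append(proc)

    def reset(self):
        for conn in self.conns:
            conn.send(("reset", None))
        return [conn.recv() for conn in self.conns]

    def step(self, actions):
        # Send all actions first, then collect results, so the
        # environments actually step concurrently.
        for conn, action in zip(self.conns, actions):
            conn.send(("step", action))
        return [conn.recv() for conn in self.conns]

    def close(self):
        for conn in self.conns:
            conn.send(("close", None))
        for proc in self.procs:
            proc.join()
```

Stepping N environments this way costs roughly the wall-clock time of one environment step rather than N, which is the kind of speed-up the abstract refers to.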
Related papers
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement
Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, extreme robustness even under perturbations, and exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Scilab-RL: A software framework for efficient reinforcement learning and cognitive modeling research [0.0]
Scilab-RL is a software framework for efficient research in cognitive modeling and reinforcement learning for robotic agents.
It focuses on goal-conditioned reinforcement learning using Stable Baselines 3 and the OpenAI Gym interface.
We describe how these features enable researchers to conduct experiments with minimal time and effort, thus maximizing research output.
arXiv Detail & Related papers (2024-01-25T19:49:02Z)
- RLLTE: Long-Term Evolution Project of Reinforcement Learning [48.181733263496746]
We present RLLTE: a long-term evolution, extremely modular, and open-source framework for reinforcement learning research and application.
Beyond delivering top-notch algorithm implementations, RLLTE also serves as a toolkit for developing algorithms.
RLLTE is expected to set standards for RL engineering practice and provide a strong stimulus for both industry and academia.
arXiv Detail & Related papers (2023-09-28T12:30:37Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
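Maximizing "both entropy and return", as the blurb puts it, refers to the standard maximum-entropy objective that Soft Actor-Critic optimizes. The formulation below is the textbook SAC objective, not an equation taken from the MARLIN paper itself:

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
         \big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \big]
```

where the temperature $\alpha$ trades off reward against the policy entropy $\mathcal{H}$; a higher $\alpha$ encourages exploration.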
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- A Memetic Algorithm with Reinforcement Learning for Sociotechnical Production Scheduling [0.0]
This article presents a memetic algorithm that applies deep reinforcement learning (DRL) to flexible job shop scheduling problems (DRC-FJSSP).
From research projects in industry, we recognize the need to consider flexible machines, flexible human workers, worker capabilities, setup and processing operations, material arrival times, complex job paths with parallel tasks for bill-of-material manufacturing, sequence-dependent setup times, and (partially) automated tasks in human-machine collaboration.
arXiv Detail & Related papers (2022-12-21T11:24:32Z)
- FORLORN: A Framework for Comparing Offline Methods and Reinforcement Learning for Optimization of RAN Parameters [0.0]
This paper introduces a new framework for benchmarking the performance of an RL agent in network environments simulated with ns-3.
Within this framework, we demonstrate that an RL agent without domain-specific knowledge can learn how to efficiently adjust Radio Access Network (RAN) parameters to match offline optimization in static scenarios.
arXiv Detail & Related papers (2022-09-08T12:58:09Z)
- Hyperparameter Tuning for Deep Reinforcement Learning Applications [0.3553493344868413]
We propose a distributed variable-length genetic algorithm framework to tune hyperparameters for various RL applications.
Our results show that, with more generations, the search finds solutions that require fewer training episodes and are computationally cheaper, while being more robust for deployment.
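Evolving hyperparameters across generations can be sketched as follows. This is a generic fixed-length genetic algorithm over a toy objective, not the paper's distributed variable-length implementation; the hyperparameters and the objective function are illustrative stand-ins:

```python
import random

def fitness(params):
    # Stand-in objective: a real run would train an RL agent with these
    # hyperparameters and return its evaluation score. The toy optimum
    # is at learning rate 0.01, discount factor 0.99.
    lr, gamma = params
    return -((lr - 0.01) ** 2 + (gamma - 0.99) ** 2)

def mutate(params, scale=0.05):
    # Gaussian perturbation of every hyperparameter.
    return tuple(p + random.gauss(0.0, scale) for p in params)

def genetic_search(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    # Each individual is (learning rate, discount factor).
    pop = [(random.uniform(0.0, 0.1), random.uniform(0.8, 1.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # elitist selection: keep top half
        offspring = [mutate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        pop = survivors + offspring
    return max(pop, key=fitness)
```

Because the fitness evaluations in each generation are independent, they are the natural unit to distribute across workers, which is what makes the distributed setup described above attractive.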
arXiv Detail & Related papers (2022-01-26T20:43:13Z)
- Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives [92.0321404272942]
Reinforcement learning can be used to build general-purpose robotic systems.
However, training RL agents to solve robotics tasks remains challenging.
In this work, we manually specify a library of robot action primitives (RAPS), parameterized with arguments that are learned by an RL policy.
We find that our simple change to the action interface substantially improves both the learning efficiency and task performance.
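The action-interface change described above — a policy choosing a hand-written primitive plus the continuous arguments to run it with — can be sketched generically. The primitives and wrapper below are illustrative stand-ins, not the RAPS library:

```python
# Hand-specified primitive library; the policy outputs which primitive
# to run and the continuous arguments to run it with.

def move_delta(state, dx, dy):
    # Primitive 0: translate a 2-D end-effector position.
    x, y = state
    return (x + dx, y + dy)

def grasp(state, width):
    # Primitive 1: toy no-op; a real primitive would close the gripper
    # to the requested width via a low-level controller.
    return state

PRIMITIVES = [move_delta, grasp]

def apply_primitive_action(state, action):
    """action = (primitive index, argument tuple): the hybrid
    discrete/continuous output an RL policy would produce."""
    index, args = action
    return PRIMITIVES[index](state, *args)
```

The policy thus explores in the small space of primitive choices and their arguments rather than in raw joint or torque commands, which is where the reported efficiency gain comes from.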
arXiv Detail & Related papers (2021-10-28T17:59:30Z)
- SurRoL: An Open-source Reinforcement Learning Centered and dVRK Compatible Platform for Surgical Robot Learning [78.76052604441519]
SurRoL is an RL-centered simulation platform for surgical robot learning compatible with the da Vinci Research Kit (dVRK).
Ten learning-based surgical tasks are built into the platform, which are common in real autonomous surgical execution.
We evaluate SurRoL using RL algorithms in simulation, provide in-depth analysis, deploy the trained policies on the real dVRK, and show that SurRoL achieves better transferability in the real world.
arXiv Detail & Related papers (2021-08-30T07:43:47Z)
- Scenic4RL: Programmatic Modeling and Generation of Reinforcement Learning Environments [89.04823188871906]
Generation of diverse realistic scenarios is challenging for real-time strategy (RTS) environments.
Most existing simulators rely on randomly generated environments.
We show the benefits of adopting an existing formal scenario specification language, SCENIC, to assist researchers.
arXiv Detail & Related papers (2021-06-18T21:49:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.