Deep-Reinforcement-Learning-based Path Planning for Industrial Robots
using Distance Sensors as Observation
- URL: http://arxiv.org/abs/2301.05980v1
- Date: Sat, 14 Jan 2023 21:42:17 GMT
- Title: Deep-Reinforcement-Learning-based Path Planning for Industrial Robots
using Distance Sensors as Observation
- Authors: Teham Bhuiyan, Linh Kästner, Yifan Hu, Benno Kutschank and Jens Lambrecht
- Abstract summary: This paper proposes a Deep-Reinforcement-Learning-based motion planner for robotic manipulators.
We evaluate our model against state-of-the-art sampling-based planners in several experiments.
- Score: 7.656633127636852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Industrial robots are widely used in various manufacturing environments due
to their efficiency in doing repetitive tasks such as assembly or welding. A
common problem for these applications is to reach a destination without
colliding with obstacles or other robot arms. Commonly used sampling-based path
planning approaches such as RRT require long computation times, especially in
complex environments. Furthermore, the environment in which they are employed
needs to be known beforehand. Applying these approaches to new environments
therefore requires a tedious, time- and cost-intensive engineering effort to
tune hyperparameters. Deep Reinforcement Learning, on the other hand, has shown
remarkable results in dealing with unknown environments, generalizing to new
problem instances, and solving motion planning problems efficiently. On that
account, this paper proposes a
Deep-Reinforcement-Learning-based motion planner for robotic manipulators. We
evaluated our model against state-of-the-art sampling-based planners in several
experiments. The results show the superiority of our planner in terms of path
length and execution time.
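As an illustration only (not the authors' implementation), the sketch below shows one way such a planner can be framed as a reinforcement-learning problem: a toy 2-DoF planar reach task whose observation combines joint angles, the goal position, and simulated distance-sensor readings toward a single circular obstacle, trained with PPO from stable-baselines3 (assuming stable-baselines3 >= 2.0 and gymnasium are available). The kinematics, sensor model, and reward shaping are assumptions made for the example.

```python
# Minimal, hypothetical sketch -- not the authors' code. A 2-DoF planar arm must
# reach a random goal while avoiding one circular obstacle; the observation
# contains joint angles, the goal position, and simulated distance-sensor rays.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class PlanarReachEnv(gym.Env):
    """Toy 2-link arm; actions are joint-velocity commands in [-1, 1]."""

    def __init__(self):
        self.link = np.array([0.5, 0.4])       # link lengths [m] (assumed)
        self.obstacle = np.array([0.3, 0.6])   # obstacle centre (assumed)
        self.obstacle_r = 0.15                 # obstacle radius [m]
        self.n_rays = 8                        # number of "distance sensors"
        # observation = 2 joint angles + 2-D goal + n_rays ray distances
        self.observation_space = spaces.Box(-np.inf, np.inf,
                                            (4 + self.n_rays,), np.float32)
        self.action_space = spaces.Box(-1.0, 1.0, (2,), np.float32)

    def _ee(self, q):
        """Forward kinematics: end-effector position for joint angles q."""
        x = self.link[0] * np.cos(q[0]) + self.link[1] * np.cos(q[0] + q[1])
        y = self.link[0] * np.sin(q[0]) + self.link[1] * np.sin(q[0] + q[1])
        return np.array([x, y])

    def _ray_distances(self, p):
        """Distance from p to the obstacle along n_rays directions, capped at 2 m."""
        dists = np.full(self.n_rays, 2.0)
        for i, a in enumerate(np.linspace(0.0, 2 * np.pi, self.n_rays, endpoint=False)):
            d = np.array([np.cos(a), np.sin(a)])
            oc = self.obstacle - p
            proj = oc @ d                      # distance along the ray to closest approach
            gap = self.obstacle_r ** 2 - (oc - proj * d) @ (oc - proj * d)
            if proj > 0 and gap > 0:           # ray hits the circle in front of p
                dists[i] = min(dists[i], proj - np.sqrt(gap))
        return dists

    def _obs(self):
        p = self._ee(self.q)
        return np.concatenate([self.q, self.goal,
                               self._ray_distances(p)]).astype(np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.q = self.np_random.uniform(-np.pi, np.pi, 2)
        self.goal = self.np_random.uniform(-0.8, 0.8, 2)
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        self.q = self.q + 0.05 * np.clip(action, -1.0, 1.0)   # integrate joint velocities
        self.steps += 1
        p = self._ee(self.q)
        dist_goal = float(np.linalg.norm(p - self.goal))
        collided = bool(np.linalg.norm(p - self.obstacle) < self.obstacle_r)
        reward = -dist_goal - (10.0 if collided else 0.0)     # dense shaping + collision penalty
        terminated = dist_goal < 0.05 or collided
        truncated = self.steps >= 200
        return self._obs(), reward, terminated, truncated, {}


if __name__ == "__main__":
    from stable_baselines3 import PPO          # assumes stable-baselines3 is installed
    model = PPO("MlpPolicy", PlanarReachEnv(), verbose=0)
    model.learn(total_timesteps=20_000)        # short demonstration run only
```

In the paper's setting the distance readings would come from sensors on an industrial manipulator, and the kinematics and reward would match the real robot; the sketch only conveys the structure of the observation and action spaces.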
Related papers
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task And Motion Planning (TAMP) is the problem of combining high-level task planning with low-level motion planning to solve an automated planning problem.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates and extreme robustness even under perturbations, and they exhibit emergent recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open-vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications [11.812602599752294]
We consider robots with unknown dynamics operating in environments with unknown structure.
Our goal is to synthesize a control policy that maximizes the probability of satisfying an automaton-encoded task.
We propose a novel DRL algorithm that learns control policies at a notably faster rate than similar methods.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Scalable Multi-robot Motion Planning for Congested Environments With Topological Guidance [2.846144602096543]
Multi-robot motion planning (MRMP) is the problem of finding collision-free paths for a set of robots in a continuous state space.
We extend an existing single-robot motion planning method to leverage the improved efficiency provided by topological guidance.
We demonstrate our method's ability to efficiently plan paths in complex environments with many narrow passages, scaling to robot teams up to 25 times larger than those handled by existing methods.
arXiv Detail & Related papers (2022-10-13T16:26:01Z)
- Overcoming Exploration: Deep Reinforcement Learning in Complex Environments from Temporal Logic Specifications [2.8904578737516764]
We present a Deep Reinforcement Learning (DRL) algorithm for a task-guided robot with unknown continuous-time dynamics deployed in a large-scale complex environment.
Our framework is shown to significantly improve both the performance (effectiveness and efficiency) and the exploration of robots tasked with complex missions in large-scale complex environments.
arXiv Detail & Related papers (2022-01-28T16:39:08Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to achieve the design accurately without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- Predicting Sample Collision with Neural Networks [5.713670854553366]
We present a framework to address the cost of expensive primitive operations, such as collision checking, in sampling-based motion planning.
We evaluate our framework on multiple planning problems with a variety of robots in 2D and 3D workspaces; a minimal sketch of this sample-filtering idea appears after this list.
arXiv Detail & Related papers (2020-06-30T14:56:14Z)
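As referenced in the Predicting Sample Collision with Neural Networks entry above, the following hypothetical sketch illustrates the general idea of learning a collision predictor so that expensive exact checks can be skipped for most samples. It uses scikit-learn's MLPClassifier as a stand-in; the toy 2-D obstacle, sampling ranges, network size, and filtering threshold are assumptions made for the example, not details from the paper.

```python
# Hypothetical illustration (not the paper's code) of a learned collision
# predictor: a small MLP is trained to classify whether a sampled 2-D
# configuration collides with a known obstacle, then used to pre-filter samples
# before an exact geometric check. Obstacle, ranges, and threshold are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier

OBSTACLE_C, OBSTACLE_R = np.array([0.4, 0.4]), 0.2   # toy circular obstacle


def in_collision(p):
    """Exact (here: trivial) collision check used to label training data."""
    return np.linalg.norm(p - OBSTACLE_C) < OBSTACLE_R


rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(5000, 2))      # random configurations
labels = np.array([in_collision(p) for p in samples])

clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
clf.fit(samples, labels)                             # learn the collision boundary

# Inside a sampler: keep only configurations the network deems likely free,
# and run the expensive exact check on those survivors alone.
candidates = rng.uniform(0.0, 1.0, size=(100, 2))
p_collision = clf.predict_proba(candidates)[:, 1]    # column 1 = P(collision)
likely_free = candidates[p_collision < 0.1]
verified_free = [p for p in likely_free if not in_collision(p)]
print(f"{len(likely_free)} passed the network, {len(verified_free)} confirmed free")
```

In a real planner the exact geometric check is typically far more expensive than a network forward pass, which is where a learned filter of this kind can save time.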