Robotic Arm Control and Task Training through Deep Reinforcement
Learning
- URL: http://arxiv.org/abs/2005.02632v1
- Date: Wed, 6 May 2020 07:34:28 GMT
- Title: Robotic Arm Control and Task Training through Deep Reinforcement
Learning
- Authors: Andrea Franceschetti, Elisa Tosello, Nicola Castaman and Stefano
Ghidoni
- Abstract summary: We show that Trust Region Policy Optimization and Deep Q-Network with Normalized Advantage Functions perform better than Deep Deterministic Policy Gradient and Vanilla Policy Gradient.
Real-world experiments show that our policies, if correctly trained in simulation, can be transferred and executed in a real environment with almost no changes.
- Score: 6.249276977046449
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a detailed and extensive comparison of Trust Region
Policy Optimization and Deep Q-Network with Normalized Advantage Functions against
other state-of-the-art algorithms, namely Deep Deterministic Policy Gradient and
Vanilla Policy Gradient. The comparisons demonstrate that the former achieve
better performance than the latter when asking robotic arms to accomplish
manipulation tasks such as reaching a random target pose and pick-and-placing an
object. Both simulated and real-world experiments are provided. Simulation lets
us show the procedures we adopted to precisely estimate the algorithms'
hyper-parameters and to correctly design good policies. Real-world experiments
show that our policies, if correctly trained in simulation, can be transferred
and executed in a real environment with almost no changes.
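As background on why Normalized Advantage Functions suit continuous arm control: NAF restricts the advantage to a concave quadratic in the action, so the greedy action is available in closed form. A minimal NumPy sketch of that idea (illustrative only; the function names and toy dimensions are ours, not the paper's code):

```python
import numpy as np

# Normalized Advantage Function idea: Q(s, a) = V(s) + A(s, a), where
# A(s, a) = -0.5 * (a - mu(s))^T P(s) (a - mu(s)) and P(s) is positive
# definite. Because A is a concave quadratic in a, argmax_a Q(s, a) = mu(s):
# the greedy continuous action needs no inner optimization loop.

def naf_q(v_s, mu_s, L_s, a):
    """v_s: state value; mu_s: greedy action; L_s: lower-triangular factor
    with positive diagonal, so that P = L L^T is positive definite."""
    P = L_s @ L_s.T
    diff = a - mu_s
    return v_s - 0.5 * diff @ P @ diff

# Toy check in a 2-D action space: Q peaks exactly at a = mu(s).
mu = np.array([0.3, -0.1])
L = np.array([[1.0, 0.0], [0.2, 0.5]])
assert naf_q(1.0, mu, L, mu) > naf_q(1.0, mu, L, mu + 0.05)
```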
Related papers
- Learning Optimal Deterministic Policies with Stochastic Policy Gradients [62.81324245896716]
Policy gradient (PG) methods are successful approaches to deal with continuous reinforcement learning (RL) problems.
In common practice, stochastic (hyper)policies are learned only to deploy their deterministic version.
We show how to tune the exploration level used for learning to optimize the trade-off between the sample complexity and the performance of the deployed deterministic policy.
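As a concrete illustration of that learn-stochastic, deploy-deterministic pattern, here is a minimal REINFORCE sketch with a Gaussian policy; the one-step toy task, parameter names, and learning rate are our assumptions, not the paper's setup:

```python
import numpy as np

# Learn-stochastic / deploy-deterministic sketch: a Gaussian policy
# pi(a|s) = N(theta^T s, sigma^2) is trained with REINFORCE on a toy
# one-step task; sigma is the tunable exploration level, and only the
# deterministic mean policy is deployed at the end.

rng = np.random.default_rng(0)
theta, sigma, lr = np.zeros(2), 0.5, 0.05

def reward(s, a):
    return -(a - s.sum()) ** 2          # toy objective, maximized at a = s[0] + s[1]

for _ in range(2000):                   # one-step episodes
    s = rng.uniform(-1.0, 1.0, size=2)
    a = theta @ s + sigma * rng.standard_normal()     # explore with noise
    g = (a - theta @ s) / sigma**2 * s                # grad of log pi wrt theta
    theta += lr * reward(s, a) * g                    # REINFORCE update

deterministic_policy = lambda s: theta @ s            # deployed version
```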
arXiv Detail & Related papers (2024-05-03T16:45:15Z)
- Robust Visual Sim-to-Real Transfer for Robotic Manipulation [79.66851068682779]
Learning visuomotor policies in simulation is much safer and cheaper than in the real world.
However, due to discrepancies between the simulated and real data, simulator-trained policies often fail when transferred to real robots.
One common approach to bridge the visual sim-to-real domain gap is domain randomization (DR).
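For reference, a schematic DR loop might look like the sketch below; the parameter names, ranges, and the make_env/policy interfaces are assumptions for illustration, not this paper's pipeline:

```python
import random

# Schematic domain-randomization (DR) loop. Randomizing appearance and
# physics every episode forces the policy to treat the real world as one
# more sample from the training distribution. All names and ranges here
# are illustrative assumptions.

def sample_randomized_params():
    return {
        "light_intensity": random.uniform(0.3, 1.5),   # visual randomization
        "camera_jitter_deg": random.uniform(-5.0, 5.0),
        "texture_id": random.randrange(100),
        "object_mass_kg": random.uniform(0.05, 0.5),   # dynamics randomization
        "friction_coeff": random.uniform(0.4, 1.2),
    }

def train_with_dr(policy, make_env, episodes=10_000):
    for _ in range(episodes):
        env = make_env(**sample_randomized_params())   # fresh randomized world
        policy.update(env.rollout(policy))             # any RL update rule
```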
arXiv Detail & Related papers (2023-07-28T05:47:24Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibility of sim-to-real transfer for dexterous manipulation across diverse hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
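To make the "imagine and predict" loop concrete, here is a generic sampling-based predictive-control sketch; the simulate and reward callables are assumed stand-ins, and this is not the paper's Riemannian formulation:

```python
import numpy as np

# Generic sampling-based predictive control: each control step "imagines"
# candidate action sequences in a model, scores the predicted outcomes, and
# executes only the first action of the best sequence before re-planning
# (closed loop). simulate(state, action) -> next_state and reward(state)
# are assumed stand-ins for a learned or analytic model.

rng = np.random.default_rng(0)

def plan(state, simulate, reward, horizon=10, n_candidates=256, act_dim=2):
    seqs = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, act_dim))
    scores = np.empty(n_candidates)
    for i, seq in enumerate(seqs):
        s, total = state, 0.0
        for a in seq:                    # roll the model forward
            s = simulate(s, a)
            total += reward(s)
        scores[i] = total
    return seqs[scores.argmax(), 0]      # best first action; re-plan next step
```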
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- Robotic Lever Manipulation using Hindsight Experience Replay and Shapley Additive Explanations [0.0]
This paper deals with robotic lever control using Explainable Deep Reinforcement Learning.
First, we train a policy by using the Deep Deterministic Policy Gradient algorithm and the Hindsight Experience Replay technique.
We then transfer the policy to the real-world environment, where it achieves performance comparable to the simulated environment for most episodes.
To explain the decisions of the policy, we use the SHAP method to create an explanation model based on the episodes performed in the real-world environment.
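For readers unfamiliar with Hindsight Experience Replay, the core trick is goal relabeling: failed episodes are stored again as if their achieved outcome had been the goal. A minimal sketch (the transition layout and sparse reward are our assumptions, not this paper's exact setup):

```python
# Core HER trick: replay a failed episode as if its achieved outcome had been
# the goal, so sparse-reward failures still produce informative transitions.
# The transition layout and the sparse reward below are illustrative
# assumptions.

def her_relabel(episode, reward_fn):
    """episode: list of dicts with keys state, action, next_state, goal."""
    achieved = episode[-1]["next_state"]       # 'final' relabeling strategy
    relabeled = []
    for t in episode:
        t2 = dict(t, goal=achieved)            # pretend this outcome was intended
        t2["reward"] = reward_fn(t2["next_state"], achieved)
        relabeled.append(t2)
    return relabeled

def sparse_reward(state, goal):
    return 0.0 if state == goal else -1.0      # success only on exact match
```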
arXiv Detail & Related papers (2021-10-07T09:24:34Z)
- Robust Value Iteration for Continuous Control Tasks [99.00362538261972]
When transferring a control policy from simulation to a physical system, the policy needs to be robust to variations in the dynamics to perform well.
We present Robust Fitted Value Iteration, which uses dynamic programming to compute the optimal value function on the compact state domain.
We show that robust value iteration is more robust than deep reinforcement learning algorithms and the non-robust version of the algorithm.
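A toy tabular sketch of the robust Bellman backup may help; the finite model set is our simplification for illustration, not the paper's continuous-state formulation:

```python
import numpy as np

# Toy tabular robust Bellman backup: instead of backing up through a single
# nominal transition model, take the minimum over a finite set of plausible
# models, so the value function hedges against sim-to-real model mismatch.
# V: (S,) values; R: (S, A) rewards; models: list of (S, A, S) transition tensors.

def robust_backup(V, R, models, gamma=0.95):
    worst_q = np.min([R + gamma * (P @ V) for P in models], axis=0)  # adversarial dynamics
    return worst_q.max(axis=1)                                       # greedy over actions

# Robust value iteration: iterate V <- robust_backup(V, R, models) to a fixed point.
```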
arXiv Detail & Related papers (2021-05-25T19:48:35Z)
- A User's Guide to Calibrating Robotics Simulators [54.85241102329546]
This paper proposes a set of benchmarks and a framework for the study of various algorithms aimed at transferring models and policies learnt in simulation to the real world.
We conduct experiments on a wide range of well-known simulated environments to characterize and offer insights into the performance of different algorithms.
Our analysis can be useful for practitioners working in this area and can help make informed choices about the behavior and main properties of sim-to-real algorithms.
arXiv Detail & Related papers (2020-11-17T22:24:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.