Improved Learning of Robot Manipulation Tasks via Tactile Intrinsic
Motivation
- URL: http://arxiv.org/abs/2102.11051v1
- Date: Mon, 22 Feb 2021 14:21:30 GMT
- Title: Improved Learning of Robot Manipulation Tasks via Tactile Intrinsic
Motivation
- Authors: Nikola Vulin, Sammy Christen, Stefan Stevsic and Otmar Hilliges
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we address the challenge of exploration in deep reinforcement
learning for robotic manipulation tasks. In sparse goal settings, an agent does
not receive any positive feedback until randomly achieving the goal, which
becomes infeasible for longer control sequences. Inspired by touch-based
exploration observed in children, we formulate an intrinsic reward based on the
sum of forces between a robot's force sensors and manipulation objects that
encourages physical interaction. Furthermore, we introduce contact-prioritized
experience replay, a sampling scheme that prioritizes contact rich episodes and
transitions. We show that our solution accelerates exploration and
outperforms state-of-the-art methods on three fundamental robot manipulation
benchmarks.
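The two ingredients described in the abstract, a contact-force intrinsic reward and contact-prioritized experience replay, can be sketched as follows. This is an illustrative reconstruction from the abstract only, not the authors' implementation; the sensor interface, the scaling factor `lam`, the episode format, and the prioritization probability `p_contact` are all assumptions.

```python
import random

def tactile_intrinsic_reward(sensor_forces, lam=0.1):
    """Intrinsic reward: scaled sum of contact-force magnitudes
    between the robot's force sensors and the manipulated object."""
    return lam * sum(abs(f) for f in sensor_forces)

def total_reward(extrinsic, sensor_forces, lam=0.1):
    # Sparse extrinsic reward (e.g. 0/1 on reaching the goal)
    # augmented with the tactile bonus to encourage interaction.
    return extrinsic + tactile_intrinsic_reward(sensor_forces, lam)

def sample_episode(replay_buffer, p_contact=0.8):
    """Contact-prioritized replay (sketch): with probability p_contact,
    sample among episodes that registered object contact; otherwise
    fall back to uniform sampling over the whole buffer."""
    contact = [ep for ep in replay_buffer if ep["num_contacts"] > 0]
    if contact and random.random() < p_contact:
        return random.choice(contact)
    return random.choice(replay_buffer)
```

In a sparse-goal task the extrinsic term is zero almost everywhere, so the tactile term supplies the only learning signal until the agent first touches the object.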
Related papers
- MimicTouch: Learning Human's Control Strategy with Multi-Modal Tactile
Feedback [2.8582031759986775]
"MimicTouch" is a novel framework that mimics human's tactile-guided control strategy.
We employ online residual reinforcement learning on the physical robot.
This work will pave the way for a broader spectrum of tactile-guided robotic applications.
arXiv Detail & Related papers (2023-10-25T18:34:06Z)
- Nonprehensile Planar Manipulation through Reinforcement Learning with Multimodal Categorical Exploration [8.343657309038285]
Reinforcement Learning is a powerful framework for developing such robot controllers.
We propose a multimodal exploration approach through categorical distributions, which enables us to train planar pushing RL policies.
We show that the learned policies are robust to external disturbances and observation noise, and scale to tasks with multiple pushers.
arXiv Detail & Related papers (2023-08-04T16:55:00Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
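A reward of this form, the negative distance to a goal image in a learned embedding space, can be sketched as below. The embedding function itself is a stand-in (the paper learns it from human videos with a time-contrastive objective); only the distance-based reward shape is illustrated.

```python
import math

def embedding_reward(phi_obs, phi_goal):
    """Reward = negative Euclidean distance between the embedded
    current observation and the embedded goal image."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(phi_obs, phi_goal)))
    return -dist
```

Because the embedding is task-agnostic, the same reward function can score progress toward any goal image supplied at test time.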
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Active Exploration for Robotic Manipulation [40.39182660794481]
This paper proposes a model-based active exploration approach that enables efficient learning in sparse-reward robotic manipulation tasks.
We evaluate our proposed algorithm in simulation and on a real robot, trained from scratch with our method.
arXiv Detail & Related papers (2022-10-23T18:07:51Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- Touch-based Curiosity for Sparse-Reward Tasks [15.766198618516137]
We use surprise from mismatches in touch feedback to guide exploration in hard sparse-reward reinforcement learning tasks.
Our approach, Touch-based Curiosity (ToC), learns what interactions with visible objects are supposed to "feel" like.
We test our approach on a range of touch-intensive robot arm tasks.
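The curiosity signal described here can be sketched as a prediction-error bonus: a model predicts the touch reading from visual input, and the mismatch (surprise) becomes reward. The predictor's output format and the mean-squared-error metric below are illustrative assumptions, not the paper's exact formulation.

```python
def touch_curiosity_reward(predicted_touch, actual_touch):
    """Curiosity bonus = mean squared mismatch between the touch
    feedback the agent expected and what it actually felt."""
    n = len(actual_touch)
    return sum((p - a) ** 2 for p, a in zip(predicted_touch, actual_touch)) / n
```

The bonus is large for poorly understood contacts and decays toward zero as the prediction model improves, steering exploration toward novel interactions.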
arXiv Detail & Related papers (2021-04-01T12:49:29Z)
- Reinforcement Learning Experiments and Benchmark for Solving Robotic Reaching Tasks [0.0]
Reinforcement learning has been successfully applied to solving the reaching task with robotic arms.
It is shown that augmenting the reward signal with the Hindsight Experience Replay exploration technique increases the average return of off-policy agents.
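Hindsight Experience Replay makes sparse rewards informative by relabeling failed episodes with the goals they actually achieved. A minimal sketch of the "final" relabeling strategy follows; the transition dictionary format and the 0/-1 sparse reward convention are assumptions for illustration.

```python
def her_relabel(episode):
    """HER 'final' strategy: replay each transition as if the goal
    had been the state actually achieved at the end of the episode."""
    new_goal = episode[-1]["achieved_goal"]
    relabeled = []
    for t in episode:
        # Sparse reward: 0 on reaching the (relabeled) goal, -1 otherwise.
        r = 0.0 if t["achieved_goal"] == new_goal else -1.0
        relabeled.append({**t, "goal": new_goal, "reward": r})
    return relabeled
```

Even an episode that never reached the original goal now ends in a "success" under the substituted goal, giving off-policy agents a non-trivial learning signal.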
arXiv Detail & Related papers (2020-11-11T14:00:49Z)
- ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating great potential for transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.