Curriculum Is More Influential Than Haptic Information During Reinforcement Learning of Object Manipulation Against Gravity
- URL: http://arxiv.org/abs/2407.09986v1
- Date: Sat, 13 Jul 2024 19:23:11 GMT
- Title: Curriculum Is More Influential Than Haptic Information During Reinforcement Learning of Object Manipulation Against Gravity
- Authors: Pegah Ojaghi, Romina Mir, Ali Marjaninejad, Andrew Erwin, Michael Wehner, Francisco J Valero-Cuevas
- Abstract summary: Learning to lift and rotate objects with the fingertips is necessary for autonomous in-hand dexterous manipulation.
We investigate the role of curriculum learning and haptic feedback in enabling the learning of dexterous manipulation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to lift and rotate objects with the fingertips is necessary for autonomous in-hand dexterous manipulation. In our study, we explore the impact of various factors on successful learning strategies for this task. Specifically, we investigate the role of curriculum learning and haptic feedback in enabling the learning of dexterous manipulation. Using model-free Reinforcement Learning, we compare different curricula and two haptic information modalities (No-tactile vs. 3D-force sensing) for lifting and rotating a ball against gravity with a three-fingered simulated robotic hand with no visual input. Our best results were obtained with a novel curriculum-based learning rate scheduler, which adjusts the linearly-decaying learning rate whenever the reward is changed, accelerating convergence to higher rewards. Our findings demonstrate that the choice of curriculum greatly biases the acquisition of different features of dexterous manipulation. Surprisingly, successful learning can be achieved even in the absence of tactile feedback, challenging conventional assumptions about the necessity of haptic information for dexterous manipulation tasks. We demonstrate the generalizability of our results to balls of different weights and sizes, underscoring the robustness of our learning approach. This work, therefore, emphasizes the importance of the choice of curriculum and challenges long-held notions about the need for tactile information to autonomously learn in-hand dexterous manipulation.
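The curriculum-based learning rate scheduler is described only at a high level in the abstract. Below is a minimal sketch of one plausible reading, in which the linear decay is re-anchored each time the curriculum swaps in a new reward so the new objective is trained at a comparatively large learning rate. The class name `CurriculumLinearLR`, its parameters, and the exact re-anchoring rule are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a curriculum-based learning rate scheduler: the learning
# rate decays linearly within each curriculum stage and is re-anchored
# (boosted back to base_lr) whenever the reward function changes.
# Names and the re-anchoring rule are assumptions, not the paper's code.

class CurriculumLinearLR:
    def __init__(self, base_lr: float, total_steps: int, min_lr: float = 1e-5):
        self.base_lr = base_lr          # learning rate at the start of a stage
        self.total_steps = total_steps  # total training budget in steps
        self.min_lr = min_lr            # floor for the decayed learning rate
        self.stage_start = 0            # step at which the current reward stage began

    def on_reward_change(self, step: int) -> None:
        """Re-anchor the linear decay when the curriculum changes the reward."""
        self.stage_start = step

    def lr(self, step: int) -> float:
        """Decay linearly from base_lr to min_lr over the remaining budget."""
        remaining = max(self.total_steps - self.stage_start, 1)
        progress = min((step - self.stage_start) / remaining, 1.0)
        return self.min_lr + (self.base_lr - self.min_lr) * (1.0 - progress)

# Hypothetical usage inside a model-free RL (e.g., PPO-style) training loop:
#   scheduler = CurriculumLinearLR(base_lr=3e-4, total_steps=1_000_000)
#   if curriculum.advanced(step):           # assumed curriculum interface
#       scheduler.on_reward_change(step)
#   optimizer.param_groups[0]["lr"] = scheduler.lr(step)
```

Re-anchoring the decay, rather than continuing it unchanged, is one design choice consistent with the abstract's claim that the adjustment accelerates convergence to higher rewards after each curriculum change.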
Related papers
- Lessons from Learning to Spin "Pens" [51.9182692233916]
In this work, we push the boundaries of learning-based in-hand manipulation systems by demonstrating the capability to spin pen-like objects.
We first use reinforcement learning to train an oracle policy with privileged information and generate a high-fidelity trajectory dataset in simulation.
We then fine-tune the sensorimotor policy using these real-world trajectories to adapt it to the real world dynamics.
arXiv Detail & Related papers (2024-07-26T17:56:01Z)
- Exploring CausalWorld: Enhancing robotic manipulation via knowledge transfer and curriculum learning [6.683222869973898]
This study explores a learning-based tri-finger robotic arm manipulation task, which requires complex movements and coordination among the fingers.
By employing reinforcement learning, we train an agent to acquire the necessary skills for proficient manipulation.
Two knowledge transfer strategies, fine-tuning and curriculum learning, were utilized within the soft actor-critic architecture.
arXiv Detail & Related papers (2024-03-25T23:19:19Z)
- Learning to Transfer In-Hand Manipulations Using a Greedy Shape Curriculum [79.6027464700869]
We show that natural and robust in-hand manipulation of simple objects in a dynamic simulation can be learned from a high quality motion capture example.
We propose a simple greedy curriculum search algorithm that can successfully apply to a range of objects such as a teapot, bunny, bottle, train, and elephant.
arXiv Detail & Related papers (2023-03-14T17:08:19Z)
- Teacher-student curriculum learning for reinforcement learning [1.7259824817932292]
Reinforcement learning (RL) is a popular paradigm for sequential decision-making problems.
The sample inefficiency of deep reinforcement learning methods is a significant obstacle when applying RL to real-world problems.
We propose a teacher-student curriculum learning setting where we simultaneously train a teacher that selects tasks for the student while the student learns how to solve the selected task.
arXiv Detail & Related papers (2022-10-31T14:45:39Z)
- Self-Supervised Learning of Multi-Object Keypoints for Robotic Manipulation [8.939008609565368]
In this paper, we demonstrate the efficacy of learning image keypoints via the Dense Correspondence pretext task for downstream policy learning.
We evaluate our approach on diverse robot manipulation tasks, compare it to other visual representation learning approaches, and demonstrate its flexibility and effectiveness for sample-efficient policy learning.
arXiv Detail & Related papers (2022-05-17T13:15:07Z)
- Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z)
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z)
- Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z)
- Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-09-03T04:00:40Z)
- The utility of tactile force to autonomous learning of in-hand manipulation is task-dependent [0.0]
This paper evaluates the role of tactile information on autonomous learning of manipulation with a simulated 3-finger tendon-driven hand.
We compare the ability of the same learning algorithm to learn two manipulation tasks with three levels of tactile sensing.
We conclude that, in general, sensory input is useful to learning only when it is relevant to the task.
arXiv Detail & Related papers (2020-02-05T06:24:40Z)
- Inter- and Intra-domain Knowledge Transfer for Related Tasks in Deep Character Recognition [2.320417845168326]
Pre-training a deep neural network on the ImageNet dataset is a common practice for training deep learning models.
The technique of pre-training on one task and then retraining on a new one is called transfer learning.
In this paper we analyse the effectiveness of using deep transfer learning for character recognition tasks.
arXiv Detail & Related papers (2020-01-02T14:18:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.