VITaL Pretraining: Visuo-Tactile Pretraining for Tactile and Non-Tactile Manipulation Policies
- URL: http://arxiv.org/abs/2403.11898v2
- Date: Thu, 26 Sep 2024 18:47:04 GMT
- Title: VITaL Pretraining: Visuo-Tactile Pretraining for Tactile and Non-Tactile Manipulation Policies
- Authors: Abraham George, Selam Gano, Pranav Katragadda, Amir Barati Farimani
- Abstract summary: We investigate how tactile information can be incorporated into imitation learning platforms to improve performance on manipulation tasks.
We show that visuo-tactile pretraining improves imitation learning performance not only for tactile agents but also for non-tactile agents.
- Score: 8.187196813233362
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Tactile information is a critical tool for dexterous manipulation. As humans, we rely heavily on tactile information to understand objects in our environments and how to interact with them. We use touch not only to perform manipulation tasks but also to learn how to perform these tasks. Therefore, to create robotic agents that can learn to complete manipulation tasks at a human or super-human level of performance, we need to properly incorporate tactile information into both skill execution and skill learning. In this paper, we investigate how we can incorporate tactile information into imitation learning platforms to improve performance on manipulation tasks. We show that incorporating visuo-tactile pretraining improves imitation learning performance, not only for tactile agents (policies that use tactile information at inference), but also for non-tactile agents (policies that do not use tactile information at inference). For these non-tactile agents, pretraining with tactile information significantly improved performance (for example, improving the accuracy on USB plugging from 20% to 85%), reaching a level on par with visuo-tactile agents, and even surpassing them in some cases. For demonstration videos and access to our codebase, see the project website: https://sites.google.com/andrew.cmu.edu/visuo-tactile-pretraining
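As a rough illustration of the idea described in the abstract, the sketch below shows one common way such visuo-tactile pretraining can be set up: a visual encoder and a tactile encoder are aligned with a contrastive (InfoNCE-style) loss on time-synchronized image/tactile pairs, and the visual encoder is then reused to initialize the downstream imitation policy, even for agents that receive no tactile input at inference. This is an assumption-laden PyTorch sketch, not the authors' released code; the encoder architecture, tactile input format, batch size, and temperature are all illustrative placeholders.
```python
# Minimal sketch of contrastive visuo-tactile pretraining (assumed setup,
# not the paper's exact method). Matching (image, tactile) pairs from the
# same timestep are treated as positives; all other pairs in the batch as negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvEncoder(nn.Module):
    """Small CNN mapping an image-like observation to an embedding vector."""

    def __init__(self, in_channels: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of aligned embedding pairs."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


# Pretraining loop sketch: each batch holds time-aligned camera images and
# tactile readings (e.g. a GelSight-style tactile image); shapes are assumed.
vision_enc = ConvEncoder(in_channels=3)
tactile_enc = ConvEncoder(in_channels=3)
optim = torch.optim.Adam(
    list(vision_enc.parameters()) + list(tactile_enc.parameters()), lr=1e-4
)

for _ in range(10):  # replace with a real dataloader over demonstration frames
    images = torch.randn(32, 3, 64, 64)    # placeholder camera observations
    tactile = torch.randn(32, 3, 64, 64)   # placeholder tactile observations
    loss = info_nce(vision_enc(images), tactile_enc(tactile))
    optim.zero_grad()
    loss.backward()
    optim.step()

# After pretraining, a non-tactile agent can reuse `vision_enc` as its
# observation backbone for imitation learning, even though no tactile
# signal is available at inference time.
```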
Related papers
- Learning Visuotactile Skills with Two Multifingered Hands [80.99370364907278]
We explore learning from human demonstrations using a bimanual system with multifingered hands and visuotactile data.
Our results mark a promising step forward in bimanual multifingered manipulation from visuotactile data.
arXiv Detail & Related papers (2024-04-25T17:59:41Z) - Tactile-based Object Retrieval From Granular Media [17.340244278653785]
We introduce GEOTACT, a robotic manipulation method capable of retrieving objects buried in granular media.
We show that our problem formulation leads to the natural emergence of learned pushing behaviors that the manipulator uses to reduce uncertainty.
We also introduce a training curriculum that enables learning these behaviors in simulation, followed by zero-shot transfer to real hardware.
arXiv Detail & Related papers (2024-02-07T02:50:56Z) - DexTouch: Learning to Seek and Manipulate Objects with Tactile Dexterity [12.508332341279177]
We introduce a multi-finger robot system designed to search for and manipulate objects using the sense of touch.
To achieve this, binary tactile sensors are implemented on one side of the robot hand to minimize the Sim2Real gap.
We demonstrate that object search and manipulation using tactile sensors is possible even in an environment without vision information.
arXiv Detail & Related papers (2024-01-23T05:37:32Z) - Any-point Trajectory Modeling for Policy Learning [64.23861308947852]
We introduce Any-point Trajectory Modeling (ATM) to predict future trajectories of arbitrary points within a video frame.
ATM outperforms strong video pre-training baselines by 80% on average.
We show effective transfer learning of manipulation skills from human videos and videos from a different robot morphology.
arXiv Detail & Related papers (2023-12-28T23:34:43Z) - See to Touch: Learning Tactile Dexterity through Visual Incentives [20.586023376454115]
We present Tactile Adaptation from Visual Incentives (TAVI), a new framework that enhances tactile-based dexterity.
On six challenging tasks, TAVI achieves a success rate of 73% using our four-fingered Allegro robot hand.
arXiv Detail & Related papers (2023-09-21T17:58:13Z) - Dexterity from Touch: Self-Supervised Pre-Training of Tactile Representations with Robotic Play [15.780086627089885]
T-Dex is a new approach for tactile-based dexterity that operates in two phases.
In the first phase, we collect 2.5 hours of play data, which is used to train self-supervised tactile encoders.
In the second phase, given a handful of demonstrations for a dexterous task, we learn non-parametric policies that combine the tactile observations with visual ones.
arXiv Detail & Related papers (2023-03-21T17:59:20Z) - Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for a variety of robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - Visual-Tactile Multimodality for Following Deformable Linear Objects Using Reinforcement Learning [15.758583731036007]
We study the problem of using vision and tactile inputs together to complete the task of following deformable linear objects.
We create a reinforcement learning agent using different sensing modalities and investigate how its behaviour can be improved.
Our experiments show that the use of both vision and tactile inputs, together with proprioception, allows the agent to complete the task in up to 92% of cases.
arXiv Detail & Related papers (2022-03-31T21:59:08Z) - Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
arXiv Detail & Related papers (2021-04-15T20:10:11Z) - Learning Dexterous Grasping with Object-Centric Visual Affordances [86.49357517864937]
Dexterous robotic hands are appealing for their agility and human-like morphology.
We introduce an approach for learning dexterous grasping.
Our key idea is to embed an object-centric visual affordance model within a deep reinforcement learning loop.
arXiv Detail & Related papers (2020-09-03T04:00:40Z) - Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)