Dexterity from Touch: Self-Supervised Pre-Training of Tactile
Representations with Robotic Play
- URL: http://arxiv.org/abs/2303.12076v1
- Date: Tue, 21 Mar 2023 17:59:20 GMT
- Title: Dexterity from Touch: Self-Supervised Pre-Training of Tactile
Representations with Robotic Play
- Authors: Irmak Guzey, Ben Evans, Soumith Chintala, Lerrel Pinto
- Abstract summary: T-Dex is a new approach for tactile-based dexterity that operates in two phases.
In the first phase, we collect 2.5 hours of play data, which is used to train self-supervised tactile encoders.
In the second phase, given a handful of demonstrations for a dexterous task, we learn non-parametric policies that combine the tactile observations with visual ones.
- Score: 15.780086627089885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Teaching dexterity to multi-fingered robots has been a longstanding challenge
in robotics. Most prominent work in this area focuses on learning controllers
or policies that either operate on visual observations or state estimates
derived from vision. However, such methods perform poorly on fine-grained
manipulation tasks that require reasoning about contact forces or about objects
occluded by the hand itself. In this work, we present T-Dex, a new approach for
tactile-based dexterity that operates in two phases. In the first phase, we
collect 2.5 hours of play data, which is used to train self-supervised tactile
encoders. This is necessary to bring high-dimensional tactile readings to a
lower-dimensional embedding. In the second phase, given a handful of
demonstrations for a dexterous task, we learn non-parametric policies that
combine the tactile observations with visual ones. Across five challenging
dexterous tasks, we show that our tactile-based dexterity models outperform
purely vision-based and torque-based models by an average of 1.7X. Finally, we
provide a detailed analysis of factors critical to T-Dex, including the
importance of play data, architectures, and representation learning.
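
As a concrete (but unofficial) reading of the two-phase recipe above, the sketch below pre-trains a tactile encoder on unlabeled play data and then builds a non-parametric nearest-neighbor policy over a handful of demonstrations. The sensor dimensionality, network sizes, and the reconstruction-style self-supervised objective are assumptions for illustration; the abstract does not specify the paper's actual encoder architecture or objective.

```python
# Minimal sketch of the two-phase recipe described in the abstract.
# Assumptions (not taken from the paper): tactile readings arrive as a flat
# vector of taxel values, the self-supervised objective is a simple
# reconstruction loss, and the policy is 1-nearest-neighbor over concatenated
# visual + tactile embeddings of demonstration frames.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TactileEncoder(nn.Module):
    """Maps a high-dimensional tactile reading to a low-dimensional embedding."""

    def __init__(self, tactile_dim=720, embed_dim=64):
        # tactile_dim is an assumed flattened taxel count; substitute the real sensor size.
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(tactile_dim, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )
        # Decoder used only for the stand-in reconstruction objective below.
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, tactile_dim),
        )

    def forward(self, tactile):
        return self.encoder(tactile)


def pretrain_on_play(model, play_loader, epochs=10, lr=1e-3):
    """Phase 1: self-supervised pre-training on unlabeled tactile play data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for tactile in play_loader:            # batches of raw tactile vectors
            recon = model.decoder(model.encoder(tactile))
            loss = F.mse_loss(recon, tactile)  # placeholder SSL objective
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


class NearestNeighborPolicy:
    """Phase 2: non-parametric policy over a handful of demonstrations."""

    def __init__(self, demo_visual, demo_tactile, demo_actions):
        # demo_visual: (N, d_v) and demo_tactile: (N, d_t) embeddings of demo frames;
        # demo_actions: (N, action_dim) actions recorded at those frames.
        self.keys = torch.cat([demo_visual, demo_tactile], dim=-1)
        self.actions = demo_actions

    def act(self, visual_embed, tactile_embed):
        query = torch.cat([visual_embed, tactile_embed], dim=-1).unsqueeze(0)
        dists = torch.cdist(query, self.keys).squeeze(0)   # distance to every demo frame
        return self.actions[dists.argmin()]                # replay the closest demo action
```

In this sketch the nearest-neighbor lookup simply replays the demonstration action whose combined visual-tactile embedding is closest to the current observation, which is one common way to realize a non-parametric policy.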
Related papers
- Learning Visuotactile Skills with Two Multifingered Hands [80.99370364907278] (arXiv 2024-04-25)
  We explore learning from human demonstrations using a bimanual system with multifingered hands and visuotactile data.
  Our results mark a promising step forward in bimanual multifingered manipulation from visuotactile data.
- DexTouch: Learning to Seek and Manipulate Objects with Tactile Dexterity [12.508332341279177] (arXiv 2024-01-23)
  We introduce a multi-finger robot system designed to search for and manipulate objects using the sense of touch.
  To achieve this, binary tactile sensors are implemented on one side of the robot hand to minimize the Sim2Real gap.
  We demonstrate that object search and manipulation using tactile sensors is possible even in an environment without vision information.
- Robot Synesthesia: In-Hand Manipulation with Visuotactile Sensing [15.970078821894758] (arXiv 2023-12-04)
  We introduce a system that leverages visual and tactile sensory inputs to enable dexterous in-hand manipulation.
  Robot Synesthesia is a novel point cloud-based tactile representation inspired by human tactile-visual synesthesia.
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637] (arXiv 2023-10-04)
  Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with their environments in manipulation tasks.
  We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
  Our Task Fusion Decoder consistently improves the representations of three state-of-the-art visual encoders for downstream manipulation policy learning.
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662] (arXiv 2023-03-10)
  Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
  Vision-based tactile sensors are being widely used for various robotic perception and control tasks.
  We present a method for interactive perception using vision-based tactile sensors for a part-mating task.
- Touch and Go: Learning from Human-Collected Vision and Touch [16.139106833276] (arXiv 2022-11-22)
  We propose a dataset with paired visual and tactile data called Touch and Go.
  Human data collectors probe objects in natural environments using tactile sensors.
  Our dataset spans a large number of "in the wild" objects and scenes.
- Visual-Tactile Multimodality for Following Deformable Linear Objects Using Reinforcement Learning [15.758583731036007] (arXiv 2022-03-31)
  We study the problem of using vision and tactile inputs together to complete the task of following deformable linear objects.
  We create a reinforcement learning agent using different sensing modalities and investigate how its behaviour can be boosted.
  Our experiments show that the use of both vision and tactile inputs, together with proprioception, allows the agent to complete the task in up to 92% of cases.
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696] (arXiv 2021-09-09)
  In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
  We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
  This work takes a step toward dynamics modeling of hand-object interactions from dense tactile sensing. (A hedged sketch of this cross-modal supervision recipe appears at the end of this page.)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247] (arXiv 2021-02-09)
  We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
  We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy. (A rough fine-tuning sketch appears at the end of this page.)
- Visual Imitation Made Easy [102.36509665008732] (arXiv 2020-08-11)
  We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
  We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
  We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.
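
The "Dynamic Modeling of Hand-Object Interactions" entry above describes supervising a tactile model with labels produced by a visual processing pipeline. The sketch below shows one hedged way such cross-modal supervision could be wired up; the glove size, label type, and `visual_pipeline` interface are hypothetical, not taken from that paper.

```python
# Hedged sketch of cross-modal supervision: a visual pipeline produces labels
# for time-synchronized frames, and a tactile network is trained against them.
# The taxel count, label dimensionality, and data format are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TactileToLabel(nn.Module):
    def __init__(self, n_taxels=548, label_dim=7):  # assumed glove size and label size
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_taxels, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, label_dim),
        )

    def forward(self, tactile):
        return self.net(tactile)


def train_with_visual_labels(tactile_model, loader, visual_pipeline, epochs=10):
    """`visual_pipeline(frames)` stands in for whatever vision system produces
    the supervision signal; it is treated as fixed (no gradients flow into it)."""
    opt = torch.optim.Adam(tactile_model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for tactile, frames in loader:            # time-synchronized tactile/video pairs
            with torch.no_grad():
                labels = visual_pipeline(frames)  # pseudo-labels from the vision side
            pred = tactile_model(tactile)
            loss = F.mse_loss(pred, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return tactile_model
```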
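
The "Where is my hand?" entry fine-tunes Mask R-CNN for hand segmentation. The following is a minimal sketch of that kind of adaptation using torchvision's detection models, assuming a two-class setup (background + hand) and the standard torchvision target format; it is an illustration, not the authors' code.

```python
# Hedged illustration of adapting torchvision's Mask R-CNN to a single "hand"
# class. The dataset, class count, and hyperparameters are assumptions.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


def build_hand_segmenter(num_classes=2):  # background + hand
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    # Replace the box head for the new class count.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    # Replace the mask head as well.
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model


def finetune(model, data_loader, epochs=5, lr=5e-3):
    # data_loader yields (images, targets) in the torchvision detection format,
    # where each target dict holds "boxes", "labels", and "masks".
    optimizer = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad], lr=lr, momentum=0.9
    )
    model.train()
    for _ in range(epochs):
        for images, targets in data_loader:
            losses = model(images, targets)   # dict of loss terms in training mode
            loss = sum(losses.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```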