Planning Visual-Tactile Precision Grasps via Complementary Use of Vision and Touch
- URL: http://arxiv.org/abs/2212.08604v1
- Date: Fri, 16 Dec 2022 17:32:56 GMT
- Title: Planning Visual-Tactile Precision Grasps via Complementary Use of Vision and Touch
- Authors: Martin Matak and Tucker Hermans
- Abstract summary: We propose an approach to grasp planning that explicitly reasons about where the fingertips should contact the estimated object surface.
Key to our method's success is the use of visual surface estimation for initial planning to encode the contact constraint.
We show that our method successfully synthesises and executes precision grasps for previously unseen objects using surface estimates from a single camera view.
- Score: 9.31776719215139
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliably planning fingertip grasps for multi-fingered hands lies as a key
challenge for many tasks including tool use, insertion, and dexterous in-hand
manipulation. This task becomes even more difficult when the robot lacks an
accurate model of the object to be grasped. Tactile sensing offers a promising
approach to account for uncertainties in object shape. However, current robotic
hands tend to lack full tactile coverage. As such, a problem arises of how to
plan and execute grasps for multi-fingered hands such that contact is made with
the area covered by the tactile sensors. To address this issue, we propose an
approach to grasp planning that explicitly reasons about where the fingertips
should contact the estimated object surface while maximizing the probability of
grasp success. Key to our method's success is the use of visual surface
estimation for initial planning to encode the contact constraint. The robot
then executes this plan using a tactile-feedback controller that enables the
robot to adapt to online estimates of the object's surface to correct for
errors in the initial plan. Importantly, the robot never explicitly integrates
object pose or surface estimates between visual and tactile sensing, instead it
uses the two modalities in complementary ways. Vision guides the robot's motion
prior to contact; touch updates the plan when contact occurs differently than
predicted from vision. We show that our method successfully synthesises and
executes precision grasps for previously unseen objects using surface estimates
from a single camera view. Further, our approach outperforms a state-of-the-art
multi-fingered grasp planner, while also beating several baselines we propose.
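To make the two stages described in the abstract concrete, below is a minimal Python sketch: a vision-based planner that keeps fingertip contacts on the estimated object surface while maximizing grasp-success probability, followed by a tactile-feedback execution loop that replaces a predicted contact with the sensed one when touch disagrees with vision. This is an illustration under assumptions, not the authors' implementation: all names, data structures, and the random-restart search are hypothetical stand-ins, since the abstract does not specify the planner or controller at this level of detail.

```python
"""Sketch of a vision-then-touch precision-grasp pipeline (illustrative only)."""
from dataclasses import dataclass, field
from typing import Callable, Dict

import numpy as np

Point = np.ndarray  # 3D fingertip contact location


@dataclass
class GraspPlan:
    """Planned fingertip contact points, keyed by finger name."""
    contacts: Dict[str, Point] = field(default_factory=dict)

    def update_contact(self, finger: str, sensed: Point) -> None:
        # Touch overrides the visual prediction once contact is actually made.
        self.contacts[finger] = sensed


def plan_precision_grasp(
    sample_config: Callable[[], Dict[str, Point]],
    project_to_surface: Callable[[Point], Point],
    success_probability: Callable[[Dict[str, Point]], float],
    n_restarts: int = 10,
) -> GraspPlan:
    """Stage 1 (vision): search for fingertip placements on the visually
    estimated surface that maximize the probability of grasp success."""
    best, best_score = None, -np.inf
    for _ in range(n_restarts):
        config = sample_config()
        # Contact constraint: each fingertip must lie on the estimated surface.
        config = {f: project_to_surface(p) for f, p in config.items()}
        score = success_probability(config)
        if score > best_score:
            best, best_score = config, score
    return GraspPlan(contacts=best)


def execute_with_tactile_feedback(
    plan: GraspPlan,
    step_toward: Callable[[str, Point], None],
    in_contact: Callable[[str], bool],
    sensed_contact: Callable[[str], Point],
    max_steps: int = 200,
) -> GraspPlan:
    """Stage 2 (touch): close each finger toward its planned contact; when the
    tactile sensor reports contact, replace the visual target with the sensed point."""
    for finger, target in plan.contacts.items():
        for _ in range(max_steps):
            if in_contact(finger):
                plan.update_contact(finger, sensed_contact(finger))
                break
            step_toward(finger, target)  # vision-guided motion before contact
    return plan
```

The property the sketch tries to preserve from the abstract is that vision and touch are never fused into a single object or surface estimate: vision only sets contact targets before contact, and touch only overwrites a target once contact occurs differently than predicted.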
Related papers
- Learning Visuotactile Skills with Two Multifingered Hands [80.99370364907278]
We explore learning from human demonstrations using a bimanual system with multifingered hands and visuotactile data.
Our results mark a promising step forward in bimanual multifingered manipulation from visuotactile data.
arXiv Detail & Related papers (2024-04-25T17:59:41Z)
- PseudoTouch: Efficiently Imaging the Surface Feel of Objects for Robotic Manipulation [8.997347199266592]
Our goal is to equip robots with a similar capability, which we term PseudoTouch.
We frame this problem as the task of learning a low-dimensional visual-tactile embedding.
Using ReSkin, we collect and train PseudoTouch on a dataset comprising aligned tactile and visual data pairs.
We demonstrate the efficacy of PseudoTouch through its application to two downstream tasks: object recognition and grasp stability prediction.
arXiv Detail & Related papers (2024-03-22T10:51:31Z)
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
- Tactile Estimation of Extrinsic Contact Patch for Stable Placement [64.06243248525823]
We present the design of feedback skills for robots that must learn to stack complex-shaped objects on top of each other.
We estimate the contact patch between a grasped object and its environment using force and tactile observations.
arXiv Detail & Related papers (2023-09-25T21:51:48Z)
- Attention for Robot Touch: Tactile Saliency Prediction for Robust Sim-to-Real Tactile Control [12.302685367517718]
High-resolution tactile sensing can provide accurate information about local contact in contact-rich robotic tasks.
We study a new concept: tactile saliency for robot touch, inspired by the human touch attention mechanism from neuroscience.
arXiv Detail & Related papers (2023-07-26T21:19:45Z)
- Dexterity from Touch: Self-Supervised Pre-Training of Tactile Representations with Robotic Play [15.780086627089885]
T-Dex is a new approach for tactile-based dexterity that operates in two phases.
In the first phase, we collect 2.5 hours of play data, which is used to train self-supervised tactile encoders.
In the second phase, given a handful of demonstrations for a dexterous task, we learn non-parametric policies that combine the tactile observations with visual ones.
arXiv Detail & Related papers (2023-03-21T17:59:20Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Semi-Supervised Disentanglement of Tactile Contact Geometry from Sliding-Induced Shear [12.004939546183355]
The sense of touch is fundamental to human dexterity.
When mimicked in robotic touch, particularly by use of soft optical tactile sensors, it suffers from distortion due to motion-dependent shear.
In this work, we pursue a semi-supervised approach to remove shear while preserving contact-only information.
arXiv Detail & Related papers (2022-08-26T08:30:19Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step toward dynamics modeling of hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
- Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z)