Neural feels with neural fields: Visuo-tactile perception for in-hand
manipulation
- URL: http://arxiv.org/abs/2312.13469v1
- Date: Wed, 20 Dec 2023 22:36:37 GMT
- Title: Neural feels with neural fields: Visuo-tactile perception for in-hand
manipulation
- Authors: Sudharshan Suresh, Haozhi Qi, Tingfan Wu, Taosha Fan, Luis Pineda,
Mike Lambeta, Jitendra Malik, Mrinal Kalakrishnan, Roberto Calandra, Michael
Kaess, Joseph Ortiz, Mustafa Mukadam
- Abstract summary: We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
- Score: 57.60490773016364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To achieve human-level dexterity, robots must infer spatial awareness from
multimodal sensing to reason over contact interactions. During in-hand
manipulation of novel objects, such spatial awareness involves estimating the
object's pose and shape. The status quo for in-hand perception primarily
employs vision and is restricted to tracking a priori known objects. Moreover,
visual occlusion of in-hand objects is unavoidable during manipulation, preventing
current systems from pushing beyond occlusion-free tasks. We combine vision and
touch sensing on a multi-fingered hand to estimate an object's pose and shape
during in-hand manipulation. Our method, NeuralFeels, encodes object geometry
by learning a neural field online and jointly tracks it by optimizing a pose
graph problem. We study multimodal in-hand perception in simulation and the
real world, interacting with different objects via a proprioception-driven
policy. Our experiments show final reconstruction F-scores of $81$% and average
pose drifts of $4.7\,\text{mm}$, further reduced to $2.3\,\text{mm}$ with known
CAD models. Additionally, we observe that under heavy visual occlusion we can
achieve up to $94$% improvements in tracking compared to vision-only methods.
Our results demonstrate that touch, at the very least, refines and, at the very
best, disambiguates visual estimates during in-hand manipulation. We release
our evaluation dataset of 70 experiments, FeelSight, as a step towards
benchmarking in this domain. Our neural representation driven by multimodal
sensing can serve as a perception backbone towards advancing robot dexterity.
Videos can be found on our project website
https://suddhu.github.io/neural-feels/
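
The abstract describes two coupled ingredients: an object signed-distance field (SDF) learned online as a neural field, and object pose tracking formulated as an optimization (a pose-graph problem in the paper). The sketch below is a minimal, hypothetical illustration of that coupling in PyTorch, not the authors' implementation: a small MLP SDF is fit to fused visual and tactile surface points while a per-frame gradient step refines the object pose so the same points stay on the current zero level set. The names (`SDFNet`, `track_and_reconstruct`), the axis-angle pose parameterization, and the alternating update schedule are all assumptions.

```python
# Minimal sketch (not the authors' code): jointly learn an object SDF online
# and refine the object pose from fused visual + tactile surface points.
# Assumptions: each frame yields an (N, 3) tensor of world-frame surface points,
# the pose is parameterized as translation + axis-angle rotation, and the
# paper's pose-graph solve is replaced by per-frame gradient refinement.
import torch
import torch.nn as nn


class SDFNet(nn.Module):
    """Small coordinate MLP mapping object-frame 3D points to signed distance."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x).squeeze(-1)


def axis_angle_to_matrix(w: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula for a (3,) axis-angle vector, differentiable in w."""
    theta = w.norm() + 1e-8
    k = w / theta
    zero = w.new_zeros(())
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    eye = torch.eye(3, dtype=w.dtype, device=w.device)
    return eye + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)


def track_and_reconstruct(frames, steps_per_frame: int = 20):
    """Alternate shape fitting and pose refinement over a stream of point sets."""
    sdf = SDFNet()
    shape_opt = torch.optim.Adam(sdf.parameters(), lr=1e-3)

    # Object pose in the world frame: p_world = R(rotvec) @ p_obj + trans.
    rotvec = torch.zeros(3, requires_grad=True)
    trans = torch.zeros(3, requires_grad=True)
    pose_opt = torch.optim.Adam([rotvec, trans], lr=1e-2)

    for points_world in frames:          # fused vision + touch surface samples
        for _ in range(steps_per_frame):
            # (a) Shape update: surface points should lie on the zero level set.
            R = axis_angle_to_matrix(rotvec)
            points_obj = (points_world - trans) @ R   # apply R^T via right-multiplication
            shape_loss = sdf(points_obj.detach()).abs().mean()
            shape_opt.zero_grad()
            shape_loss.backward()
            shape_opt.step()

            # (b) Pose update: move the pose so the points agree with the frozen SDF.
            R = axis_angle_to_matrix(rotvec)
            points_obj = (points_world - trans) @ R
            pose_loss = sdf(points_obj).abs().mean()
            pose_opt.zero_grad()
            pose_loss.backward()
            pose_opt.step()

    return sdf, rotvec.detach(), trans.detach()
```

In the real system the point sets would come from segmented depth and tactile sensing, and the pose would be estimated by the pose-graph optimization described in the abstract; the sketch only shows how shape and pose estimates can be improved jointly from the same fused measurements.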
Related papers
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - Visual-Tactile Multimodality for Following Deformable Linear Objects
Using Reinforcement Learning [15.758583731036007]
We study the problem of using vision and tactile inputs together to complete the task of following deformable linear objects.
We create a Reinforcement Learning agent using different sensing modalities and investigate how its behaviour can be boosted.
Our experiments show that the use of both vision and tactile inputs, together with proprioception, allows the agent to complete the task in up to 92% of cases.
arXiv Detail & Related papers (2022-03-31T21:59:08Z) - Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z) - Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via
Implicit Representations [20.155920256334706]
We show that 3D reconstruction and grasp learning are two intimately connected tasks.
We propose to utilize the synergies between grasp affordance and 3D reconstruction through multi-task learning of a shared representation.
Our method outperforms baselines by over 10% in terms of grasp success rate.
arXiv Detail & Related papers (2021-04-04T05:46:37Z) - Where is my hand? Deep hand segmentation for visual self-recognition in
humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z) - UNOC: Understanding Occlusion for Embodied Presence in Virtual Reality [12.349749717823736]
In this paper, we propose a new data-driven framework for inside-out body tracking.
We first collect a large-scale motion capture dataset with both body and finger motions.
We then simulate the occlusion patterns in head-mounted camera views on the captured ground truth using a ray casting algorithm and learn a deep neural network to infer the occluded body parts (an illustrative ray-casting sketch follows this list).
arXiv Detail & Related papers (2020-11-12T09:31:09Z) - What Can You Learn from Your Muscles? Learning Visual Representation
from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms a visual-only state-of-the-art method MoCo.
arXiv Detail & Related papers (2020-10-16T17:46:53Z) - Physics-Based Dexterous Manipulations with Estimated Hand Poses and
Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z) - Learning Object Placements For Relational Instructions by Hallucinating
Scene Representations [26.897316325189205]
We present a convolutional neural network for estimating pixelwise object placement probabilities for a set of spatial relations from a single input image.
Our method does not require ground truth data for the pixelwise relational probabilities or 3D models of the objects.
Results obtained using real-world data and human-robot experiments demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2020-01-23T12:58:50Z)
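
As a purely illustrative aside on the UNOC entry above (not that paper's pipeline): its occlusion simulation can be pictured as ray casting from a head-mounted camera to each ground-truth joint and marking joints whose rays are blocked. The sketch below assumes body parts are approximated by spheres; all names, the sphere approximation, and the toy numbers are invented for illustration.

```python
# Illustrative only: ray-cast occlusion labels for skeleton joints as seen from
# a head-mounted camera, with body parts approximated by occluding spheres.
import numpy as np


def ray_hits_sphere(origin, direction, t_max, center, radius):
    """True if origin + t * direction (0 < t < t_max) passes through the sphere."""
    oc = center - origin
    t_closest = float(np.dot(oc, direction))        # ray parameter of closest approach
    if t_closest <= 0.0 or t_closest >= t_max:
        return False                                # sphere is behind the camera or beyond the joint
    closest = origin + t_closest * direction
    return float(np.linalg.norm(center - closest)) < radius


def occlusion_labels(camera, joints, occluders, eps=1e-3):
    """Boolean array, True where a joint is hidden from the camera.

    camera:    (3,) camera position
    joints:    (J, 3) ground-truth joint positions
    occluders: list of (center (3,), radius) spheres standing in for body parts
    """
    labels = np.zeros(len(joints), dtype=bool)
    for j, joint in enumerate(joints):
        to_joint = joint - camera
        dist = float(np.linalg.norm(to_joint))
        direction = to_joint / dist
        for center, radius in occluders:
            # Shorten the ray slightly so a sphere centered on the joint itself
            # does not trivially occlude it.
            if ray_hits_sphere(camera, direction, dist - eps, np.asarray(center), radius):
                labels[j] = True
                break
    return labels


if __name__ == "__main__":
    cam = np.array([0.0, 1.6, 0.0])                      # camera roughly at head height
    joints = np.array([[0.0, 1.0, 0.5], [0.0, 1.0, 1.5]])
    blocker = (np.array([0.0, 1.3, 0.75]), 0.15)         # sphere between camera and second joint
    print(occlusion_labels(cam, joints, [blocker]))      # -> [False  True]
```

In UNOC the ray casting presumably tests against the captured body geometry itself rather than spheres; the sketch only shows how such occlusion labels can be generated geometrically from ground truth.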