Visual-Tactile Sensing for In-Hand Object Reconstruction
- URL: http://arxiv.org/abs/2303.14498v1
- Date: Sat, 25 Mar 2023 15:16:31 GMT
- Title: Visual-Tactile Sensing for In-Hand Object Reconstruction
- Authors: Wenqiang Xu, Zhenjun Yu, Han Xue, Ruolin Ye, Siqiong Yao, Cewu Lu
- Abstract summary: We propose a visual-tactile in-hand object reconstruction framework \textbf{VTacO}, and extend it to \textbf{VTacOH} for hand-object reconstruction.
A simulation environment, VT-Sim, supports generating hand-object interaction for both rigid and deformable objects.
- Score: 38.42487660352112
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tactile sensing is one of the modalities humans rely on heavily to perceive
the world. Working with vision, this modality refines local geometry structure,
measures deformation at the contact area, and indicates the hand-object contact
state.
With the availability of open-source tactile sensors such as DIGIT, research
on visual-tactile learning is becoming more accessible and reproducible.
Leveraging this tactile sensor, we propose a novel visual-tactile in-hand
object reconstruction framework \textbf{VTacO}, and extend it to
\textbf{VTacOH} for hand-object reconstruction. Since our method supports both
rigid and deformable object reconstruction, no existing benchmark is suitable
for this goal. We therefore propose a simulation environment, VT-Sim, which
supports generating hand-object interaction for both rigid and deformable
objects. With VT-Sim, we generate a large-scale training dataset and evaluate
our method on it. Extensive experiments demonstrate that our proposed method
can outperform the previous baseline methods qualitatively and quantitatively.
Finally, we directly apply our model trained in simulation to various
real-world test cases and present qualitative results.
Codes, models, simulation environment, and datasets are available at
\url{https://sites.google.com/view/vtaco/}.
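As a rough, hypothetical illustration of how visual and tactile observations might be fused for implicit in-hand reconstruction, the PyTorch sketch below encodes a partial point cloud and a tactile imprint, concatenates the two codes, and decodes occupancy at 3D query points. The module names, feature sizes, and occupancy-style decoder are assumptions, not the released VTacO architecture.

```python
# Hypothetical sketch of visual-tactile fusion for implicit reconstruction.
# Module names, feature sizes, and the occupancy decoder are illustrative
# assumptions, not the released VTacO implementation.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, pts):                        # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values     # (B, dim)

class TactileEncoder(nn.Module):
    """Small CNN over tactile depth patches from a DIGIT-like sensor."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
    def forward(self, patch):                      # patch: (B, 1, H, W)
        return self.conv(patch)                    # (B, dim)

class ImplicitDecoder(nn.Module):
    """Predict occupancy for query points conditioned on the fused code."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim + 3, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, code, queries):              # code: (B, dim), queries: (B, Q, 3)
        code = code.unsqueeze(1).expand(-1, queries.shape[1], -1)
        return torch.sigmoid(self.mlp(torch.cat([code, queries], dim=-1)))

if __name__ == "__main__":
    vis, tac, dec = PointEncoder(), TactileEncoder(), ImplicitDecoder()
    pts = torch.rand(2, 1024, 3)      # partial point cloud from vision
    patch = torch.rand(2, 1, 64, 64)  # tactile imprint at the contact
    queries = torch.rand(2, 512, 3)   # 3D query points
    fused = torch.cat([vis(pts), tac(patch)], dim=-1)
    occ = dec(fused, queries)         # (2, 512, 1) occupancy probabilities
```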
Related papers
- Learning Explicit Contact for Implicit Reconstruction of Hand-held
Objects from Monocular Images [59.49985837246644]
We show how to model contacts in an explicit way to benefit the implicit reconstruction of hand-held objects.
In the first part, we propose a new subtask of directly estimating 3D hand-object contacts from a single image.
In the second part, we introduce a novel method to diffuse estimated contact states from the hand mesh surface to nearby 3D space.
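A minimal sketch of one way such contact diffusion could be realized, spreading per-vertex contact probabilities from the hand mesh to nearby 3D query points with a distance-weighted kernel; the Gaussian kernel and bandwidth are assumptions, not the paper's formulation.

```python
# Illustrative sketch: spreading per-vertex contact probabilities from a hand
# mesh surface to nearby 3D query points with a Gaussian distance weighting.
import numpy as np

def diffuse_contact(hand_verts, vert_contact, queries, sigma=0.01):
    """hand_verts: (V, 3), vert_contact: (V,) in [0, 1], queries: (Q, 3)."""
    d = np.linalg.norm(queries[:, None, :] - hand_verts[None, :, :], axis=-1)  # (Q, V)
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))        # Gaussian falloff per vertex
    w_sum = w.sum(axis=1) + 1e-8
    return (w * vert_contact).sum(axis=1) / w_sum   # (Q,) soft contact values

verts = np.random.rand(778, 3) * 0.1                # e.g. a MANO-sized hand mesh
contact = (np.random.rand(778) > 0.9).astype(float) # binary per-vertex contact
queries = np.random.rand(256, 3) * 0.1
field = diffuse_contact(verts, contact, queries)    # diffused contact per query
```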
arXiv Detail & Related papers (2023-05-31T17:59:26Z)
- Integrated Object Deformation and Contact Patch Estimation from Visuo-Tactile Feedback [8.420670642409219]
We propose a representation that jointly models object deformations and contact patches from visuo-tactile feedback.
We propose a neural network architecture to learn a NDCF, and train it using simulated data.
We demonstrate that the learned NDCF transfers directly to the real-world without the need for fine-tuning.
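A hedged sketch of a field that jointly outputs a deformation offset and a contact probability per 3D query point, conditioned on a latent code from visuo-tactile observations; all architecture details below are assumptions, not the paper's NDCF.

```python
# Rough sketch of a joint deformation-and-contact field conditioned on a
# latent code. Layer sizes and heads are illustrative assumptions.
import torch
import torch.nn as nn

class DeformationContactField(nn.Module):
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.deform_head = nn.Linear(hidden, 3)     # per-point displacement
        self.contact_head = nn.Linear(hidden, 1)    # per-point contact logit

    def forward(self, latent, queries):             # latent: (B, D), queries: (B, Q, 3)
        latent = latent.unsqueeze(1).expand(-1, queries.shape[1], -1)
        h = self.backbone(torch.cat([latent, queries], dim=-1))
        return self.deform_head(h), torch.sigmoid(self.contact_head(h))

field = DeformationContactField()
deform, contact = field(torch.rand(4, 128), torch.rand(4, 256, 3))
```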
arXiv Detail & Related papers (2023-05-23T18:53:24Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step on dynamics modeling in hand-object interactions from dense tactile sensing.
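A toy sketch of the cross-modal supervision pattern described above, where a visual pipeline produces pseudo-labels for training the tactile model; both networks, the glove resolution, and the label format are placeholders, not the paper's pipeline.

```python
# Minimal cross-modal supervision loop: a stand-in visual pipeline generates
# pseudo-labels that supervise a tactile model trained on glove readings.
import torch
import torch.nn as nn

visual_pipeline = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 16))   # stand-in
tactile_model = nn.Sequential(nn.Flatten(), nn.Linear(26 * 26, 64), nn.ReLU(),
                              nn.Linear(64, 16))                            # 26x26 taxel grid
opt = torch.optim.Adam(tactile_model.parameters(), lr=1e-3)

for _ in range(10):                                  # toy training loop
    frames = torch.rand(8, 3, 32, 32)                # synchronized camera frames
    tactile = torch.rand(8, 1, 26, 26)               # tactile glove readings
    with torch.no_grad():
        labels = visual_pipeline(frames)             # pseudo-labels from vision
    loss = nn.functional.mse_loss(tactile_model(tactile), labels)
    opt.zero_grad(); loss.backward(); opt.step()
```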
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
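A toy, self-contained stand-in for a particle-based elastic membrane: spring forces pull particles back to their rest grid while an indenting sphere pushes them out, leaving a per-particle deformation imprint. This illustrates the general idea only and is not the EIP formulation.

```python
# Toy particle membrane: elastic restoring forces plus a hard sphere contact
# constraint. Parameters and the projection step are illustrative assumptions.
import numpy as np

rest = np.stack(np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20)), -1)
rest = np.concatenate([rest.reshape(-1, 2), np.zeros((400, 1))], axis=1)   # flat pad
pos, vel = rest.copy(), np.zeros_like(rest)
k, damping, dt = 50.0, 0.9, 0.01
sphere_c, sphere_r = np.array([0.5, 0.5, 0.05]), 0.15                      # indenting object

for _ in range(200):
    force = -k * (pos - rest)                        # elastic restoring force
    vel = damping * (vel + dt * force)
    pos = pos + dt * vel
    d = pos - sphere_c
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    inside = (dist < sphere_r).squeeze()
    pos[inside] = sphere_c + d[inside] / dist[inside] * sphere_r           # project out of sphere

imprint = rest[:, 2] - pos[:, 2]   # per-particle indentation depth ("tactile image")
```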
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Active 3D Shape Reconstruction from Vision and Touch [66.08432412497443]
Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch.
In 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings.
We introduce a system composed of: 1) a haptic simulator leveraging high spatial resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile priors to guide the shape exploration; and 3) a set of data-driven solutions with either tactile or visuotactile priors.
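A hedged sketch of an active touch-selection loop that probes wherever the current reconstruction is most uncertain; the uncertainty measure, update rule, and simulator stub are placeholders, not the paper's exploration policy.

```python
# Illustrative active-touch loop: repeatedly probe the most uncertain
# candidate surface point and reduce uncertainty in its neighborhood.
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.uniform(-1, 1, size=(500, 3))       # candidate touch sites
uncertainty = np.ones(500)                           # start fully uncertain

def simulate_touch(point):
    """Stand-in for the haptic simulator: returns a scalar local reading."""
    return np.linalg.norm(point)                     # dummy signal

for step in range(10):
    idx = int(np.argmax(uncertainty))                # most uncertain site
    reading = simulate_touch(candidates[idx])
    d = np.linalg.norm(candidates - candidates[idx], axis=1)
    uncertainty *= 1.0 - np.exp(-(d ** 2) / (2 * 0.2 ** 2))   # local reduction
    print(f"step {step}: touched {candidates[idx].round(2)}, reading {reading:.3f}")
```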
arXiv Detail & Related papers (2021-07-20T15:56:52Z)
- SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation [15.477950393687836]
We present SoftGym, a set of open-source simulated benchmarks for manipulating deformable objects.
We evaluate a variety of algorithms on these tasks and highlight challenges for reinforcement learning algorithms.
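For orientation, a generic reset/step evaluation loop of the kind such benchmarks expose; the DummyClothEnv below is a self-contained stand-in so the snippet runs on its own, not SoftGym's actual environment API (see the SoftGym repository for real registration and tasks).

```python
# Generic gym-style evaluation loop with a placeholder deformable-object env.
import numpy as np

class DummyClothEnv:
    """Placeholder with a gym-like interface: reset() -> obs, step(a) -> (obs, r, done, info)."""
    def __init__(self, horizon=50):
        self.horizon, self.t = horizon, 0
    def reset(self):
        self.t = 0
        return np.zeros(8)                       # e.g. flattened cloth keypoints
    def step(self, action):
        self.t += 1
        obs = np.random.randn(8)
        reward = -np.linalg.norm(action)         # dummy shaping term
        return obs, reward, self.t >= self.horizon, {}

env = DummyClothEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = np.random.uniform(-1, 1, size=4)    # replace with a learned policy
    obs, reward, done, info = env.step(action)
    total += reward
print("episode return:", round(total, 3))
```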
arXiv Detail & Related papers (2020-11-14T03:46:59Z)
- 3D Shape Reconstruction from Vision and Touch [62.59044232597045]
In 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored.
We introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects.
arXiv Detail & Related papers (2020-07-07T20:20:33Z)
- Learning the sense of touch in simulation: a sim-to-real strategy for vision-based tactile sensing [1.9981375888949469]
This paper focuses on a vision-based tactile sensor, which aims to reconstruct the distribution of the three-dimensional contact forces applied on its soft surface.
A strategy is proposed to train a tailored deep neural network entirely from the simulation data.
The resulting learning architecture is directly transferable across multiple tactile sensors without further training and yields accurate predictions on real data.
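A minimal sketch of the sim-only training recipe: a network maps a tactile image to a dense per-taxel 3D force map using synthetic pairs. The data generator, resolution, and network below are assumptions standing in for the paper's physics-based simulation.

```python
# Toy sim-to-real training sketch: tactile image -> dense 3D contact-force map,
# trained purely on synthetic (image, force) pairs.
import torch
import torch.nn as nn

def synthetic_batch(b=16, res=32):
    """Stand-in for simulated (tactile image, force map) pairs."""
    img = torch.rand(b, 1, res, res)
    force = torch.randn(b, 3, res, res) * img     # 3 force components per taxel
    return img, force

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 1))                          # (B, 3, H, W) force field
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(20):                            # toy training loop
    img, force = synthetic_batch()
    loss = nn.functional.mse_loss(model(img), force)
    opt.zero_grad(); loss.backward(); opt.step()
```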
arXiv Detail & Related papers (2020-03-05T14:17:45Z)