Integrated Object Deformation and Contact Patch Estimation from
Visuo-Tactile Feedback
- URL: http://arxiv.org/abs/2305.14470v1
- Date: Tue, 23 May 2023 18:53:24 GMT
- Title: Integrated Object Deformation and Contact Patch Estimation from
Visuo-Tactile Feedback
- Authors: Mark Van der Merwe, Youngsun Wi, Dmitry Berenson, Nima Fazeli
- Abstract summary: We propose a representation that jointly models object deformations and contact patches from visuo-tactile feedback.
We propose a neural network architecture to learn an NDCF and train it using simulated data.
We demonstrate that the learned NDCF transfers directly to the real-world without the need for fine-tuning.
- Score: 8.420670642409219
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning over the interplay between object deformation and force
transmission through contact is central to the manipulation of compliant
objects. In this paper, we propose Neural Deforming Contact Field (NDCF), a
representation that jointly models object deformations and contact patches from
visuo-tactile feedback using implicit representations. Representing the object
geometry and contact with the environment implicitly allows a single model to
predict contact patches of varying complexity. Additionally, learning geometry
and contact simultaneously allows us to enforce physical priors, such as
ensuring contacts lie on the surface of the object. We propose a neural network
architecture to learn an NDCF and train it using simulated data. We then
demonstrate that the learned NDCF transfers directly to the real-world without
the need for fine-tuning. We benchmark our proposed approach against a baseline
representing geometry and contact patches with point clouds. We find that NDCF
performs better on simulated data and in transfer to the real-world.
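The abstract's key physical prior, that predicted contacts must lie on the predicted object surface, can be illustrated with a small sketch. The snippet below is a hypothetical, simplified stand-in (not the paper's model): a single implicit field maps a 3D query point to both a signed distance (geometry) and a raw contact score, and contact is admitted only near the zero level set of the signed distance. The analytic "field" (a unit sphere with contact likelihood peaking at the bottom pole) replaces the learned network purely for illustration.

```python
import math

def implicit_field(p):
    """Stand-in for a learned implicit model.

    Returns (sdf, raw_contact) for a 3D query point p = (x, y, z).
    Here the 'object' is a unit sphere, and the raw contact score
    grows toward the bottom pole (-z), mimicking a resting contact.
    """
    x, y, z = p
    sdf = math.sqrt(x * x + y * y + z * z) - 1.0       # signed distance to the sphere
    raw_contact = 1.0 / (1.0 + math.exp(5.0 * z))      # sigmoid: higher near -z
    return sdf, raw_contact

def contact_prob(p, sdf_band=0.05):
    """Surface prior: admit contact only where |sdf| is near zero."""
    sdf, raw = implicit_field(p)
    return raw if abs(sdf) < sdf_band else 0.0

# A point on the lower surface keeps its contact score; a point far
# off-surface is zeroed out by the prior regardless of its raw score.
on_surface_prob = contact_prob((0.0, 0.0, -1.0))
off_surface_prob = contact_prob((0.0, 0.0, 2.0))
```

Because geometry and contact come from the same field, the prior is enforced by construction at query time rather than by a separate post-processing step.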
Related papers
- Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers [59.0181939916084]
Traditional 3D networks mainly focus on local geometric details and ignore the topological structure between local geometries.
We propose a novel Priors Distillation (RPD) method to extract priors from the well-trained transformers on massive images.
Experiments on the PointDA-10 and the Sim-to-Real datasets verify that the proposed method consistently achieves the state-of-the-art performance of UDA for point cloud classification.
arXiv Detail & Related papers (2024-07-26T06:29:09Z)
- Physics-Encoded Graph Neural Networks for Deformation Prediction under Contact [87.69278096528156]
In robotics, it's crucial to understand object deformation during tactile interactions.
We introduce a method using Physics-Encoded Graph Neural Networks (GNNs) for such predictions.
We've made our code and dataset public to advance research in robotic simulation and grasping.
arXiv Detail & Related papers (2024-02-05T19:21:52Z)
- DeepSimHO: Stable Pose Estimation for Hand-Object Interaction via Physics Simulation [81.11585774044848]
We present DeepSimHO, a novel deep-learning pipeline that combines forward physics simulation and backward gradient approximation with a neural network.
Our method noticeably improves the stability of the estimation and achieves superior efficiency over test-time optimization.
arXiv Detail & Related papers (2023-10-11T05:34:36Z)
- Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z)
- Nonrigid Object Contact Estimation With Regional Unwrapping Transformer [16.988812837693203]
Acquiring contact patterns between hands and nonrigid objects is a common concern in the vision and robotics community.
Existing learning-based methods focus mainly on contact with rigid objects from monocular images.
We propose a novel hand-object contact representation called RUPs, which unwraps the roughly estimated hand-object surfaces as multiple high-resolution 2D regional profiles.
arXiv Detail & Related papers (2023-08-27T11:37:26Z)
- Visual-Tactile Sensing for In-Hand Object Reconstruction [38.42487660352112]
We propose a visual-tactile in-hand object reconstruction framework VTacO, and extend it to VTacOH for hand-object reconstruction.
A simulation environment, VT-Sim, supports generating hand-object interaction for both rigid and deformable objects.
arXiv Detail & Related papers (2023-03-25T15:16:31Z)
- Stability-driven Contact Reconstruction From Monocular Color Images [7.427212296770506]
Physical contact provides additional constraints for hand-object state reconstruction.
Existing methods optimize the hand-object contact driven by distance threshold or prior from contact-labeled datasets.
Our key idea is to reconstruct the contact pattern directly from monocular images, and then utilize the physical stability criterion in the simulation to optimize it.
arXiv Detail & Related papers (2022-05-02T12:23:06Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration [67.69257782645789]
We propose piecewise transformation fields that learn 3D translation vectors to map any query point in posed space to its corresponding position in rest-pose space.
We show that fitting parametric models with poses by our network results in much better registration quality, especially for extreme poses.
arXiv Detail & Related papers (2021-04-16T15:16:09Z)
- Tactile Object Pose Estimation from the First Touch with Geometric Contact Rendering [19.69677059281393]
We present an approach to tactile pose estimation from the first touch for known objects.
We create an object-agnostic map from real tactile observations to contact shapes.
For a new object with known geometry, we learn a tailored perception model completely in simulation.
arXiv Detail & Related papers (2020-12-09T18:00:35Z)
- Learning the sense of touch in simulation: a sim-to-real strategy for vision-based tactile sensing [1.9981375888949469]
This paper focuses on a vision-based tactile sensor, which aims to reconstruct the distribution of the three-dimensional contact forces applied on its soft surface.
A strategy is proposed to train a tailored deep neural network entirely from the simulation data.
The resulting learning architecture is directly transferable across multiple tactile sensors without further training and yields accurate predictions on real data.
arXiv Detail & Related papers (2020-03-05T14:17:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.