Probabilistic Surface Friction Estimation Based on Visual and Haptic
Measurements
- URL: http://arxiv.org/abs/2010.08277v3
- Date: Fri, 12 Mar 2021 15:14:16 GMT
- Title: Probabilistic Surface Friction Estimation Based on Visual and Haptic
Measurements
- Authors: Tran Nguyen Le and Francesco Verdoja and Fares J. Abu-Dakka and Ville
Kyrki
- Abstract summary: We propose a joint visuo-haptic object model that enables the estimation of surface friction coefficient over an entire object.
We demonstrate the validity of the proposed method by showing its ability to estimate varying friction coefficients on a range of real multi-material objects.
- Score: 17.477520575909193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately modeling local surface properties of objects is crucial to many
robotic applications, from grasping to material recognition. Surface properties
like friction are, however, difficult to estimate, as visual observation of the
object does not convey enough information about these properties. In contrast,
haptic exploration is time-consuming, as it only provides information relevant
to the explored parts of the object. In this work, we propose a joint
visuo-haptic object model that enables the estimation of surface friction
coefficient over an entire object by exploiting the correlation of visual and
haptic information, together with a limited haptic exploration by a robotic
arm. We demonstrate the validity of the proposed method by showing its ability
to estimate varying friction coefficients on a range of real multi-material
objects. Furthermore, we illustrate how the estimated friction coefficients can
improve grasping success rate by guiding a grasp planner toward high-friction areas.
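As an illustration of the idea in the abstract, a handful of haptic friction measurements can be propagated over the whole surface through their correlation with visual appearance. Below is a minimal sketch of that pattern using Gaussian process regression; the per-point features, the synthetic data, and the regressor choice are assumptions for illustration, not the paper's actual model.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact model):
# propagate a few haptic friction measurements over an object's surface by
# regressing friction against per-point visual features.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Per-point visual descriptors for the whole surface (e.g. color + normal),
# N points x D features; random stand-ins here.
visual_features = rng.random((500, 6))

# Indices of the few patches explored haptically, and the friction
# coefficients measured there (synthetic ground truth for the sketch).
explored_idx = rng.choice(500, size=20, replace=False)
measured_friction = (0.3 + 0.4 * visual_features[explored_idx, 0]
                     + 0.02 * rng.standard_normal(20))

# Correlate visual appearance with the measured friction values ...
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(visual_features[explored_idx], measured_friction)

# ... and predict a friction coefficient (with uncertainty) for every point.
mu, sigma = gp.predict(visual_features, return_std=True)

# A grasp planner could then prefer contacts with high expected friction
# and low predictive uncertainty.
score = mu - sigma
candidate_points = np.argsort(score)[-10:]
```

In this toy setup the predictive uncertainty also indicates which unexplored regions would benefit most from further haptic probing.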
Related papers
- Tailoring Frictional Properties of Surfaces Using Diffusion Models [0.0]
This Letter introduces an approach for precisely designing surface friction properties using a conditional generative machine learning model.
We created a dataset of synthetic surfaces with frictional properties determined by molecular dynamics simulations, which was used to train a denoising diffusion probabilistic model (DDPM) to predict surface structures from desired frictional outcomes.
arXiv Detail & Related papers (2024-01-05T09:15:07Z)
- On the importance of catalyst-adsorbate 3D interactions for relaxed energy predictions [98.70797778496366]
We investigate whether it is possible to predict a system's relaxed energy in the OC20 dataset while ignoring the relative position of the adsorbate.
We find that, while removing binding site information impairs accuracy as expected, the modified models are still able to predict relaxed energies with remarkably good mean absolute error (MAE).
arXiv Detail & Related papers (2023-10-10T14:57:04Z)
- Physics-Based Rigid Body Object Tracking and Friction Filtering From RGB-D Videos [8.012771454339353]
We propose a novel real-to-sim approach that tracks rigid objects in 3D from RGB-D images and infers the physical properties of the objects.
We demonstrate and evaluate our approach on a real-world dataset.
arXiv Detail & Related papers (2023-09-27T14:46:01Z)
- CRIPP-VQA: Counterfactual Reasoning about Implicit Physical Properties via Video Question Answering [50.61988087577871]
We introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene.
CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning.
Our experiments reveal a surprising and significant performance gap when answering questions about implicit properties.
arXiv Detail & Related papers (2022-11-07T18:55:26Z)
- Visual Looming from Motion Field and Surface Normals [0.0]
Looming, traditionally defined as the relative expansion of objects in the observer's retina, is a fundamental visual cue for the perception of threat and can be used to accomplish collision-free navigation.
We derive novel solutions for obtaining visual looming quantitatively from the 2D motion field resulting from a six-degree-of-freedom motion of an observer relative to a local surface in 3D.
We present novel methods to estimate visual looming from spatial derivatives of optical flow without needing to know range (a toy flow-divergence sketch follows this list).
arXiv Detail & Related papers (2022-10-08T21:36:49Z)
- Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions [82.90906153293585]
We propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task.
We show that the proposed network, which consumes dynamic descriptors, achieves state-of-the-art prediction results and generalizes better to unseen objects.
arXiv Detail & Related papers (2022-06-25T09:55:39Z)
- Visual Vibration Tomography: Estimating Interior Material Properties from Monocular Video [66.94502090429806]
An object's interior material properties, while invisible to the human eye, determine motion observed on its surface.
We propose an approach that estimates heterogeneous material properties of an object from a monocular video of its surface vibrations.
arXiv Detail & Related papers (2021-04-06T18:05:27Z)
- Learning to Slide Unknown Objects with Differentiable Physics Simulations [16.86640234046472]
We propose a new technique for pushing an unknown object from an initial configuration to a goal configuration with stability constraints.
The proposed method leverages recent progress in differentiable physics models to learn unknown mechanical properties of pushed objects.
arXiv Detail & Related papers (2020-05-11T21:53:33Z)
- Identifying Mechanical Models through Differentiable Simulations [16.86640234046472]
This paper proposes a new method for manipulating unknown objects through a sequence of non-prehensile actions.
The proposed method leverages recent progress in differentiable physics models to identify unknown mechanical properties of manipulated objects.
arXiv Detail & Related papers (2020-05-11T20:19:20Z)
- Occlusion resistant learning of intuitive physics from videos [52.25308231683798]
A key ability for artificial systems is to understand physical interactions between objects and to predict future outcomes of a situation.
This ability, often referred to as intuitive physics, has recently received attention, and several methods have been proposed to learn these physical rules from video sequences.
arXiv Detail & Related papers (2020-04-30T19:35:54Z)
- Visual Grounding of Learned Physical Models [66.04898704928517]
Humans intuitively recognize objects' physical properties and predict their motion, even when the objects are engaged in complicated interactions.
We present a neural model that simultaneously reasons about physics and makes future predictions based on visual and dynamics priors.
Experiments show that our model can infer the physical properties within a few observations, which allows the model to quickly adapt to unseen scenarios and make accurate predictions into the future.
arXiv Detail & Related papers (2020-04-28T17:06:38Z)
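As referenced in the Visual Looming entry above, one common way to obtain an expansion-rate signal from spatial derivatives of optical flow is the divergence of the flow field; for pure translation toward a fronto-parallel surface, the divergence is roughly 2 divided by the time to contact. The sketch below is a toy illustration of that relation on a synthetic flow field, not the paper's derivation, which additionally exploits surface normals and the full six-degree-of-freedom motion.

```python
# Toy illustration (not the paper's derivation): a looming signal from the
# divergence of a dense 2D optical-flow field. For pure translation toward a
# fronto-parallel surface, div(flow) ~ 2 / time-to-contact.
import numpy as np

H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W].astype(float)
cx, cy = W / 2.0, H / 2.0

# Synthetic expanding flow around the image center, as produced by an
# approaching surface; the expansion rate 'a' plays the role of 1/tau.
a = 0.05
u = a * (xs - cx)  # horizontal flow component (pixels/frame)
v = a * (ys - cy)  # vertical flow component (pixels/frame)

# Spatial derivatives of the flow give its divergence, the looming proxy.
divergence = np.gradient(u, axis=1) + np.gradient(v, axis=0)

looming = divergence.mean()        # ~ 2*a for this synthetic field
time_to_contact = 2.0 / looming    # frames until contact in the toy model
print(f"looming ~ {looming:.3f}, time-to-contact ~ {time_to_contact:.1f} frames")
```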
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.