GelFlow: Self-supervised Learning of Optical Flow for Vision-Based
Tactile Sensor Displacement Measurement
- URL: http://arxiv.org/abs/2309.06735v1
- Date: Wed, 13 Sep 2023 05:48:35 GMT
- Authors: Zhiyuan Zhang, Hua Yang, and Zhouping Yin
- Abstract summary: This study proposes a self-supervised optical flow method based on deep learning to achieve high accuracy in displacement measurement for vision-based tactile sensors.
We trained the proposed self-supervised network using an open-source dataset and compared it with traditional and deep learning-based optical flow methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-resolution multi-modality information acquired by vision-based tactile
sensors can support more dexterous manipulations for robot fingers. Optical
flow is low-level information directly obtained by vision-based tactile
sensors, which can be transformed into other modalities like force, geometry
and depth. Current vision-based tactile sensors employ optical flow methods from
OpenCV to estimate the deformation of markers in gels. However, these methods
are not precise enough to accurately measure the displacement of markers
during large elastic deformations of the gel, which can significantly degrade
the accuracy of downstream tasks. This study proposes a self-supervised optical
flow method based on deep learning to achieve high accuracy in displacement
measurement for vision-based tactile sensors. The proposed method employs a
coarse-to-fine strategy to handle large deformations by constructing a
multi-scale feature pyramid from the input image. To better deal with the
elastic deformation of the gel, the Helmholtz velocity decomposition
constraint and the elastic deformation constraint are adopted to
address the distortion rate and area change rate, respectively. A local flow
fusion module is designed to smooth the optical flow, taking into account the
prior knowledge of the blurring effect of gel deformation. We trained the
proposed self-supervised network using an open-source dataset and compared it
with traditional and deep learning-based optical flow methods. The results show
that the proposed method achieved the highest displacement measurement
accuracy, thereby demonstrating its potential for enabling more precise
measurement of downstream tasks using vision-based tactile sensors.
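The Helmholtz-decomposition constraints described above can be illustrated with a small sketch: for a dense 2-D flow field, the divergence measures the local area change rate and the curl measures the local distortion (rotation) rate. The following is a minimal NumPy illustration of these two penalty terms, not the paper's actual loss; the function name and the finite-difference scheme are our own assumptions for illustration.

```python
import numpy as np

def flow_regularizers(flow):
    """Divergence- and curl-based penalties for a dense 2-D flow field.

    flow: array of shape (H, W, 2) holding per-pixel (u, v) displacements.
    Returns (div_penalty, curl_penalty): mean squared divergence (area
    change rate) and mean squared curl (distortion rate). Illustrative
    only; the paper's actual regularizers may differ.
    """
    u, v = flow[..., 0], flow[..., 1]
    # Finite-difference partial derivatives (axis 0 = y, axis 1 = x).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)
    divergence = du_dx + dv_dy  # local area change rate
    curl = dv_dx - du_dy        # local rotation / distortion rate
    return float(np.mean(divergence ** 2)), float(np.mean(curl ** 2))
```

For example, a pure translation of the marker field yields zero for both penalties, while a rigid rotation yields zero divergence but nonzero curl, so the two terms separate area change from distortion as the abstract suggests.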
Related papers
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning (2024-04-05)
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy in which a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map.
To further improve performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
- Skin the sheep not only once: Reusing Various Depth Datasets to Drive the Learning of Optical Flow (2023-10-03)
We propose to leverage the geometric connection between optical flow estimation and stereo matching.
We turn monocular depth datasets into stereo ones via virtual disparity.
We also introduce virtual camera motion into the stereo data to produce additional flows along the vertical direction.
- The secret role of undesired physical effects in accurate shape sensing with eccentric FBGs (2022-10-28)
Eccentric fiber Bragg gratings (FBGs) are cheap and easy-to-fabricate shape sensors that are often interrogated with simple setups.
Here, we present a novel technique to overcome these limitations and provide accurate and precise shape estimation.
- Towards Scale-Aware, Robust, and Generalizable Unsupervised Monocular Depth Estimation by Integrating IMU Motion Dynamics (2022-07-11)
Unsupervised monocular depth and ego-motion estimation has drawn extensive research attention in recent years.
We propose DynaDepth, a novel scale-aware framework that integrates information from vision and IMU motion dynamics.
We validate the effectiveness of DynaDepth through extensive experiments and simulations on the KITTI and Make3D datasets.
- Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints (2022-03-29)
Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above it to provide high-resolution image observations of contacts.
This paper focuses on learning to synthesize the mesh of the elastomer from the image imprints acquired by vision-based tactile sensors.
A graph neural network (GNN) is introduced to learn the image-to-mesh mapping with supervised learning.
- Visual-tactile sensing for Real-time liquid Volume Estimation in Grasping (2022-02-23)
We propose a visuo-tactile model for real-time estimation of the liquid inside a deformable container.
We fuse two sensory modalities: raw visual inputs from an RGB camera and tactile cues from our specific tactile sensor.
The robotic system is controlled and adjusted in real time based on the estimation model.
- Sensor-Guided Optical Flow (2021-09-30)
This paper proposes a framework to guide an optical flow network with external cues to achieve superior accuracy on known or unseen domains.
We show how these cues can be obtained by combining depth measurements from active sensors with geometry and hand-crafted optical flow algorithms.
- Optical Flow Estimation from a Single Motion-blurred Image (2021-03-04)
Motion blur in an image can be of practical interest in fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
- Movement Tracking by Optical Flow Assisted Inertial Navigation (2020-06-24)
We show how a learning-based optical flow model can be combined with conventional inertial navigation.
We show how ideas from probabilistic deep learning can aid the robustness of the measurement updates.
The practical applicability is demonstrated on real-world data acquired by an iPad.
- Joint Unsupervised Learning of Optical Flow and Egomotion with Bi-Level Optimization (2020-02-26)
We exploit the global relationship between optical flow and camera motion using epipolar geometry.
We use implicit differentiation to enable back-propagation through the lower-level geometric optimization layer, independent of its implementation.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.