GelBelt: A Vision-based Tactile Sensor for Continuous Sensing of Large Surfaces
- URL: http://arxiv.org/abs/2501.06263v1
- Date: Thu, 09 Jan 2025 15:00:03 GMT
- Title: GelBelt: A Vision-based Tactile Sensor for Continuous Sensing of Large Surfaces
- Authors: Mohammad Amin Mirzaee, Hung-Jui Huang, Wenzhen Yuan
- Abstract summary: We introduce a vision-based tactile sensor designed for continuous surface sensing applications.
Our design uses an elastomeric belt and two wheels to continuously scan the target surface.
Results indicate that the proposed sensor can rapidly scan large-scale surfaces with high accuracy at speeds up to 45 mm/s.
- Score: 7.84516438523304
- Abstract: Scanning large-scale surfaces is widely demanded in surface reconstruction applications and in defect detection during industrial quality-control and maintenance stages. Traditional vision-based tactile sensors have shown promising performance in high-resolution shape reconstruction while suffering from limitations such as small sensing areas or susceptibility to damage when slid across surfaces, making them unsuitable for continuous sensing on large surfaces. To address these shortcomings, we introduce a novel vision-based tactile sensor designed for continuous surface sensing applications. Our design uses an elastomeric belt and two wheels to continuously scan the target surface. The proposed sensor showed promising results in both shape reconstruction and surface fusion, indicating its applicability. The dot product of the estimated and reference surface normal maps is reported over the sensing area and for different scanning speeds. Results indicate that the proposed sensor can rapidly scan large-scale surfaces with high accuracy at speeds up to 45 mm/s.
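The abstract's accuracy metric, the mean per-pixel dot product between estimated and reference surface normal maps, can be sketched in a few lines. This is a minimal illustration of the general metric, not the authors' code; the function name and array shapes are assumptions.

```python
import numpy as np

def normal_map_agreement(estimated, reference):
    """Mean per-pixel dot product between two normal maps.

    Both arrays have shape (H, W, 3); a value near 1.0 means the
    estimated normals closely agree with the reference normals.
    """
    # Normalize each pixel's normal vector to unit length.
    est = estimated / np.linalg.norm(estimated, axis=-1, keepdims=True)
    ref = reference / np.linalg.norm(reference, axis=-1, keepdims=True)
    # Per-pixel dot product, then average over the sensing area.
    return float(np.mean(np.sum(est * ref, axis=-1)))

# Identical maps (all normals pointing along +z) score exactly 1.0.
flat = np.zeros((4, 4, 3))
flat[..., 2] = 1.0
print(normal_map_agreement(flat, flat))  # → 1.0
```

Reported per scanning speed, this score summarizes how well the reconstructed geometry tracks ground truth across the whole sensing area.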
Related papers
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z) - Vision Guided MIMO Radar Beamforming for Enhanced Vital Signs Detection in Crowds [26.129503530877006]
We develop a novel dual-sensing system, in which a vision sensor is leveraged to guide digital beamforming in a radar.
The calibrated dual system achieves roughly two-centimeter precision in three-dimensional space within a field of view of $75^\circ$ by $65^\circ$ and for a range of two meters.
arXiv Detail & Related papers (2023-06-18T10:09:16Z) - On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks [61.74608497496841]
Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities.
This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction.
arXiv Detail & Related papers (2023-03-26T22:32:44Z) - Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for a lot of dexterous manipulation tasks.
Vision-based tactile sensors are being widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
arXiv Detail & Related papers (2022-12-31T16:48:43Z) - Deep Surface Reconstruction from Point Clouds with Visibility Information [66.05024551590812]
We present two simple ways to augment raw point clouds with visibility information, so it can directly be leveraged by surface reconstruction networks with minimal adaptation.
Our proposed modifications consistently improve the accuracy of generated surfaces as well as the generalization ability of the networks to unseen shape domains.
arXiv Detail & Related papers (2022-02-03T19:33:47Z) - DenseTact: Optical Tactile Sensor for Dense Shape Reconstruction [0.0]
Vision-based tactile sensors have been widely used as rich tactile feedback has been correlated with increased performance in manipulation tasks.
Existing tactile sensor solutions with high resolution have limitations that include low accuracy, expensive components, or lack of scalability.
This paper proposes an inexpensive, scalable, and compact tactile sensor with high-resolution surface deformation modeling for surface reconstruction of the 3D sensor surface.
arXiv Detail & Related papers (2022-01-04T22:26:14Z) - Tactile Image-to-Image Disentanglement of Contact Geometry from Motion-Induced Shear [30.404840177562754]
Robotic touch, particularly when using soft optical tactile sensors, suffers from distortion caused by motion-dependent shear.
We propose a supervised convolutional deep neural network model that learns to disentangle, in the latent space, the components of sensor deformations caused by contact geometry from those due to sliding-induced shear.
arXiv Detail & Related papers (2021-09-08T13:03:08Z) - Optical Inspection of the Silicon Micro-strip Sensors for the CBM Experiment employing Artificial Intelligence [0.0]
In this manuscript, we present the analysis of various sensor surface defects.
Defect detection was performed using Convolutional Deep Neural Networks (CDNNs).
Based on the total number of defects found on the sensor's surface, a method for estimating the sensor's overall quality grade and quality score was proposed.
arXiv Detail & Related papers (2021-07-16T05:48:22Z) - Monocular Depth Estimation for Soft Visuotactile Sensors [24.319343057803973]
We investigate the application of state-of-the-art monocular depth estimation to infer dense internal (tactile) depth maps directly from a single small internal IR imaging sensor.
We show that deep networks typically used for long-range depth estimation (1-100m) can be effectively trained for precise predictions at a much shorter range (1-100mm) inside a mostly textureless deformable fluid-filled sensor.
We propose a simple supervised learning process to train an object-agnostic network requiring less than 10 random poses in contact for less than 10 seconds for a small set of diverse objects.
arXiv Detail & Related papers (2021-01-05T17:51:11Z) - Deep Soft Procrustes for Markerless Volumetric Sensor Alignment [81.13055566952221]
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation based model and bridge the intermediate dense classification task with the targeted pose estimation one.
Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure.
arXiv Detail & Related papers (2020-03-23T10:51:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.