A Modularized Design Approach for GelSight Family of Vision-based Tactile Sensors
- URL: http://arxiv.org/abs/2504.14739v1
- Date: Sun, 20 Apr 2025 21:07:41 GMT
- Title: A Modularized Design Approach for GelSight Family of Vision-based Tactile Sensors
- Authors: Arpit Agarwal, Mohammad Amin Mirzaee, Xiping Sun, Wenzhen Yuan
- Abstract summary: The GelSight family of vision-based tactile sensors has proven effective for multiple robot perception and manipulation tasks. In this paper, we formulate the GelSight sensor design process as a systematic and objective-driven design problem. We implement the method with an interactive and easy-to-use toolbox called OptiSense Studio.
- Score: 16.018573469799986
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The GelSight family of vision-based tactile sensors has proven effective for multiple robot perception and manipulation tasks. These sensors use an internal optical system and an embedded camera to capture the deformation of the soft sensor surface, from which the high-resolution geometry of objects in contact is inferred. However, customizing the sensors for different robot hands requires a tedious trial-and-error process to redesign the optical system. In this paper, we formulate the GelSight sensor design process as a systematic, objective-driven design problem and perform the design optimization with a physically accurate optical simulation. The method is based on modularizing and parameterizing the sensor's optical components and designing four generalizable objective functions to evaluate the sensor. We implement the method in an interactive and easy-to-use toolbox called OptiSense Studio. With the toolbox, non-experts can quickly optimize a sensor design in both forward and inverse modes by following our predefined modules and steps. We demonstrate our system on four different GelSight sensors by quickly optimizing their initial designs in simulation and transferring them to the real sensors.
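The abstract's objective-driven inverse design can be sketched at a toy level: parameterize the optical modules, score a candidate design with a weighted sum of objective functions, and search the parameter space for the minimum. All names, objectives, and numbers below are illustrative placeholders, not the paper's actual four objectives or the OptiSense Studio API.

```python
# Minimal sketch of objective-driven inverse sensor design (hypothetical).
# Two toy parameters stand in for modularized optical components.

def brightness_uniformity(led_angle):
    # Toy objective: penalize deviation from a nominal 45-degree LED angle.
    return abs(led_angle - 45.0) / 45.0

def field_of_view_coverage(cam_dist):
    # Toy objective: penalize deviation from a nominal 20 mm camera distance.
    return abs(cam_dist - 20.0) / 20.0

def total_cost(params, weights=(0.5, 0.5)):
    """Weighted sum of the per-module objective scores (lower is better)."""
    led_angle, cam_dist = params
    return (weights[0] * brightness_uniformity(led_angle)
            + weights[1] * field_of_view_coverage(cam_dist))

def grid_search():
    """Exhaustive search over a coarse design grid; returns best design."""
    best, best_cost = None, float("inf")
    for led_angle in range(30, 61, 5):      # degrees
        for cam_dist in range(10, 31, 2):   # millimetres
            c = total_cost((led_angle, cam_dist))
            if c < best_cost:
                best, best_cost = (led_angle, cam_dist), c
    return best, best_cost
```

In practice the paper replaces these toy scores with physically accurate optical simulation, but the loop structure (parameterize, evaluate, search) is the same.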
Related papers
- Enhance Vision-based Tactile Sensors via Dynamic Illumination and Image Fusion [4.1392041344598045]
Vision-based tactile sensors use structured light to measure deformation of their elastomeric interface. Until now, vision-based tactile sensors have used a single, static pattern of structured light tuned to the specific form factor of the sensor. We propose to capture multiple measurements, each with a different illumination pattern, and then fuse them into a single, higher-quality measurement.
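The multi-illumination fusion idea can be sketched as a pixel-wise combination of the per-pattern frames. A plain average is used here as an assumed stand-in; the paper's actual fusion method may differ.

```python
# Sketch: fuse frames captured under different illumination patterns
# by pixel-wise averaging (illustrative, not the authors' pipeline).

def fuse_measurements(frames):
    """Pixel-wise mean of equally sized 2-D frames (lists of rows)."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]
```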
arXiv Detail & Related papers (2025-03-27T17:19:57Z)
- Sensor-Invariant Tactile Representation [11.153753622913843]
High-resolution tactile sensors have become critical for embodied perception and robotic manipulation. A key challenge in the field is the lack of transferability between sensors due to design and manufacturing variations. We introduce a novel method for extracting Sensor-Invariant Tactile Representations (SITR), enabling zero-shot transfer across optical tactile sensors.
arXiv Detail & Related papers (2025-02-27T00:12:50Z)
- MSSIDD: A Benchmark for Multi-Sensor Denoising [55.41612200877861]
We introduce a new benchmark, the Multi-Sensor SIDD dataset, which is the first raw-domain dataset designed to evaluate the sensor transferability of denoising models.
We propose a sensor consistency training framework that enables denoising models to learn the sensor-invariant features.
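Sensor-consistency training of this kind can be sketched as penalizing the distance between features extracted from two sensors' captures of the same scene, pushing the model toward sensor-invariant features. The feature extractor and loss below are hypothetical simplifications, not the benchmark's actual framework.

```python
# Toy sensor-consistency loss (illustrative). A gain-normalized pixel
# vector stands in for a learned feature extractor.

def features(image, gain):
    """Toy per-sensor feature: gain-normalized pixel values."""
    return [p / gain for p in image]

def consistency_loss(img_a, gain_a, img_b, gain_b):
    """Mean squared distance between the two sensors' feature vectors."""
    fa, fb = features(img_a, gain_a), features(img_b, gain_b)
    return sum((x - y) ** 2 for x, y in zip(fa, fb)) / len(fa)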
arXiv Detail & Related papers (2024-11-18T13:32:59Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Angle Sensitive Pixels for Lensless Imaging on Spherical Sensors [22.329417756084094]
OrbCam is a lensless architecture for imaging with spherical sensors.
We show that the diversity of pixel orientations on a curved surface is sufficient to improve the conditioning of the mapping between the scene and the sensor.
arXiv Detail & Related papers (2023-06-28T06:28:53Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automatizes the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- DenseTact: Optical Tactile Sensor for Dense Shape Reconstruction [0.0]
Vision-based tactile sensors have been widely used as rich tactile feedback has been correlated with increased performance in manipulation tasks.
Existing tactile sensor solutions with high resolution have limitations that include low accuracy, expensive components, or lack of scalability.
This paper proposes an inexpensive, scalable, and compact tactile sensor with high-resolution modeling of the deformation of its 3D sensing surface, enabling surface reconstruction.
arXiv Detail & Related papers (2022-01-04T22:26:14Z)
- Perception Entropy: A Metric for Multiple Sensors Configuration Evaluation and Design [17.979248163548288]
A well-designed sensor configuration significantly improves the performance upper bound of the perception system.
We propose a novel method based on conditional entropy in Bayesian theory to evaluate the sensor configurations containing both cameras and LiDARs.
To the best of our knowledge, this is the first method to tackle the multi-sensor configuration problem for autonomous vehicles.
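The conditional-entropy evaluation can be sketched directly: compute H(S | Z), the remaining uncertainty about the target state S given the sensor observations Z, from a joint probability table. A lower value indicates a more informative sensor configuration. The table and variable names are illustrative; the paper's actual state and observation models are far richer.

```python
# Sketch: conditional entropy H(S|Z) in bits from a joint distribution
# joint[s][z] = P(S=s, Z=z). Illustrative, not the paper's exact formulation.
import math

def conditional_entropy(joint):
    """Return H(S|Z) = -sum_{s,z} P(s,z) * log2(P(s,z) / P(z))."""
    n_z = len(joint[0])
    p_z = [sum(row[z] for row in joint) for z in range(n_z)]  # marginal P(Z)
    h = 0.0
    for row in joint:
        for z, p_sz in enumerate(row):
            if p_sz > 0:
                h -= p_sz * math.log2(p_sz / p_z[z])
    return h
```

For a perfectly informative configuration the observation determines the state and H(S|Z) = 0; for an uninformative one, H(S|Z) equals the prior entropy of S.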
arXiv Detail & Related papers (2021-04-14T03:52:57Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields or only provide low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
- Redesigning SLAM for Arbitrary Multi-Camera Systems [51.81798192085111]
Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly.
In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups.
We adapt a state-of-the-art visual-inertial odometry with these modifications, and experimental results show that the modified pipeline can adapt to a wide range of camera setups.
arXiv Detail & Related papers (2020-03-04T11:44:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.