Enhance Vision-based Tactile Sensors via Dynamic Illumination and Image Fusion
- URL: http://arxiv.org/abs/2504.00017v1
- Date: Thu, 27 Mar 2025 17:19:57 GMT
- Title: Enhance Vision-based Tactile Sensors via Dynamic Illumination and Image Fusion
- Authors: Artemii Redkin, Zdravko Dugonjic, Mike Lambeta, Roberto Calandra
- Abstract summary: Vision-based tactile sensors use structured light to measure deformation in their elastomeric interface. Until now, vision-based tactile sensors have been using a single, static pattern of structured light tuned to the specific form factor of the sensor. We propose to capture multiple measurements, each with a different illumination pattern, and then fuse them together to obtain a single, higher-quality measurement.
- Score: 4.1392041344598045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision-based tactile sensors use structured light to measure deformation in their elastomeric interface. Until now, vision-based tactile sensors such as DIGIT and GelSight have been using a single, static pattern of structured light tuned to the specific form factor of the sensor. In this work, we investigate the effectiveness of dynamic illumination patterns, in conjunction with image fusion techniques, to improve the quality of sensing of vision-based tactile sensors. Specifically, we propose to capture multiple measurements, each with a different illumination pattern, and then fuse them together to obtain a single, higher-quality measurement. Experimental results demonstrate that this type of dynamic illumination yields significant improvements in image contrast, sharpness, and background difference. This discovery opens the possibility of retroactively improving the sensing quality of existing vision-based tactile sensors with a simple software update, and for new hardware designs capable of fully exploiting dynamic illumination.
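The fusion step lends itself to a short illustration. The sketch below is a minimal, hypothetical take on the idea: capture one frame per illumination pattern and blend them per pixel, weighting each frame by its local contrast. The paper does not specify this particular fusion rule, and `capture_with_pattern` is a placeholder for whatever sensor API drives the illumination.

```python
# Minimal sketch (not the paper's exact pipeline): fuse frames captured
# under different illumination patterns with a per-pixel weighted blend
# driven by local contrast. The fusion rule here is an assumption.
import numpy as np
import cv2  # pip install opencv-python

def local_contrast(img: np.ndarray, ksize: int = 7) -> np.ndarray:
    """Per-pixel contrast estimate: local standard deviation of intensity."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    mean = cv2.blur(gray, (ksize, ksize))
    sq_mean = cv2.blur(gray * gray, (ksize, ksize))
    return np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))

def fuse_captures(frames: list) -> np.ndarray:
    """Fuse BGR frames taken under different illumination patterns.

    Each output pixel blends the input frames, weighted by how much
    local contrast each frame carries at that pixel.
    """
    weights = np.stack([local_contrast(f) for f in frames])      # (N, H, W)
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-6)
    stack = np.stack([f.astype(np.float32) for f in frames])     # (N, H, W, 3)
    fused = (stack * weights[..., None]).sum(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Hypothetical usage, one capture per illumination pattern:
# frames = [capture_with_pattern(p) for p in patterns]
# fused = fuse_captures(frames)
```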
Related papers
- A Modularized Design Approach for GelSight Family of Vision-based Tactile Sensors [16.018573469799986]
The GelSight family of vision-based tactile sensors has proven to be effective for multiple robot perception and manipulation tasks.
In this paper, we formulate the GelSight sensor design process as a systematic and objective-driven design problem.
We implement the method with an interactive and easy-to-use toolbox called OptiSense Studio.
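As a purely illustrative aside on what "objective-driven design" can look like in code, the sketch below grid-searches two invented design parameters against an invented objective; none of the parameter names or trade-offs come from the paper or from OptiSense Studio.

```python
# Purely illustrative: score hypothetical design parameters against a
# made-up objective. Nothing here is from the paper or OptiSense Studio.
from itertools import product

def design_objective(gel_thickness_mm: float, led_angle_deg: float) -> float:
    # Hypothetical trade-off: thinner gel -> sharper imprints,
    # steeper LED angle -> better shading contrast, with diminishing returns.
    sharpness = 1.0 / (0.5 + gel_thickness_mm)
    contrast = led_angle_deg / (30.0 + led_angle_deg)
    return sharpness * contrast

candidates = product([1.0, 2.0, 3.0], [15.0, 30.0, 45.0])
best = max(candidates, key=lambda c: design_objective(*c))
print("best (thickness_mm, led_angle_deg):", best)
```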
arXiv Detail & Related papers (2025-04-20T21:07:41Z)
- Sensor-Invariant Tactile Representation [11.153753622913843]
High-resolution tactile sensors have become critical for embodied perception and robotic manipulation.
A key challenge in the field is the lack of transferability between sensors due to design and manufacturing variations.
We introduce a novel method for extracting Sensor-Invariant Tactile Representations (SITR), enabling zero-shot transfer across optical tactile sensors.
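A minimal sketch of the general idea behind sensor-invariant representations, assuming paired captures of the same contact from two different sensors: pull their embeddings together with an alignment loss. This is a generic stand-in, not the paper's actual SITR objective or architecture.

```python
# Generic alignment sketch: embeddings of the *same contact* captured by
# *different sensors* should match. Not the paper's actual SITR method.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(  # toy stand-in for a real tactile image encoder
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)

def alignment_loss(img_sensor_a, img_sensor_b):
    """Same contact, two sensors -> embeddings should coincide."""
    za = F.normalize(encoder(img_sensor_a), dim=-1)
    zb = F.normalize(encoder(img_sensor_b), dim=-1)
    return (1 - (za * zb).sum(dim=-1)).mean()  # cosine-distance alignment

# loss = alignment_loss(batch_from_gelsight, batch_from_digit)  # hypothetical
```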
arXiv Detail & Related papers (2025-02-27T00:12:50Z)
- AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors [11.506370451126378]
Visuo-tactile sensors aim to emulate human tactile perception, enabling robots to understand and manipulate objects.
We introduce TacQuad, an aligned multi-modal tactile multi-sensor dataset from four different visuo-tactile sensors.
We propose AnyTouch, a unified static-dynamic multi-sensor representation learning framework with a multi-level structure.
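One way to picture a unified static-dynamic representation, as a hedged sketch rather than AnyTouch's actual multi-level design: share a single frame encoder across sensors, and encode dynamic clips by pooling per-frame features over time.

```python
# Generic static-dynamic sketch: one shared frame encoder; clips are
# encoded by temporal pooling. Not AnyTouch's actual architecture.
import torch
import torch.nn as nn

class StaticDynamicEncoder(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.frame_enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )

    def encode_static(self, frame):            # (B, 3, H, W)
        return self.frame_enc(frame)

    def encode_dynamic(self, clip):            # (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.frame_enc(clip.flatten(0, 1))   # (B*T, dim)
        return feats.view(b, t, -1).mean(dim=1)      # temporal mean pool

enc = StaticDynamicEncoder()
z_static = enc.encode_static(torch.randn(2, 3, 64, 64))
z_dynamic = enc.encode_dynamic(torch.randn(2, 8, 3, 64, 64))
```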
arXiv Detail & Related papers (2025-02-15T08:33:25Z)
- MSSIDD: A Benchmark for Multi-Sensor Denoising [55.41612200877861]
We introduce a new benchmark, the Multi-Sensor SIDD dataset, which is the first raw-domain dataset designed to evaluate the sensor transferability of denoising models.
We propose a sensor consistency training framework that enables denoising models to learn the sensor-invariant features.
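A plausible reading of "sensor consistency training", sketched below under the assumption of paired raw captures (here 4-channel packed Bayer) of the same scene from two sensors: penalize disagreement between their denoised outputs alongside the usual reconstruction loss. The loss weighting is arbitrary and not from the benchmark.

```python
# Hedged consistency-training sketch: same scene, two sensors, matching
# denoised outputs. A plausible reading of the abstract, not its exact loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

denoiser = nn.Sequential(  # toy stand-in for a raw-domain denoising network
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 4, 3, padding=1),
)

def consistency_step(noisy_a, noisy_b, clean):
    """noisy_a / noisy_b: same scene from two sensors; clean: ground truth."""
    out_a, out_b = denoiser(noisy_a), denoiser(noisy_b)
    recon = F.l1_loss(out_a, clean) + F.l1_loss(out_b, clean)
    consistency = F.l1_loss(out_a, out_b)  # sensor-invariance term
    return recon + 0.1 * consistency       # 0.1 is an arbitrary weight
```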
arXiv Detail & Related papers (2024-11-18T13:32:59Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
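"Quantized priors" suggests a VQ-VAE-style learned codebook, where each latent vector is snapped to its nearest codeword so the decoder restores texture and color from clean priors. The lookup below sketches that mechanism generically; the codebook size and dimensions are arbitrary, and this is not CodeEnhance's actual model.

```python
# Generic VQ-style codebook lookup: snap each latent to its nearest
# codeword. Sketch of the mechanism, not CodeEnhance's model.
import torch

def quantize(latents: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """latents: (N, D), codebook: (K, D) -> nearest codebook rows (N, D)."""
    # Squared distances between every latent and every codeword:
    # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2
    d = (latents.pow(2).sum(1, keepdim=True)
         - 2 * latents @ codebook.t()
         + codebook.pow(2).sum(1))
    idx = d.argmin(dim=1)
    return codebook[idx]

codebook = torch.randn(512, 64)     # K=512 codewords of dim 64 (arbitrary)
z = torch.randn(16, 64)
z_q = quantize(z, codebook)
```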
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system effectively exploits the signals of light-weight ToF sensors and achieves competitive results.
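The reason one implicit field can serve both sensors is that the same volume-rendering weights that composite color also yield an expected depth, which the ToF measurement can supervise. The sketch below is generic NeRF-style rendering math, not the paper's exact formulation.

```python
# Generic volume rendering: the weights that composite color also give
# an expected depth, so RGB and ToF can supervise one field.
import torch

def render(densities, colors, z_vals):
    """densities: (R, S), colors: (R, S, 3), z_vals: (R, S) sample depths."""
    deltas = torch.diff(z_vals, dim=-1, append=z_vals[..., -1:] + 1e10)
    alpha = 1 - torch.exp(-densities * deltas)
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[..., :1]), 1 - alpha + 1e-10], -1), -1)[..., :-1]
    w = alpha * trans                              # (R, S) rendering weights
    rgb = (w[..., None] * colors).sum(-2)          # supervise with RGB camera
    depth = (w * z_vals).sum(-1)                   # supervise with ToF sensor
    return rgb, depth
```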
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints [26.118805500471066]
Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above to provide high-resolution image observations of contacts.
This paper focuses on learning to synthesize the mesh of the elastomer based on the image imprints acquired from vision-based tactile sensors.
A graph neural network (GNN) is introduced to learn the image-to-mesh mappings with supervised learning.
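A simplified sketch of the supervised image-to-mesh setup: a message-passing layer updates per-vertex features from neighbors, and a small head predicts vertex displacements that can be regressed against ground-truth meshes. The layer below is a generic GNN, not the paper's architecture.

```python
# Generic message-passing layer over mesh vertices with a displacement
# head. Illustrative only; not the paper's architecture.
import torch
import torch.nn as nn

class MeshGNNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)

    def forward(self, x, edges):
        """x: (V, dim) vertex features; edges: (E, 2) directed index pairs."""
        src, dst = edges[:, 0], edges[:, 1]
        m = torch.relu(self.msg(torch.cat([x[src], x[dst]], dim=-1)))
        agg = torch.zeros_like(x).index_add_(0, dst, m)  # sum messages per vertex
        return x + agg

layer = MeshGNNLayer(32)
head = nn.Linear(32, 3)                 # per-vertex 3D displacement
x = torch.randn(100, 32)                # e.g., image features sampled per vertex
edges = torch.randint(0, 100, (400, 2))
disp = head(layer(x, edges))            # train with e.g. L2 to ground-truth mesh
```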
arXiv Detail & Related papers (2022-03-29T00:24:10Z)
- Controllable Data Augmentation Through Deep Relighting [75.96144853354362]
We explore how to augment a varied set of image datasets through relighting so as to improve the ability of existing models to be invariant to illumination changes.
We develop a tool, based on an encoder-decoder network, that is able to quickly generate multiple variations of the illumination of various input scenes.
We demonstrate that by training models on datasets that have been augmented with our pipeline, it is possible to achieve higher performance on localization benchmarks.
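A hedged sketch of the augmentation loop: an encoder-decoder takes an image plus a target lighting code and emits a relit variant, so each training sample can be expanded into several lighting conditions. The toy network below stands in for the paper's relighting model.

```python
# Toy encoder-decoder relighter: inject a lighting code into the
# bottleneck, decode a relit variant. Stand-in for the paper's model.
import torch
import torch.nn as nn

class Relighter(nn.Module):
    def __init__(self, light_dim: int = 8):
        super().__init__()
        self.enc = nn.Conv2d(3, 16, 3, stride=2, padding=1)
        self.light = nn.Linear(light_dim, 16)
        self.dec = nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1)

    def forward(self, img, light_code):
        h = torch.relu(self.enc(img))
        h = h + self.light(light_code)[:, :, None, None]  # inject lighting
        return torch.sigmoid(self.dec(h))

relight = Relighter()
img = torch.rand(1, 3, 64, 64)
augmented = [relight(img, torch.randn(1, 8)) for _ in range(4)]  # 4 variants
```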
arXiv Detail & Related papers (2021-10-26T20:02:51Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
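A toy version of the stated idea, far simpler than EIP itself: model the sensor surface as particles tied to rest positions by springs, so elastic restoring forces regulate how far a contact can deform them. The indenter geometry and all constants below are invented.

```python
# Toy elastic-particle sketch: springs pull particles back to rest while
# a hypothetical indenter pushes them away. Far simpler than EIP.
import numpy as np

rest = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                            np.linspace(0, 1, 20)), -1).reshape(-1, 2)
pos, vel = rest.copy(), np.zeros_like(rest)
k, damping, dt = 50.0, 2.0, 0.01            # spring constant, damping, step

def contact_force(p):
    """Hypothetical indenter: pushes particles out of a disk at (0.5, 0.5)."""
    d = p - np.array([0.5, 0.5])
    r = np.linalg.norm(d, axis=1, keepdims=True)
    pen = np.maximum(0.15 - r, 0)            # penetration depth into the disk
    return 200.0 * pen * d / (r + 1e-9)

for _ in range(500):                          # semi-implicit Euler integration
    f = k * (rest - pos) + contact_force(pos) - damping * vel
    vel += dt * f
    pos += dt * vel
```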
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
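That setup maps onto plain temperature-scaled knowledge distillation, sketched below with the teachers' logits averaged into a single soft target. SAKDN's semantics-aware adaptive weighting is not reproduced here.

```python
# Plain temperature-scaled KD: wearable-sensor teachers provide soft
# targets for an RGB-video student. Not SAKDN's semantics-aware variant.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits_list, labels, T=4.0, alpha=0.5):
    teacher = torch.stack(teacher_logits_list).mean(0)   # fuse teacher modalities
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```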
arXiv Detail & Related papers (2020-09-01T03:38:31Z)