Sensor-Invariant Tactile Representation
- URL: http://arxiv.org/abs/2502.19638v2
- Date: Thu, 13 Mar 2025 01:45:38 GMT
- Title: Sensor-Invariant Tactile Representation
- Authors: Harsh Gupta, Yuchen Mo, Shengmiao Jin, Wenzhen Yuan
- Abstract summary: High-resolution tactile sensors have become critical for embodied perception and robotic manipulation. A key challenge in the field is the lack of transferability between sensors due to design and manufacturing variations. We introduce a novel method for extracting Sensor-Invariant Tactile Representations (SITR), enabling zero-shot transfer across optical tactile sensors.
- Score: 11.153753622913843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-resolution tactile sensors have become critical for embodied perception and robotic manipulation. However, a key challenge in the field is the lack of transferability between sensors due to design and manufacturing variations, which result in significant differences in tactile signals. This limitation hinders the ability to transfer models or knowledge learned from one sensor to another. To address this, we introduce a novel method for extracting Sensor-Invariant Tactile Representations (SITR), enabling zero-shot transfer across optical tactile sensors. Our approach utilizes a transformer-based architecture trained on a diverse dataset of simulated sensor designs, allowing it to generalize to new sensors in the real world with minimal calibration. Experimental results demonstrate the method's effectiveness across various tactile sensing applications, facilitating data and model transferability for future advancements in the field.
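As a rough illustration of the kind of architecture the abstract describes, the sketch below shows a ViT-style encoder that consumes tactile image patches together with a few per-sensor calibration images and returns a single sensor-invariant embedding. Layer sizes, the calibration-token scheme, and all names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SensorInvariantEncoder(nn.Module):
    """Illustrative ViT-style encoder: tactile image patches plus a few
    calibration-image tokens -> one sensor-invariant embedding.
    Hypothetical sizes; not the SITR reference implementation."""

    def __init__(self, patch=16, dim=256, depth=6, heads=8, n_calib=4):
        super().__init__()
        self.patchify = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.calib_proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.n_calib = n_calib

    def forward(self, tactile, calib):
        # tactile: (B, 3, H, W); calib: (B, n_calib, 3, H, W) per-sensor calibration shots
        # (positional embeddings omitted for brevity)
        B = tactile.shape[0]
        x = self.patchify(tactile).flatten(2).transpose(1, 2)          # (B, N, dim)
        c = self.calib_proj(calib.flatten(0, 1)).flatten(2).mean(-1)   # (B*n_calib, dim)
        c = c.view(B, self.n_calib, -1)
        tokens = torch.cat([self.cls.expand(B, -1, -1), c, x], dim=1)
        z = self.encoder(tokens)
        return z[:, 0]  # sensor-invariant representation
```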
Related papers
- AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors [11.506370451126378]
Visuo-tactile sensors aim to emulate human tactile perception, enabling robots to understand and manipulate objects. We introduce TacQuad, an aligned multi-modal tactile multi-sensor dataset from four different visuo-tactile sensors. We propose AnyTouch, a unified static-dynamic multi-sensor representation learning framework with a multi-level structure.
arXiv Detail & Related papers (2025-02-15T08:33:25Z)
- MSSIDD: A Benchmark for Multi-Sensor Denoising [55.41612200877861]
We introduce a new benchmark, the Multi-Sensor SIDD dataset, which is the first raw-domain dataset designed to evaluate the sensor transferability of denoising models.
We propose a sensor consistency training framework that enables denoising models to learn the sensor-invariant features.
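A minimal sketch of what such a consistency objective could look like, assuming the denoiser also exposes an intermediate feature map (a hypothetical interface, not the paper's actual framework):

```python
import torch.nn.functional as F

def sensor_consistency_loss(feats_a, feats_b):
    """Illustrative consistency term: features of the same scene captured
    under two different sensor configurations should agree."""
    return F.mse_loss(feats_a, feats_b)

def training_step(model, raw_a, raw_b, clean):
    # raw_a, raw_b: the same scene as seen by two sensors; clean: ground truth.
    # Assumes the model returns (denoised image, intermediate features).
    denoised_a, feats_a = model(raw_a)
    denoised_b, feats_b = model(raw_b)
    recon = F.l1_loss(denoised_a, clean) + F.l1_loss(denoised_b, clean)
    return recon + 0.1 * sensor_consistency_loss(feats_a, feats_b)
```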
arXiv Detail & Related papers (2024-11-18T13:32:59Z)
- ACROSS: A Deformation-Based Cross-Modal Representation for Robotic Tactile Perception [1.5566524830295307]
ACROSS is a framework for translating data between tactile sensors by exploiting sensor deformation information. We transfer the tactile signals of a BioTac sensor to DIGIT tactile images.
arXiv Detail & Related papers (2024-11-13T11:29:14Z)
- Transferring Tactile Data Across Sensors [1.5566524830295307]
This article introduces a novel method for translating data between tactile sensors.
We demonstrate the approach by translating BioTac signals into the DIGIT sensor.
Our framework consists of three steps: first, converting signal data into corresponding 3D deformation meshes; second, translating these 3D deformation meshes from one sensor to another; and third, generating output images.
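A skeleton of the three steps, with placeholder function names (hypothetical, not the authors' code):

```python
def signals_to_mesh(biotac_signals):
    """Step 1: map raw BioTac electrode/pressure signals to a 3D deformation mesh."""
    ...

def transfer_mesh(source_mesh):
    """Step 2: translate the source sensor's deformation mesh to the target sensor."""
    ...

def mesh_to_image(digit_mesh):
    """Step 3: render the target (DIGIT) deformation mesh as a tactile image."""
    ...

def biotac_to_digit(biotac_signals):
    return mesh_to_image(transfer_mesh(signals_to_mesh(biotac_signals)))
```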
arXiv Detail & Related papers (2024-10-18T09:15:47Z)
- Data-Based Design of Multi-Model Inferential Sensors [0.0]
The nonlinear character of industrial processes is usually the main obstacle to designing simple linear inferential sensors.
We propose two novel approaches for the design of multi-model inferential sensors.
The results show substantial improvements over the state-of-the-art design techniques for single-/multi-model inferential sensors.
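One simple way to realize a multi-model inferential sensor is to partition the operating region and fit a local linear model per partition; the scikit-learn sketch below shows only that basic structure (the paper's design procedure is more involved):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

def fit_multi_model(X, y, n_models=3):
    # Cluster the operating region, then fit one linear model per cluster.
    clusters = KMeans(n_clusters=n_models, n_init=10).fit(X)
    models = [LinearRegression().fit(X[clusters.labels_ == k], y[clusters.labels_ == k])
              for k in range(n_models)]
    return clusters, models

def predict(clusters, models, x_new):
    # Route a new sample to the nearest cluster's local model.
    k = clusters.predict(x_new.reshape(1, -1))[0]
    return models[k].predict(x_new.reshape(1, -1))[0]
```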
arXiv Detail & Related papers (2023-08-05T12:55:15Z)
- Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
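A minimal sketch of a Variational Information Bottleneck layer of the kind the summary refers to, with illustrative dimensions (not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class VIBHead(nn.Module):
    """Compress CNN features into a stochastic latent z and penalize
    its KL divergence to a unit Gaussian."""

    def __init__(self, feat_dim=512, z_dim=64):
        super().__init__()
        self.mu = nn.Linear(feat_dim, z_dim)
        self.logvar = nn.Linear(feat_dim, z_dim)

    def forward(self, feats):
        mu, logvar = self.mu(feats), self.logvar(feats)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)        # reparameterization
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
        return z, kl

# total loss = imitation_loss(policy(z), expert_action) + beta * kl   (beta: small weight)
```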
arXiv Detail & Related papers (2022-02-15T17:38:30Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
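As a loose, toy analogue of particle-based elastic contact (not the EIP formulation itself), one can displace contacted particles with an indenter and apply a linear elastic restoring force toward their rest positions:

```python
import numpy as np

def elastic_contact_step(particles, rest, contact_mask, indenter_depth,
                         stiffness=5.0, dt=0.01):
    """Toy NumPy step: particles (N, 3) relax toward their rest positions under a
    linear elastic force, while contacted particles are pushed down by the indenter.
    A crude stand-in used only to illustrate the idea of regulated particle deformation."""
    disp = particles - rest
    force = -stiffness * disp                       # elastic restoring force
    particles = particles + dt * force
    particles[contact_mask, 2] = np.minimum(        # contacted particles follow the indenter
        particles[contact_mask, 2], -indenter_depth)
    return particles
```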
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Real-time detection of uncalibrated sensors using Neural Networks [62.997667081978825]
An online machine-learning-based uncalibration detector for temperature, humidity, and pressure sensors was developed.
The solution integrates an Artificial Neural Network as main component which learns from the behavior of the sensors under calibrated conditions.
The obtained results show that the proposed solution is able to detect uncalibrations for deviation values of 0.25 degrees, 1% RH and 1.5 Pa, respectively.
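A hypothetical sketch of such an online check, in which a small model trained under calibrated conditions predicts the next (temperature, humidity, pressure) sample and residuals above the reported thresholds are flagged (the interface and names are assumptions, not the paper's code):

```python
import numpy as np

def detect_uncalibration(model, window, thresholds=(0.25, 1.0, 1.5)):
    """window: (T, 3) array of recent [temperature, humidity, pressure] readings.
    The model (e.g. a small Keras-style network) predicts the latest sample from
    the preceding ones; per-sensor residuals above the thresholds
    (deg C, %RH, Pa) are flagged as possible uncalibration."""
    expected = model.predict(window[:-1][None, ...])[0]   # predicted latest sample
    residual = np.abs(window[-1] - expected)
    return residual > np.asarray(thresholds)              # boolean flag per sensor
```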
arXiv Detail & Related papers (2021-02-02T15:44:39Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
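At the core of any such teacher-student transfer is a standard distillation loss; the sketch below shows only that generic piece (SAKDN's semantics-aware, adaptive weighting is not reproduced here):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Generic soft-label distillation (wearable-sensor teacher -> RGB-video student)."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```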
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields, or provide only low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)