ACROSS: A Deformation-Based Cross-Modal Representation for Robotic Tactile Perception
- URL: http://arxiv.org/abs/2411.08533v1
- Date: Wed, 13 Nov 2024 11:29:14 GMT
- Title: ACROSS: A Deformation-Based Cross-Modal Representation for Robotic Tactile Perception
- Authors: Wadhah Zai El Amri, Malte Kuhlmann, Nicolás Navarro-Guerrero
- Abstract summary: ACROSS is a framework for translating data between tactile sensors by exploiting sensor deformation information.
We demonstrate our approach to the most challenging problem of going from a low-dimensional tactile representation to a high-dimensional one.
- Score: 1.5566524830295307
- Abstract: Tactile perception is essential for human interaction with the environment and is becoming increasingly crucial in robotics. Tactile sensors like the BioTac mimic human fingertips and provide detailed interaction data. Despite its utility in applications like slip detection and object identification, this sensor is now deprecated, making many existing valuable datasets obsolete. However, recreating similar datasets with newer sensor technologies is both tedious and time-consuming. Therefore, it is crucial to adapt these existing datasets for use with new setups and modalities. In response, we introduce ACROSS, a novel framework for translating data between tactile sensors by exploiting sensor deformation information. We demonstrate the approach by translating BioTac signals into the DIGIT sensor. Our framework consists of first converting the input signals into 3D deformation meshes. We then transition from the 3D deformation mesh of one sensor to the mesh of another, and finally convert the generated 3D deformation mesh into the corresponding output space. We demonstrate our approach to the most challenging problem of going from a low-dimensional tactile representation to a high-dimensional one. In particular, we transfer the tactile signals of a BioTac sensor to DIGIT tactile images. Our approach enables the continued use of valuable datasets and the exchange of data between groups with different setups.
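The pipeline described above can be pictured as three learned stages chained together. The sketch below is an editorial illustration under assumed interfaces, not the authors' implementation: the module names, signal dimensionality, vertex counts, and image resolution are placeholders, and simple MLPs stand in for whatever networks the paper actually uses.
```python
# Illustrative three-stage pipeline (assumed interfaces, placeholder sizes):
# BioTac signal -> BioTac deformation mesh -> DIGIT deformation mesh -> DIGIT image.
import torch
import torch.nn as nn


class SignalToMesh(nn.Module):
    """Stage 1: low-dimensional tactile signal -> per-vertex 3D displacements."""
    def __init__(self, signal_dim=23, n_verts=1000):
        super().__init__()
        self.n_verts = n_verts
        self.net = nn.Sequential(nn.Linear(signal_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_verts * 3))

    def forward(self, signal):
        return self.net(signal).view(-1, self.n_verts, 3)


class MeshToMesh(nn.Module):
    """Stage 2: deformation field of one sensor's mesh -> the other sensor's mesh."""
    def __init__(self, src_verts=1000, dst_verts=1500):
        super().__init__()
        self.dst_verts = dst_verts
        self.net = nn.Sequential(nn.Linear(src_verts * 3, 512), nn.ReLU(),
                                 nn.Linear(512, dst_verts * 3))

    def forward(self, mesh):
        return self.net(mesh.flatten(1)).view(-1, self.dst_verts, 3)


class MeshToImage(nn.Module):
    """Stage 3: target-sensor deformation mesh -> tactile image in the output space."""
    def __init__(self, src_verts=1500, hw=(60, 80)):
        super().__init__()
        self.hw = hw
        self.net = nn.Sequential(nn.Linear(src_verts * 3, 512), nn.ReLU(),
                                 nn.Linear(512, hw[0] * hw[1] * 3))

    def forward(self, mesh):
        return self.net(mesh.flatten(1)).view(-1, 3, *self.hw)


def biotac_to_digit(signal):
    stage1, stage2, stage3 = SignalToMesh(), MeshToMesh(), MeshToImage()
    return stage3(stage2(stage1(signal)))


if __name__ == "__main__":
    images = biotac_to_digit(torch.randn(2, 23))  # two dummy BioTac readings
    print(images.shape)                           # torch.Size([2, 3, 60, 80])
```
The point is only the data flow: a low-dimensional signal is lifted to a sensor-agnostic deformation representation before being rendered into the high-dimensional output space.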
Related papers
- Transferring Tactile Data Across Sensors [1.5566524830295307]
This article introduces a novel method for translating data between tactile sensors.
We demonstrate the approach by translating BioTac signals into the DIGIT sensor.
Our framework consists of three steps: first, converting signal data into corresponding 3D deformation meshes; second, translating these 3D deformation meshes from one sensor to another; and third, generating output images.
arXiv Detail & Related papers (2024-10-18T09:15:47Z)
- Transferable Tactile Transformers for Representation Learning Across Diverse Sensors and Tasks [6.742250322226066]
T3 is a framework for tactile representation learning that scales across multiple sensors and tasks.
T3 pre-trained with FoTa achieved zero-shot transferability in certain sensor-task pairings.
T3 is also effective as a tactile encoder for long-horizon, contact-rich manipulation.
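As a reading aid, here is a minimal sketch of the shared-trunk pattern such multi-sensor, multi-task representation learning implies: sensor-specific encoders map heterogeneous inputs into a common token space, a shared trunk processes them, and task-specific heads decode. All dimensions, sensor names, and tasks below are illustrative assumptions, not details from the T3 paper.
```python
# Sketch of a shared-trunk, multi-sensor, multi-task tactile model
# (illustrative dimensions and names; not the T3 architecture itself).
import torch
import torch.nn as nn


class SharedTrunkTactileModel(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        # One lightweight encoder per sensor type (inputs differ in size and layout).
        self.encoders = nn.ModuleDict({
            "digit": nn.Linear(3 * 32 * 32, d_model),   # flattened tactile image patch
            "biotac": nn.Linear(23, d_model),           # electrode-style signal vector
        })
        # Shared trunk processing all sensors' embeddings with the same weights.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        # One decoder head per downstream task.
        self.heads = nn.ModuleDict({
            "pose": nn.Linear(d_model, 6),       # e.g. 6-DoF pose regression
            "material": nn.Linear(d_model, 10),  # e.g. 10-way classification
        })

    def forward(self, x, sensor, task):
        tokens = self.encoders[sensor](x).unsqueeze(1)  # (B, 1, d_model)
        features = self.trunk(tokens).mean(dim=1)       # pooled shared representation
        return self.heads[task](features)


model = SharedTrunkTactileModel()
digit_batch = torch.randn(4, 3 * 32 * 32)
print(model(digit_batch, sensor="digit", task="material").shape)  # (4, 10)
```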
arXiv Detail & Related papers (2024-06-19T15:39:27Z)
- UniTR: A Unified and Efficient Multi-Modal Transformer for Bird's-Eye-View Representation [113.35352122662752]
We present an efficient multi-modal backbone for outdoor 3D perception named UniTR.
UniTR processes a variety of modalities with unified modeling and shared parameters.
UniTR is also a fundamentally task-agnostic backbone that naturally supports different 3D perception tasks.
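A rough sketch of what "unified modeling with shared parameters" can look like in practice: tokens from different modalities receive a learned modality embedding and are processed jointly by a single shared transformer. The tokenizers, sizes, and names below are assumptions for illustration, not UniTR's actual architecture.
```python
# Sketch of unified multi-modal modeling with shared parameters: modality-tagged
# tokens are concatenated and processed by one shared, task-agnostic transformer.
import torch
import torch.nn as nn


class UnifiedMultiModalBackbone(nn.Module):
    def __init__(self, d_model=128, num_modalities=2):
        super().__init__()
        self.modality_embed = nn.Embedding(num_modalities, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)  # shared weights

    def forward(self, camera_tokens, lidar_tokens):
        # camera_tokens: (B, Nc, d_model); lidar_tokens: (B, Nl, d_model).
        cam = camera_tokens + self.modality_embed.weight[0]
        lid = lidar_tokens + self.modality_embed.weight[1]
        joint = torch.cat([cam, lid], dim=1)   # one joint token sequence
        return self.backbone(joint)            # features usable by different task heads


backbone = UnifiedMultiModalBackbone()
features = backbone(torch.randn(2, 100, 128), torch.randn(2, 200, 128))
print(features.shape)  # (2, 300, 128)
```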
arXiv Detail & Related papers (2023-08-15T12:13:44Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Transformer-Based Sensor Fusion for Autonomous Driving: A Survey [0.0]
A transformer-based detection head combined with a CNN-based feature encoder for extracting features from raw sensor data has emerged as one of the best-performing sensor-fusion 3D detection frameworks.
We briefly cover the basics of Vision Transformers (ViT) so that readers can easily follow the paper.
In conclusion, we summarize sensor-fusion trends to follow and to provoke future research.
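For readers unfamiliar with this pattern, the following is a hedged sketch of a CNN feature encoder paired with a DETR-style transformer detection head using learned object queries; channel counts, the query count, and the 7-parameter box encoding are assumptions, not taken from any surveyed method.
```python
# Sketch of the "CNN feature encoder + transformer detection head" pattern
# (DETR-style learned object queries); all sizes and the box encoding are assumed.
import torch
import torch.nn as nn


class CNNEncoderTransformerHead(nn.Module):
    def __init__(self, d_model=128, num_queries=50, num_classes=10):
        super().__init__()
        self.cnn = nn.Sequential(                       # toy feature encoder
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.class_head = nn.Linear(d_model, num_classes)
        self.box_head = nn.Linear(d_model, 7)           # e.g. x, y, z, w, l, h, yaw

    def forward(self, x):
        feat = self.cnn(x)                              # (B, C, H', W')
        memory = feat.flatten(2).transpose(1, 2)        # (B, H'*W', C) feature tokens
        queries = self.queries.unsqueeze(0).expand(x.size(0), -1, -1)
        decoded = self.decoder(queries, memory)         # queries cross-attend to features
        return self.class_head(decoded), self.box_head(decoded)


model = CNNEncoderTransformerHead()
logits, boxes = model(torch.randn(2, 3, 64, 64))
print(logits.shape, boxes.shape)  # (2, 50, 10) and (2, 50, 7)
```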
arXiv Detail & Related papers (2023-02-22T16:28:20Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
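A toy illustration of the underlying idea, not the EIP implementation: the sensor surface is a grid of particles, a rigid indenter imposes contact displacements, and an elastic (Laplacian) term regulates how the deformation spreads. Grid size, stiffness, and the indenter model are all assumed values.
```python
# Toy particle-based elastic contact: a spherical indenter pushes a particle grid
# down, and a Laplacian (elastic) term regulates how the deformation spreads.
import numpy as np


def simulate_contact(grid=32, stiffness=0.2, steps=50,
                     center=(0.5, 0.5), radius=0.2, depth=0.05):
    xs, ys = np.meshgrid(np.linspace(0, 1, grid), np.linspace(0, 1, grid))
    z = np.zeros((grid, grid))          # vertical displacement of each particle
    for _ in range(steps):
        # Contact constraint: particles under the indenter follow its surface
        # (a simple spherical-cap approximation).
        r2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
        inside = r2 < radius ** 2
        target = -depth * (1.0 - r2 / radius ** 2)
        z[inside] = np.minimum(z[inside], target[inside])
        # Elastic regularization: pull each particle toward the mean of its
        # neighbors (discrete Laplacian; periodic boundaries for simplicity).
        lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)
        z += stiffness * lap
    return z                            # deformation field, e.g. for later rendering


depth_map = simulate_contact()
print(depth_map.shape, depth_map.min())  # (32, 32) grid and the maximum indentation
```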
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- WaveGlove: Transformer-based hand gesture recognition using multiple inertial sensors [0.0]
Hand Gesture Recognition (HGR) based on inertial data has grown considerably in recent years.
In this work we explore the benefits of using multiple inertial sensors.
arXiv Detail & Related papers (2021-05-04T20:50:53Z)
- SensiX: A Platform for Collaborative Machine Learning on the Edge [69.1412199244903]
We present SensiX, a personal edge platform that stays between sensor data and sensing models.
We demonstrate its efficacy in developing motion and audio-based multi-device sensing systems.
Our evaluation shows that SensiX offers a 7-13% increase in overall accuracy and up to a 30% increase across different environment dynamics, at the expense of a 3 mW power overhead.
arXiv Detail & Related papers (2020-12-04T23:06:56Z)
- Proximity Sensing: Modeling and Understanding Noisy RSSI-BLE Signals and Other Mobile Sensor Data for Digital Contact Tracing [12.070047847431884]
Social distancing via efficient contact tracing has emerged as the primary health strategy to dampen the spread of COVID-19.
We present a novel system to estimate pairwise individual proximity via a joint model of Bluetooth Low Energy (BLE) signals with other on-device sensors.
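One common ingredient in such systems is inverting a log-distance path-loss model to turn noisy RSSI readings into a distance estimate; the snippet below sketches that step only, with illustrative calibration constants rather than parameters from the paper.
```python
# Sketch of one common step: invert a log-distance path-loss model to map a
# median-filtered RSSI value to a distance estimate (calibration values assumed).
import statistics


def estimate_distance_m(rssi_samples, rssi_at_1m=-59.0, path_loss_exponent=2.0):
    """RSSI(d) = RSSI(1 m) - 10 * n * log10(d)  =>  d = 10 ** ((RSSI(1 m) - RSSI) / (10 * n))."""
    rssi = statistics.median(rssi_samples)   # crude noise suppression
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exponent))


print(round(estimate_distance_m([-72, -70, -75, -69, -71]), 2))  # ~4 m for these readings
```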
arXiv Detail & Related papers (2020-09-04T03:01:52Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN) to enhance action recognition in the vision-sensor modality (videos).
SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
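The general cross-modal distillation setup this describes can be sketched with a standard softened-logit KL term plus cross-entropy; the temperature, weighting, and toy networks below are assumptions, not SAKDN's actual losses or architecture.
```python
# Standard cross-modal distillation recipe: the sensor teacher provides softened
# logits, the video student matches them (KL) while also fitting hard labels (CE).
import torch
import torch.nn as nn
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce


# Toy stand-ins: the teacher sees wearable-sensor features, the student sees video features.
teacher = nn.Linear(64, 8)     # 8 action classes
student = nn.Linear(256, 8)
sensor_feats, video_feats = torch.randn(4, 64), torch.randn(4, 256)
labels = torch.randint(0, 8, (4,))
loss = distillation_loss(student(video_feats), teacher(sensor_feats).detach(), labels)
loss.backward()                # gradients flow into the student only
print(float(loss))
```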
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields, or provide only low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)