TwinTac: A Wide-Range, Highly Sensitive Tactile Sensor with Real-to-Sim Digital Twin Sensor Model
- URL: http://arxiv.org/abs/2509.10063v1
- Date: Fri, 12 Sep 2025 08:51:28 GMT
- Title: TwinTac: A Wide-Range, Highly Sensitive Tactile Sensor with Real-to-Sim Digital Twin Sensor Model
- Authors: Xiyan Huang, Zhe Xu, Chenxi Xiao
- Abstract summary: We present TwinTac, a system that combines the design of a physical tactile sensor with its digital twin model. Our hardware sensor is designed for high sensitivity and a wide measurement range. We show that simulation data generated by our digital twin sensor can effectively augment real-world data, leading to improved accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robot skill acquisition processes driven by reinforcement learning often rely on simulations to efficiently generate large-scale interaction data. However, the absence of simulation models for tactile sensors has hindered the use of tactile sensing in such skill learning processes, limiting the development of effective policies driven by tactile perception. To bridge this gap, we present TwinTac, a system that combines the design of a physical tactile sensor with its digital twin model. Our hardware sensor is designed for high sensitivity and a wide measurement range, enabling high quality sensing data essential for object interaction tasks. Building upon the hardware sensor, we develop the digital twin model using a real-to-sim approach. This involves collecting synchronized cross-domain data, including finite element method results and the physical sensor's outputs, and then training neural networks to map simulated data to real sensor responses. Through experimental evaluation, we characterized the sensitivity of the physical sensor and demonstrated the consistency of the digital twin in replicating the physical sensor's output. Furthermore, by conducting an object classification task, we showed that simulation data generated by our digital twin sensor can effectively augment real-world data, leading to improved accuracy. These results highlight TwinTac's potential to bridge the gap in cross-domain learning tasks.
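The abstract's real-to-sim pipeline pairs synchronized FEM outputs with physical sensor readings, then fits a model mapping simulation to real responses. A minimal sketch of that data flow is below; all names, dimensions, and the synthetic data are illustrative assumptions, and a simple least-squares fit stands in for the neural networks the paper trains.

```python
import numpy as np

# Hypothetical shapes (illustrative only): each FEM simulation yields a
# 16-dim feature vector, and the physical sensor reports an 8-dim response.
rng = np.random.default_rng(0)
n_samples, fem_dim, sensor_dim = 200, 16, 8

# Synchronized cross-domain data: simulated FEM outputs paired with real
# sensor readings (synthesized here so the sketch is self-contained).
fem_features = rng.normal(size=(n_samples, fem_dim))
true_map = rng.normal(size=(fem_dim, sensor_dim))
real_responses = fem_features @ true_map + 0.01 * rng.normal(size=(n_samples, sensor_dim))

# Fit a linear sim-to-real transfer model by least squares; the paper uses
# neural networks, but the training data flow is the same.
W, *_ = np.linalg.lstsq(fem_features, real_responses, rcond=None)

def twin_sensor(fem_out: np.ndarray) -> np.ndarray:
    """Digital twin: run FEM in simulation, then map to a realistic sensor output."""
    return fem_out @ W

pred = twin_sensor(fem_features)
rmse = float(np.sqrt(np.mean((pred - real_responses) ** 2)))
```

Once fitted, `twin_sensor` can label arbitrary simulated contacts with realistic sensor responses, which is what enables the simulation-based data augmentation evaluated in the object classification task.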
Related papers
- Taccel: Scaling Up Vision-based Tactile Robotics via High-performance GPU Simulation [50.34179054785646]
We present Taccel, a high-performance simulation platform that integrates IPC and ABD to model robots, tactile sensors, and objects with both accuracy and unprecedented speed. Taccel provides precise physics simulation and realistic tactile signals while supporting flexible robot-sensor configurations through user-friendly APIs. These capabilities position Taccel as a powerful tool for scaling up tactile robotics research and development.
arXiv Detail & Related papers (2025-04-17T12:57:11Z)
- Tacchi 2.0: A Low Computational Cost and Comprehensive Dynamic Contact Simulator for Vision-based Tactile Sensors [24.17644617805162]
The limited durability of vision-based tactile sensors significantly increases the cost of tactile information acquisition. We introduce Tacchi, a low-computational-cost vision-based tactile simulator. Tacchi 2.0 can simulate tactile images, marker motion images, and joint images under different motion states such as pressing, slipping, and rotating.
arXiv Detail & Related papers (2025-03-12T06:34:12Z)
- Sensor-Invariant Tactile Representation [11.153753622913843]
High-resolution tactile sensors have become critical for embodied perception and robotic manipulation. A key challenge in the field is the lack of transferability between sensors due to design and manufacturing variations. We introduce a novel method for extracting Sensor-Invariant Tactile Representations (SITR), enabling zero-shot transfer across optical tactile sensors.
arXiv Detail & Related papers (2025-02-27T00:12:50Z)
- AnyTouch: Learning Unified Static-Dynamic Representation across Multiple Visuo-tactile Sensors [11.506370451126378]
Visuo-tactile sensors aim to emulate human tactile perception, enabling robots to understand and manipulate objects. We introduce TacQuad, an aligned multi-modal tactile multi-sensor dataset from four different visuo-tactile sensors. We propose AnyTouch, a unified static-dynamic multi-sensor representation learning framework with a multi-level structure.
arXiv Detail & Related papers (2025-02-15T08:33:25Z)
- Data Sensor Fusion In Digital Twin Technology For Enhanced Capabilities In A Home Environment [0.0]
This paper investigates the integration of data sensor fusion in digital twin technology to bolster home environment capabilities. The research integrates cyber-physical systems, IoT, AI, and robotics to fortify digital twin capabilities.
arXiv Detail & Related papers (2025-02-13T01:14:30Z)
- MSSIDD: A Benchmark for Multi-Sensor Denoising [55.41612200877861]
We introduce a new benchmark, the Multi-Sensor SIDD dataset, which is the first raw-domain dataset designed to evaluate the sensor transferability of denoising models.
We propose a sensor consistency training framework that enables denoising models to learn the sensor-invariant features.
arXiv Detail & Related papers (2024-11-18T13:32:59Z)
- Learning Online Multi-Sensor Depth Fusion [100.84519175539378]
SenFuNet is a depth fusion approach that learns sensor-specific noise and outlier statistics.
We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets.
arXiv Detail & Related papers (2022-04-07T10:45:32Z)
- Learning to Detect Slip with Barometric Tactile Sensors and a Temporal Convolutional Neural Network [7.346580429118843]
We present a learning-based method to detect slip using barometric tactile sensors.
We train a temporal convolutional neural network to detect slip, achieving high detection accuracies.
We argue that barometric tactile sensing technology, combined with data-driven learning, is suitable for many manipulation tasks such as slip compensation.
arXiv Detail & Related papers (2022-02-19T08:21:56Z)
- Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
arXiv Detail & Related papers (2022-02-15T17:38:30Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes? [57.366931129764815]
We collect more than 9,000 grasping trials using a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger. Our experimental results indicate that incorporating tactile readings substantially improves grasping performance.
arXiv Detail & Related papers (2017-10-16T05:32:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.