ReSkin: versatile, replaceable, lasting tactile skins
- URL: http://arxiv.org/abs/2111.00071v1
- Date: Fri, 29 Oct 2021 20:21:37 GMT
- Title: ReSkin: versatile, replaceable, lasting tactile skins
- Authors: Raunaq Bhirangi, Tess Hellebrekers, Carmel Majidi and Abhinav Gupta
- Abstract summary: ReSkin is a tactile soft sensor that leverages machine learning and magnetic sensing to offer a low-cost, diverse and compact solution for long-term use.
Our self-supervised learning algorithm enables finer performance enhancement with small, inexpensive data collection procedures.
- Score: 28.348982687106883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Soft sensors have attracted continued and growing interest in robotics, due to their
ability to enable both passive conformal contact from the material properties
and active contact data from the sensor properties. However, the same
properties of conformal contact result in faster deterioration of soft sensors
and larger variations in their response characteristics over time and across
samples, inhibiting their ability to be long-lasting and replaceable. ReSkin is
a tactile soft sensor that leverages machine learning and magnetic sensing to
offer a low-cost, diverse and compact solution for long-term use. Magnetic
sensing separates the electronic circuitry from the passive interface, making
it easier to replace interfaces as they wear out while allowing for a wide
variety of form factors. Machine learning allows us to learn sensor response
models that are robust to variations across fabrication and time, and our
self-supervised learning algorithm enables finer performance enhancement with
small, inexpensive data collection procedures. We believe that ReSkin opens the
door to more versatile, scalable and inexpensive tactile sensation modules than
existing alternatives.
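The abstract describes learning a sensor-response model that maps raw magnetometer readings to contact information, robust to variation across fabricated skins. The sketch below is purely illustrative and is not ReSkin's actual pipeline: the sensor layout (5 magnetometers x 3 axes), the synthetic data, and the use of plain ridge regression as the response model are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: fit a sensor-response model mapping raw magnetic
# field readings to a 2-D contact location. All shapes and the linear
# model are illustrative assumptions, not the paper's method.

rng = np.random.default_rng(0)

# Simulated calibration data: 5 magnetometers x 3 axes = 15 readings
# per sample, paired with a 2-D contact location (x, y).
n_samples, n_features = 200, 15
true_W = rng.normal(size=(n_features, 2))
X = rng.normal(size=(n_samples, n_features))             # B-field readings
y = X @ true_W + 0.01 * rng.normal(size=(n_samples, 2))  # contact (x, y)

# Ridge regression: a closed-form fit whose regularization helps keep
# the model stable when readings drift across samples or over time.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

pred = X @ W
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"calibration RMSE: {rmse:.4f}")
```

In practice the paper uses learned (neural) response models and a self-supervised procedure to adapt them to new skin instances with little labeled data; the closed-form fit above only illustrates the calibration idea.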
Related papers
- Condition-Aware Multimodal Fusion for Robust Semantic Perception of Driving Scenes [56.52618054240197]
We propose a novel, condition-aware multimodal fusion approach for robust semantic perception of driving scenes.
Our method, CAFuser, uses an RGB camera input to classify environmental conditions and generate a Condition Token that guides the fusion of multiple sensor modalities.
We set the new state of the art with CAFuser on the MUSES dataset with 59.7 PQ for multimodal panoptic segmentation and 78.2 mIoU for semantic segmentation, ranking first on the public benchmarks.
arXiv Detail & Related papers (2024-10-14T17:56:20Z)
- AnySkin: Plug-and-play Skin Sensing for Robotic Touch [15.126846563910814]
We address the challenges that impede the use of tactile sensing -- versatility, replaceability, and data reusability.
This work makes three key contributions: first, we introduce a streamlined fabrication process for creating an adhesive-free, durable and easily replaceable magnetic tactile sensor; second, we characterize slip detection and policy learning with the AnySkin sensor; and third, we demonstrate zero-shot generalization of models trained on one instance of AnySkin to new instances.
arXiv Detail & Related papers (2024-09-12T17:59:44Z)
- A model-free approach to fingertip slip and disturbance detection for grasp stability inference [0.0]
We propose a method for assessing grasp stability using tactile sensing.
We use highly sensitive uSkin tactile sensors mounted on an Allegro hand to test and validate our method.
arXiv Detail & Related papers (2023-11-22T09:04:26Z)
- Recognizing Complex Gestures on Minimalistic Knitted Sensors: Toward Real-World Interactive Systems [0.13048920509133805]
Our digitally-knitted capacitive active sensors can be manufactured at scale with little human intervention.
This work advances the capabilities of such sensors by creating the foundation for an interactive gesture recognition system.
arXiv Detail & Related papers (2023-03-18T04:57:46Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are being widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Learning to Detect Slip with Barometric Tactile Sensors and a Temporal Convolutional Neural Network [7.346580429118843]
We present a learning-based method to detect slip using barometric tactile sensors.
We train a temporal convolutional neural network to detect slip, achieving high detection accuracies.
We argue that barometric tactile sensing technology, combined with data-driven learning, is suitable for many manipulation tasks such as slip compensation.
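The entry above frames slip detection as classifying a pressure time series with a temporal convolutional network. The following toy sketch is not the paper's model: the single derivative-like kernel, the window, and the threshold are invented for illustration, standing in for the learned multi-layer network.

```python
import numpy as np

# Illustrative sketch only: one hand-crafted 1-D temporal convolution
# over a barometric pressure trace, thresholded to flag slip. The paper
# instead learns the filters end-to-end in a temporal convolutional net.

def detect_slip(pressure, threshold=0.5):
    """Flag slip when the filtered pressure signal exceeds `threshold`."""
    # Derivative-like kernel: slip shows up as a rapid pressure change.
    kernel = np.array([-1.0, 0.0, 1.0])
    response = np.convolve(pressure, kernel, mode="valid")
    return bool(np.max(np.abs(response)) > threshold)

# Stable grasp: slowly varying pressure, no slip flagged.
stable = np.linspace(1.0, 1.1, 50)
# Slip event: sudden pressure drop mid-trace.
slipping = np.concatenate([np.full(25, 1.0), np.full(25, 0.2)])

print(detect_slip(stable))    # expect False
print(detect_slip(slipping))  # expect True
```

A learned network replaces the fixed kernel with stacks of trained temporal filters, which is what lets it generalize across grip forces and object surfaces.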
arXiv Detail & Related papers (2022-02-19T08:21:56Z)
- Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
arXiv Detail & Related papers (2022-02-15T17:38:30Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Under Pressure: Learning to Detect Slip with Barometric Tactile Sensors [7.35805050004643]
We present a learning-based method to detect slip using barometric tactile sensors.
We are able to achieve slip detection accuracies of greater than 91%.
We show that barometric tactile sensing technology, combined with data-driven learning, is potentially suitable for many complex manipulation tasks.
arXiv Detail & Related papers (2021-03-24T19:29:03Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in vision-sensor modality (videos)
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields or only provide low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.