A soft thumb-sized vision-based sensor with accurate all-round force perception
- URL: http://arxiv.org/abs/2111.05934v1
- Date: Wed, 10 Nov 2021 20:46:23 GMT
- Title: A soft thumb-sized vision-based sensor with accurate all-round force perception
- Authors: Huanbo Sun, Katherine J. Kuchenbecker, Georg Martius
- Abstract summary: Vision-based haptic sensors have emerged as a promising approach to robotic touch due to affordable high-resolution cameras and successful computer-vision techniques.
We present a robust, soft, low-cost, vision-based, thumb-sized 3D haptic sensor named Insight.
- Score: 19.905154050561013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision-based haptic sensors have emerged as a promising approach to robotic
touch due to affordable high-resolution cameras and successful computer-vision
techniques. However, their physical design and the information they provide do
not yet meet the requirements of real applications. We present a robust, soft,
low-cost, vision-based, thumb-sized 3D haptic sensor named Insight: it
continually provides a directional force-distribution map over its entire
conical sensing surface. Constructed around an internal monocular camera, the
sensor has only a single layer of elastomer over-molded on a stiff frame to
guarantee sensitivity, robustness, and soft contact. Furthermore, Insight is
the first system to combine photometric stereo and structured light using a
collimator to detect the 3D deformation of its easily replaceable flexible
outer shell. The force information is inferred by a deep neural network that
maps images to the spatial distribution of 3D contact force (normal and shear).
Insight has an overall spatial resolution of 0.4 mm, force magnitude accuracy
around 0.03 N, and force direction accuracy around 5 degrees over a range of
0.03--2 N for numerous distinct contacts with varying contact area. The
presented hardware and software design concepts can be transferred to a wide
variety of robot parts.
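To make the image-to-force mapping concrete, here is a minimal PyTorch sketch of the idea: a convolutional encoder over the internal camera image followed by a head that predicts a 3D force vector (normal and shear) for each cell of the sensing surface. This is an illustrative sketch only, not Insight's actual architecture; the layer sizes, the 40x40 output grid, and all names are assumptions.

```python
# Illustrative sketch only: a CNN that maps an internal camera image to a
# dense map of 3D contact-force vectors. Layer sizes, the 40x40 grid, and
# all names are assumptions, not the Insight architecture.
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Camera image -> per-cell (Fx, Fy, Fz) force-distribution map."""
    def __init__(self, grid: int = 40):
        super().__init__()
        self.grid = grid
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # 3 output channels per cell: normal (Fz) and shear (Fx, Fy).
        self.head = nn.Linear(64 * 4 * 4, 3 * grid * grid)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        z = self.encoder(img).flatten(1)
        return self.head(z).view(-1, 3, self.grid, self.grid)

net = ForceMapNet()
frame = torch.rand(1, 3, 128, 128)   # stand-in for one camera frame
force_map = net(frame)               # shape: (1, 3, 40, 40)
```

In practice such a network would be trained against calibrated ground-truth measurements, e.g. from a probe mounted on a force/torque sensor swept over the sensing surface.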
Related papers
- Digitizing Touch with an Artificial Multimodal Fingertip [51.7029315337739]
Humans and robots both benefit from using touch to perceive and interact with the surrounding environment.
Here, we describe several conceptual and technological innovations to improve the digitization of touch.
These advances are embodied in an artificial finger-shaped sensor with advanced sensing capabilities.
arXiv Detail & Related papers (2024-11-04T18:38:50Z)
- FeelAnyForce: Estimating Contact Force Feedback from Tactile Sensation for Vision-Based Tactile Sensors [18.88211706267447]
We tackle the problem of estimating 3D contact forces using vision-based tactile sensors.
Our goal is to estimate contact forces over a large range (up to 15 N) on arbitrary objects while generalizing across different vision-based tactile sensors.
arXiv Detail & Related papers (2024-10-02T21:28:19Z)
- Sparse Points to Dense Clouds: Enhancing 3D Detection with Limited LiDAR Data [68.18735997052265]
We propose a balanced approach that combines the advantages of monocular and point cloud-based 3D detection.
Our method requires only a small number of 3D points, which can be obtained from a low-cost, low-resolution sensor.
The accuracy of 3D detection improves by 20% compared to the state-of-the-art monocular detection methods.
arXiv Detail & Related papers (2024-04-10T03:54:53Z)
- DensePose From WiFi [86.61881052177228]
We develop a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions.
Our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches.
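As a rough illustration of this signal-to-pose regression, the toy sketch below concatenates amplitude and phase features and regresses per-pixel body-part logits plus UV values. Every dimension and name here is an assumption, not the paper's architecture (the entry's arXiv link follows the sketch).

```python
# Toy sketch: WiFi amplitude/phase features -> per-pixel part logits + UV.
# All dimensions and names are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

N_FEAT, N_PARTS = 90, 24      # e.g. 30 subcarriers x 3 antennas; 24 regions
H, W = 48, 64                 # assumed output resolution

model = nn.Sequential(
    nn.Linear(2 * N_FEAT, 512), nn.ReLU(),        # [amplitude | phase]
    nn.Linear(512, (N_PARTS + 2) * H * W),        # part logits + (u, v)
)

csi = torch.rand(1, 2 * N_FEAT)                   # stand-in CSI frame
out = model(csi).view(1, N_PARTS + 2, H, W)
part_logits, uv = out[:, :N_PARTS], out[:, N_PARTS:]
```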
arXiv Detail & Related papers (2022-12-31T16:48:43Z)
- Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints [26.118805500471066]
Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above to provide high-resolution image observations of contacts.
This paper focuses on learning to synthesize the mesh of the elastomer based on the image imprints acquired from vision-based tactile sensors.
A graph neural network (GNN) is introduced to learn the image-to-mesh mappings with supervised learning.
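To make the image-to-mesh mapping concrete, the plain-PyTorch toy below runs one message-passing step over mesh vertices and regresses per-vertex displacements with a supervised loss. It is a sketch under assumed sizes, not the paper's GNN; the dummy graph and targets are placeholders (the entry's arXiv link follows the sketch).

```python
# Toy message-passing step for image-to-mesh regression: vertex features
# (assumed to come from an image encoder) are mixed with neighbor features
# and mapped to per-vertex displacements. Not the paper's GNN; all sizes
# and the dummy graph/targets are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeshGNNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        neigh = adj @ x        # row-normalized adjacency: neighbor average
        return torch.relu(self.lin(torch.cat([x, neigh], dim=-1)))

V, D = 100, 32                               # vertices, feature width
adj = torch.eye(V)                           # stand-in graph (self-loops only)
feat = torch.rand(V, D)                      # per-vertex image features
layer, head = MeshGNNLayer(D), nn.Linear(D, 3)
offsets = head(layer(feat, adj))             # per-vertex xyz displacement
loss = F.mse_loss(offsets, torch.zeros(V, 3))  # vs. ground-truth mesh (dummy)
```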
arXiv Detail & Related papers (2022-03-29T00:24:10Z)
- DenseTact: Optical Tactile Sensor for Dense Shape Reconstruction [0.0]
Vision-based tactile sensors have been widely used as rich tactile feedback has been correlated with increased performance in manipulation tasks.
Existing tactile sensor solutions with high resolution have limitations that include low accuracy, expensive components, or lack of scalability.
This paper proposes an inexpensive, scalable, and compact tactile sensor with high-resolution deformation modeling for 3D reconstruction of the sensor surface.
arXiv Detail & Related papers (2022-01-04T22:26:14Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
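A toy spring-damper update conveys the flavor of particle-based elastic simulation: each particle is pulled back toward its rest position while an external contact load deforms it. This is a generic sketch, not the EIP formulation; the stiffness, damping, and step size are arbitrary assumptions (the entry's arXiv link follows the sketch).

```python
# Generic particle spring-damper step, NOT the EIP formulation: particles
# are pulled back toward rest positions; a contact load deforms them.
# Stiffness k, damping c, and dt are arbitrary assumptions.
import numpy as np

def step(pos, vel, rest, k=50.0, c=2.0, dt=1e-3, contact=None):
    """Advance all particle states by one explicit-Euler step (unit mass)."""
    force = -k * (pos - rest) - c * vel      # elastic restoring + damping
    if contact is not None:
        force = force + contact              # external contact load
    vel = vel + dt * force
    return pos + dt * vel, vel

rest = np.zeros((64, 3))                     # rest positions of 64 particles
pos, vel = rest.copy(), np.zeros_like(rest)
push = np.zeros_like(rest)
push[32, 2] = -1.0                           # poke one particle downward
for _ in range(100):
    pos, vel = step(pos, vel, rest, contact=push)
print(pos[32])                               # deformed particle position
```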
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Active 3D Shape Reconstruction from Vision and Touch [66.08432412497443]
Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch.
In 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings.
We introduce a system composed of: 1) a haptic simulator leveraging high-spatial-resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile priors to guide the shape exploration; and 3) a set of data-driven solutions with either tactile or visuotactile priors.
arXiv Detail & Related papers (2021-07-20T15:56:52Z)
- GelSight Wedge: Measuring High-Resolution 3D Contact Geometry with a Compact Robot Finger [8.047951969722794]
The GelSight Wedge sensor is optimized to have a compact shape for robot fingers while achieving high-resolution 3D reconstruction.
We show the effectiveness and potential of the reconstructed 3D geometry for pose tracking in 3D space.
arXiv Detail & Related papers (2021-06-16T15:15:29Z)
- Monocular Depth Estimation for Soft Visuotactile Sensors [24.319343057803973]
We investigate the application of state-of-the-art monocular depth estimation to infer dense internal (tactile) depth maps directly from a single small internal IR imaging sensor.
We show that deep networks typically used for long-range depth estimation (1-100 m) can be effectively trained for precise predictions at a much shorter range (1-100 mm) inside a mostly textureless, deformable, fluid-filled sensor.
We propose a simple supervised learning process to train an object-agnostic network, requiring fewer than 10 random contact poses collected in under 10 seconds for a small set of diverse objects.
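That recipe can be sketched as a tiny supervised fine-tuning loop: a stand-in depth backbone trained on roughly ten (IR image, depth map) pairs at short range. The model, data, and scale here are placeholder assumptions, not the paper's pipeline (the entry's arXiv link follows the sketch).

```python
# Placeholder sketch of short-range supervised retraining, not the paper's
# pipeline: a stand-in depth backbone fine-tuned on ~10 labelled contacts.
import torch
import torch.nn as nn

depth_net = nn.Sequential(                   # stand-in for a depth backbone
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
opt = torch.optim.Adam(depth_net.parameters(), lr=1e-4)

# ~10 random contact poses, each an (IR image, short-range depth map) pair.
samples = [(torch.rand(1, 1, 64, 64), 0.1 * torch.rand(1, 1, 64, 64))
           for _ in range(10)]               # dummy data at a short scale

for ir, gt in samples:
    loss = nn.functional.l1_loss(depth_net(ir), gt)
    opt.zero_grad()
    loss.backward()
    opt.step()
```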
arXiv Detail & Related papers (2021-01-05T17:51:11Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are flat, have small sensitive fields, or provide only low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.