Tactile-based Object Retrieval From Granular Media
- URL: http://arxiv.org/abs/2402.04536v2
- Date: Wed, 21 Feb 2024 17:31:22 GMT
- Title: Tactile-based Object Retrieval From Granular Media
- Authors: Jingxi Xu, Yinsen Jia, Dongxiao Yang, Patrick Meng, Xinyue Zhu, Zihan Guo, Shuran Song, Matei Ciocarlie
- Abstract summary: We introduce GEOTACT, a robotic manipulation method capable of retrieving objects buried in granular media.
We show that our problem formulation leads to the natural emergence of learned pushing behaviors that the manipulator uses to reduce uncertainty.
We also introduce a training curriculum that enables learning these behaviors in simulation, followed by zero-shot transfer to real hardware.
- Score: 17.340244278653785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce GEOTACT, a robotic manipulation method capable of retrieving
objects buried in granular media. This is a challenging task due to the need to
interact with granular media, and doing so based exclusively on tactile
feedback, since a buried object can be completely hidden from vision. Tactile
feedback is in itself challenging in this context, due to ubiquitous contact
with the surrounding media and the inherent noise level of the tactile
readings. To address these challenges, we use a learning method trained
end-to-end with simulated sensor noise. We show that our problem formulation
leads to the natural emergence of learned pushing behaviors that the
manipulator uses to reduce uncertainty and funnel the object to a stable grasp
despite spurious and noisy tactile readings. We also introduce a training
curriculum that enables learning these behaviors in simulation, followed by
zero-shot transfer to real hardware. To the best of our knowledge, GEOTACT is
the first method to reliably retrieve a number of different objects from a
granular environment, doing so on real hardware and with integrated tactile
sensing. Videos and additional information can be found at
https://jxu.ai/geotact.
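As a loose illustration of the training setup the abstract describes (end-to-end learning with simulated sensor noise, plus a curriculum before zero-shot transfer), the Python sketch below shows one plausible way noise injection and curriculum staging could be wired into a training loop. All names, APIs, and values here (add_tactile_noise, set_burial_depth, the staging constants) are assumptions made for exposition and are not taken from the GEOTACT implementation.

```python
import numpy as np

# Loose illustration of end-to-end training with simulated tactile noise and a
# curriculum; names, APIs, and constants are assumptions, not the GEOTACT code.

def add_tactile_noise(readings: np.ndarray,
                      noise_std: float = 0.05,
                      drop_prob: float = 0.10,
                      spurious_prob: float = 0.05) -> np.ndarray:
    """Corrupt clean simulated tactile readings the way a real sensor might."""
    noisy = readings + np.random.normal(0.0, noise_std, size=readings.shape)
    # Some taxels fail to report contact at all.
    dropped = np.random.rand(*readings.shape) < drop_prob
    noisy[dropped] = 0.0
    # Surrounding granular media can trigger spurious contacts.
    spurious = np.random.rand(*readings.shape) < spurious_prob
    noisy[spurious] = np.random.uniform(0.5, 1.0, size=int(spurious.sum()))
    return np.clip(noisy, 0.0, 1.0)

# One plausible curriculum: start with the object barely covered, then bury it
# deeper as training progresses (staging values are purely illustrative).
CURRICULUM = [
    {"burial_depth_cm": 0.0, "episodes": 2000},
    {"burial_depth_cm": 2.0, "episodes": 4000},
    {"burial_depth_cm": 5.0, "episodes": 8000},
]

def train(env, policy):
    """Train a policy in simulation, exposing it only to noisy tactile input."""
    for stage in CURRICULUM:
        env.set_burial_depth(stage["burial_depth_cm"])  # hypothetical simulator API
        for _ in range(stage["episodes"]):
            obs, done = env.reset(), False
            while not done:
                obs["tactile"] = add_tactile_noise(obs["tactile"])
                action = policy.act(obs)
                obs, reward, done, _ = env.step(action)
                policy.update(obs, reward, done)         # hypothetical learner API
```

In this sketch the policy only ever sees corrupted tactile observations during training, which is the general mechanism by which robustness to spurious and noisy readings can emerge before transfer to real hardware.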
Related papers
- DexTouch: Learning to Seek and Manipulate Objects with Tactile Dexterity [12.508332341279177]
We introduce a multi-finger robot system designed to search for and manipulate objects using the sense of touch.
To achieve this, binary tactile sensors are implemented on one side of the robot hand to minimize the Sim2Real gap.
We demonstrate that object search and manipulation using tactile sensors is possible even in an environment without vision information.
arXiv Detail & Related papers (2024-01-23T05:37:32Z)
- Learning Extrinsic Dexterity with Parameterized Manipulation Primitives [8.7221770019454]
We learn a sequence of actions that utilize the environment to change the object's pose.
Our approach controls the object's state by exploiting interactions between the object, the gripper, and the environment.
We evaluate our approach on picking box-shaped objects with various weights, shapes, and friction properties from a constrained table-top workspace.
arXiv Detail & Related papers (2023-10-26T21:28:23Z)
- Attention for Robot Touch: Tactile Saliency Prediction for Robust Sim-to-Real Tactile Control [12.302685367517718]
High-resolution tactile sensing can provide accurate information about local contact in contact-rich robotic tasks.
We study a new concept: tactile saliency for robot touch, inspired by the human touch attention mechanism from neuroscience.
arXiv Detail & Related papers (2023-07-26T21:19:45Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are now widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Visual-Tactile Multimodality for Following Deformable Linear Objects Using Reinforcement Learning [15.758583731036007]
We study the problem of using vision and tactile inputs together to complete the task of following deformable linear objects.
We create a Reinforcement Learning agent using different sensing modalities and investigate how its behaviour can be boosted.
Our experiments show that the use of both vision and tactile inputs, together with proprioception, allows the agent to complete the task in up to 92% of cases.
arXiv Detail & Related papers (2022-03-31T21:59:08Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- TANDEM: Learning Joint Exploration and Decision Making with Tactile Sensors [15.418884994244996]
We focus on the process of guiding tactile exploration, and its interplay with task-related decision making.
We propose TANDEM, an architecture to learn efficient exploration strategies in conjunction with decision making.
We demonstrate this method on a tactile object recognition task, where a robot equipped with a touch sensor must explore and identify an object from a known set based on tactile feedback alone.
arXiv Detail & Related papers (2022-03-01T23:55:09Z)
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step toward dynamics modeling of hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
- INVIGORATE: Interactive Visual Grounding and Grasping in Clutter [56.00554240240515]
INVIGORATE is a robot system that interacts with humans through natural language and grasps a specified object in clutter.
We train separate neural networks for object detection, for visual grounding, for question generation, and for OBR detection and grasping.
We build a partially observable Markov decision process (POMDP) that integrates the learned neural network modules.
arXiv Detail & Related papers (2021-08-25T07:35:21Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and an elastic property regulates the particles' deformation during contact (a loose mass-spring sketch of this idea appears just after this list).
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
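Picking up the forward reference in the EIP entry above: the snippet below is a loose, generic mass-spring illustration of sensor "particles" whose deformation is regulated by an elastic restoring term. It is not the authors' EIP formulation; every name and constant here is an assumption made for exposition.

```python
import numpy as np

# Generic mass-spring sketch of elastically regulated particle deformation;
# NOT the EIP method itself. Names and constants are illustrative assumptions.

def elastic_step(positions, rest_positions, contact_force, velocities,
                 stiffness=50.0, damping=0.9, dt=1e-3):
    """Advance N sensor 'particles' one step; all arrays have shape (N, 3).

    A Hooke-like term pulls particles back toward their rest pose, standing in
    for the elastic property that regulates deformation during contact.
    """
    restoring = -stiffness * (positions - rest_positions)   # elastic restoring force
    accel = restoring + contact_force                        # unit particle mass assumed
    velocities = damping * (velocities + dt * accel)
    positions = positions + dt * velocities
    return positions, velocities

# Example: 16 particles, with a constant downward contact force on half of them.
rest = np.zeros((16, 3))
pos, vel = rest.copy(), np.zeros_like(rest)
force = np.zeros((16, 3))
force[:8, 2] = -1.0
for _ in range(100):
    pos, vel = elastic_step(pos, rest, force, vel)
# The per-particle displacement (pos - rest) could then be projected into a
# tactile image for fusion with visual input, as the entry above describes.
```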
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.