MimicTouch: Learning Human's Control Strategy with Multi-Modal Tactile
Feedback
- URL: http://arxiv.org/abs/2310.16917v2
- Date: Wed, 1 Nov 2023 22:42:20 GMT
- Title: MimicTouch: Learning Human's Control Strategy with Multi-Modal Tactile
Feedback
- Authors: Kelin Yu, Yunhai Han, Matthew Zhu, Ye Zhao
- Abstract summary: "MimicTouch" is a novel framework that mimics humans' tactile-guided control strategies.
We employ online residual reinforcement learning on the physical robot.
This work will pave the way for a broader spectrum of tactile-guided robotic applications.
- Score: 2.8582031759986775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In robotics and artificial intelligence, the integration of tactile
processing is becoming increasingly pivotal, especially in learning to execute
intricate tasks like alignment and insertion. However, existing works on
tactile methods for insertion tasks predominantly rely on robot teleoperation
data and reinforcement learning, and thus do not exploit the rich insights of
the control strategies humans employ under tactile feedback. Meanwhile,
methodologies for learning from humans predominantly leverage visual feedback,
overlooking the invaluable tactile cues that humans inherently rely on to
complete complex manipulation tasks.
Addressing this gap, we introduce "MimicTouch", a novel framework that mimics
the tactile-guided control strategies of human demonstrators. In this
framework, we first collect multi-modal tactile datasets from human
demonstrators, capturing the tactile-guided control strategies they use to
complete the task. We then train robots through imitation learning on the
multi-modal sensor data and retargeted human motions. To further mitigate the
embodiment gap between humans and robots, we employ online residual
reinforcement learning on the physical robot. Through comprehensive
experiments, we validate that MimicTouch safely transfers a latent policy
learned through imitation learning from human to robot. This ongoing work will
pave the way for a broader spectrum of tactile-guided robotic applications.
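The residual reinforcement learning step described above can be illustrated with a minimal sketch: the imitation-learned base policy is kept frozen, and an online-learned residual adds a small, bounded correction to each action. The class and function names below (`base_policy`, `ResidualPolicy`) are hypothetical stand-ins, as the paper's abstract does not specify the implementation.

```python
import numpy as np

def base_policy(obs):
    # Hypothetical stand-in for the frozen latent policy learned
    # via imitation from human tactile demonstrations.
    return np.tanh(obs[:3])

class ResidualPolicy:
    """Minimal residual-RL sketch: the executed action is the frozen
    base action plus a learned correction, clipped to a small bound so
    exploration stays safe on the physical robot."""

    def __init__(self, act_dim, scale=0.1):
        self.w = np.zeros((act_dim, act_dim))  # residual weights, updated online
        self.scale = scale                     # bound on residual magnitude

    def act(self, obs):
        a_base = base_policy(obs)
        residual = np.clip(self.w @ a_base, -self.scale, self.scale)
        return a_base + residual
```

With zero residual weights the robot exactly reproduces the imitation policy; online RL then only has to learn the small correction that closes the human-robot embodiment gap, rather than the full control strategy.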
Related papers
- Learning Variable Compliance Control From a Few Demonstrations for Bimanual Robot with Haptic Feedback Teleoperation System [5.497832119577795]
This work introduces a novel system to enhance the teaching of dexterous, contact-rich manipulations to rigid robots.
It incorporates a teleoperation interface utilizing Virtual Reality (VR) controllers, designed to provide an intuitive and cost-effective method for task demonstration with haptic feedback.
Our methods have been validated across various complex contact-rich manipulation tasks using single-arm and bimanual robot setups in simulated and real-world environments.
arXiv Detail & Related papers (2024-06-21T09:03:37Z)
- DexTouch: Learning to Seek and Manipulate Objects with Tactile Dexterity [12.508332341279177]
We introduce a multi-finger robot system designed to search for and manipulate objects using the sense of touch.
To achieve this, binary tactile sensors are implemented on one side of the robot hand to minimize the Sim2Real gap.
We demonstrate that object search and manipulation using tactile sensors is possible even in an environment without vision information.
arXiv Detail & Related papers (2024-01-23T05:37:32Z)
- Robot Synesthesia: In-Hand Manipulation with Visuotactile Sensing [16.570647733532173]
We introduce a system that leverages visual and tactile sensory inputs to enable dexterous in-hand manipulation.
Robot Synesthesia is a novel point cloud-based tactile representation inspired by human tactile-visual synesthesia.
arXiv Detail & Related papers (2023-12-04T12:35:43Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
arXiv Detail & Related papers (2022-12-07T18:55:53Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase its effectiveness across four gesture-based navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Co-GAIL: Learning Diverse Strategies for Human-Robot Collaboration [51.268988527778276]
We present a method for learning a human-robot collaboration policy from human-human collaboration demonstrations.
Our method co-optimizes a human policy and a robot policy in an interactive learning process.
arXiv Detail & Related papers (2021-08-13T03:14:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.