MimicTouch: Learning Human's Control Strategy with Multi-Modal Tactile Feedback
- URL: http://arxiv.org/abs/2310.16917v2
- Date: Wed, 1 Nov 2023 22:42:20 GMT
- Title: MimicTouch: Learning Human's Control Strategy with Multi-Modal Tactile Feedback
- Authors: Kelin Yu, Yunhai Han, Matthew Zhu, Ye Zhao
- Abstract summary: "MimicTouch" is a novel framework that mimics the human tactile-guided control strategy.
We employ online residual reinforcement learning on the physical robot.
This work will pave the way for a broader spectrum of tactile-guided robotic applications.
- Score: 2.8582031759986775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In robotics and artificial intelligence, the integration of tactile processing is becoming increasingly pivotal, especially for learning to execute intricate tasks such as alignment and insertion. However, existing work on tactile methods for insertion tasks predominantly relies on robot teleoperation data and reinforcement learning, which do not exploit the rich insights offered by human control strategies guided by tactile feedback. Methods that do learn from humans predominantly leverage visual feedback, often overlooking the invaluable tactile feedback that humans inherently employ to complete complex manipulations. Addressing this gap, we introduce "MimicTouch", a novel framework that mimics the human tactile-guided control strategy. In this framework, we first collect multi-modal tactile datasets from human demonstrators, capturing the tactile-guided control strategies they use to complete the task. We then instruct the robot through imitation learning on the multi-modal sensor data and retargeted human motions. To further mitigate the embodiment gap between humans and robots, we employ online residual reinforcement learning on the physical robot. Through comprehensive experiments, we validate the safety of MimicTouch in transferring a latent policy, learned through imitation from humans, to the robot. This ongoing work will pave the way for a broader spectrum of tactile-guided robotic applications.
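The abstract describes a pipeline of imitation learning from human tactile demonstrations followed by online residual reinforcement learning on the robot. As a minimal illustrative sketch only (the network sizes, class names, and residual scale below are assumptions for illustration, not details from the paper), the residual idea of adding a small learned correction to a frozen imitation-learned action can be written as:

```python
# Illustrative sketch of residual RL on top of a frozen imitation-learned base
# policy. All architectures, names, and hyperparameters here are assumptions,
# not the paper's actual implementation.
import torch
import torch.nn as nn


class BasePolicy(nn.Module):
    """Stands in for the latent policy learned by imitation from human
    multi-modal tactile demonstrations (architecture assumed)."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class ResidualPolicy(nn.Module):
    """Small correction network trained online with RL; its output is added
    to the frozen base action to help close the human-robot embodiment gap."""
    def __init__(self, obs_dim: int, act_dim: int, scale: float = 0.1):
        super().__init__()
        self.scale = scale  # keep corrections small for safety on hardware
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.scale * self.net(obs)


def combined_action(base: BasePolicy, residual: ResidualPolicy,
                    obs: torch.Tensor) -> torch.Tensor:
    """Final command = imitation-learned action + learned residual correction."""
    with torch.no_grad():             # the base policy stays frozen
        a_base = base(obs)
    return a_base + residual(obs)     # only the residual receives RL gradients
```

In this pattern only the residual network is updated during online learning; keeping the base policy frozen and bounding the correction (tanh output with a small scale) is what makes exploration on physical hardware comparatively safe.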
Related papers
- Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper [7.618517580705364]
We present a portable, lightweight gripper with integrated tactile sensors.
We propose a cross-modal representation learning framework that integrates visual and tactile signals.
We validate our approach on fine-grained tasks such as test tube insertion and pipette-based fluid transfer.
arXiv Detail & Related papers (2025-07-20T17:53:59Z) - Feel the Force: Contact-Driven Learning from Humans [52.36160086934298]
Controlling fine-grained forces during manipulation remains a core challenge in robotics.
We present FeelTheForce, a robot learning system that models human tactile behavior to learn force-sensitive manipulation.
Our approach grounds robust low-level force control in scalable human supervision, achieving a 77% success rate across 5 force-sensitive manipulation tasks.
arXiv Detail & Related papers (2025-06-02T17:57:52Z) - PolyTouch: A Robust Multi-Modal Tactile Sensor for Contact-rich Manipulation Using Tactile-Diffusion Policies [4.6090500060386805]
PolyTouch is a novel robot finger that integrates camera-based tactile sensing, acoustic sensing, and peripheral visual sensing into a single design.
Experiments demonstrate a 20-fold increase in lifespan over commercial tactile sensors, with a design that is both easy to manufacture and scalable.
arXiv Detail & Related papers (2025-04-27T19:50:31Z) - Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data [28.36623343236893]
We introduce ManiWAV: an 'ear-in-hand' data collection device to collect in-the-wild human demonstrations with synchronous audio and visual feedback.
We show that our system can generalize to unseen in-the-wild environments by learning from diverse in-the-wild human demonstrations.
arXiv Detail & Related papers (2024-06-27T18:06:38Z) - Learning Variable Compliance Control From a Few Demonstrations for Bimanual Robot with Haptic Feedback Teleoperation System [5.497832119577795]
Performing dexterous, contact-rich manipulation tasks with rigid robots is a significant challenge in robotics.
Compliance control schemes have been introduced to mitigate these issues by controlling forces via external sensors.
Learning from Demonstrations offers an intuitive alternative, allowing robots to learn manipulations through observed actions.
arXiv Detail & Related papers (2024-06-21T09:03:37Z) - Learning Visuotactile Skills with Two Multifingered Hands [80.99370364907278]
We explore learning from human demonstrations using a bimanual system with multifingered hands and visuotactile data.
Our results mark a promising step forward in bimanual multifingered manipulation from visuotactile data.
arXiv Detail & Related papers (2024-04-25T17:59:41Z) - DexTouch: Learning to Seek and Manipulate Objects with Tactile Dexterity [11.450027373581019]
We introduce a multi-finger robot system designed to manipulate objects using the sense of touch, without relying on vision.
For tasks that mimic daily life, the robot uses its sense of touch to manipulate randomly placed objects in the dark.
arXiv Detail & Related papers (2024-01-23T05:37:32Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We report experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z) - TANDEM: Learning Joint Exploration and Decision Making with Tactile Sensors [15.418884994244996]
We focus on the process of guiding tactile exploration, and its interplay with task-related decision making.
We propose TANDEM, an architecture to learn efficient exploration strategies in conjunction with decision making.
We demonstrate this method on a tactile object recognition task, where a robot equipped with a touch sensor must explore and identify an object from a known set based on tactile feedback alone.
arXiv Detail & Related papers (2022-03-01T23:55:09Z) - What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z) - Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.