Learning Variable Compliance Control From a Few Demonstrations for Bimanual Robot with Haptic Feedback Teleoperation System
- URL: http://arxiv.org/abs/2406.14990v2
- Date: Thu, 26 Sep 2024 05:51:20 GMT
- Title: Learning Variable Compliance Control From a Few Demonstrations for Bimanual Robot with Haptic Feedback Teleoperation System
- Authors: Tatsuya Kamijo, Cristian C. Beltran-Hernandez, Masashi Hamaya,
- Abstract summary: Automating dexterous, contact-rich manipulation tasks with rigid robots is a significant challenge in robotics.
Compliance control schemes have been introduced to mitigate these issues by controlling forces via external sensors.
Learning from Demonstrations offers an intuitive alternative, allowing robots to learn manipulations through observed actions.
- Score: 5.497832119577795
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automating dexterous, contact-rich manipulation tasks using rigid robots is a significant challenge in robotics. Rigid robots, defined by their actuation through position commands, face issues of excessive contact forces due to their inability to adapt to contact with the environment, potentially causing damage. While compliance control schemes have been introduced to mitigate these issues by controlling forces via external sensors, they are hampered by the need for fine-tuning task-specific controller parameters. Learning from Demonstrations (LfD) offers an intuitive alternative, allowing robots to learn manipulations through observed actions. In this work, we introduce a novel system to enhance the teaching of dexterous, contact-rich manipulations to rigid robots. Our system is twofold: firstly, it incorporates a teleoperation interface utilizing Virtual Reality (VR) controllers, designed to provide an intuitive and cost-effective method for task demonstration with haptic feedback. Secondly, we present Comp-ACT (Compliance Control via Action Chunking with Transformers), a method that learns variable compliance control from only a few demonstrations. Our methods have been validated across various complex contact-rich manipulation tasks using single-arm and bimanual robot setups in simulated and real-world environments, demonstrating the effectiveness of our system in teaching robots dexterous manipulations with enhanced adaptability and safety. Code available at: https://github.com/omron-sinicx/CompACT
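The core idea behind variable compliance can be made concrete with a standard Cartesian impedance law, in which the commanded force is proportional to the pose error through a per-axis stiffness that a policy can modulate over time. The sketch below is a generic illustration only, not the authors' Comp-ACT implementation; the function name, per-axis interface, and critically damped gain choice are all assumptions:

```python
import math

def impedance_force(x, x_dot, x_des, stiffness, damping_ratio=1.0, mass=1.0):
    """Per-axis Cartesian impedance law: f_i = k_i * (x_des_i - x_i) - d_i * v_i.

    Damping is chosen critically damped per axis: d_i = 2 * zeta * sqrt(k_i * m).
    `stiffness` can vary per axis and per control step, which is the kind of
    'variable compliance' a learned policy would predict alongside pose targets.
    """
    forces = []
    for xi, vi, xd, k in zip(x, x_dot, x_des, stiffness):
        d = 2.0 * damping_ratio * math.sqrt(k * mass)
        forces.append(k * (xd - xi) - d * vi)
    return forces

# Example: stiff along z (pressing) but compliant along x and y (sliding in contact).
f = impedance_force(
    x=[0.0, 0.0, 0.0], x_dot=[0.0, 0.0, 0.0],
    x_des=[0.01, 0.0, -0.02],
    stiffness=[50.0, 50.0, 800.0],
)
```

With zero velocity, the force is just stiffness times pose error, so the same 1-2 cm error produces a gentle lateral force but a firm pressing force, which is why modulating stiffness per axis matters for contact-rich tasks.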
Related papers
- Zero-Cost Whole-Body Teleoperation for Mobile Manipulation [8.71539730969424]
MoMa-Teleop is a novel teleoperation method that delegates the base motions to a reinforcement learning agent.
We demonstrate that our approach results in a significant reduction in task completion time across a variety of robots and tasks.
arXiv Detail & Related papers (2024-09-23T15:09:45Z)
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Learning Force Control for Legged Manipulation [18.894304288225385]
We propose a method for training RL policies for direct force control without requiring access to force sensing.
We showcase our method on a whole-body control platform of a quadruped robot with an arm.
We provide the first deployment of learned whole-body force control in legged manipulators, paving the way for more versatile and adaptable legged robots.
arXiv Detail & Related papers (2024-05-02T15:53:43Z)
- SWBT: Similarity Weighted Behavior Transformer with the Imperfect Demonstration for Robotic Manipulation [32.78083518963342]
We propose a novel framework named Similarity Weighted Behavior Transformer (SWBT)
SWBT effectively learns from both expert and imperfect demonstrations without interacting with the environment.
We are the first to attempt to integrate imperfect demonstrations into the offline imitation learning setting for robot manipulation tasks.
arXiv Detail & Related papers (2024-01-17T04:15:56Z)
- A Virtual Reality Teleoperation Interface for Industrial Robot Manipulators [10.331963200885774]
We address the problem of teleoperating an industrial robot manipulator via a commercially available Virtual Reality interface.
We find that applying standard practices for VR control of robot arms is challenging for industrial platforms.
We propose a simplified filtering approach to process command signals to enable operators to effectively teleoperate industrial robot arms with VR interfaces.
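The summary does not specify the paper's filtering scheme; as a generic illustration only, a first-order low-pass (exponential smoothing) filter is a common way to attenuate jitter in raw VR controller poses before forwarding them as robot commands. The class and parameter names below are assumptions:

```python
class LowPassFilter:
    """First-order low-pass (exponential smoothing) of a command signal.

    A generic stand-in for command filtering of noisy VR controller poses:
    alpha in (0, 1]; smaller alpha means heavier smoothing and more lag.
    """
    def __init__(self, alpha):
        self.alpha = alpha
        self.state = None  # last filtered sample, initialized on first call

    def __call__(self, sample):
        if self.state is None:
            self.state = list(sample)  # first sample passes through unchanged
        else:
            self.state = [
                self.alpha * s + (1.0 - self.alpha) * prev
                for s, prev in zip(sample, self.state)
            ]
        return self.state

f = LowPassFilter(alpha=0.2)
f([0.0, 0.0, 0.0])
smoothed = f([1.0, 0.0, 0.0])  # a sudden 1 m jump is attenuated to 0.2 m
```

The trade-off is responsiveness versus smoothness: the operator feels more lag as alpha decreases, which is one reason tuning such filters for industrial-scale arms is nontrivial.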
arXiv Detail & Related papers (2023-05-18T13:26:23Z)
- Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware [132.39281056124312]
Fine manipulation tasks, such as threading cable ties or slotting a battery, are notoriously difficult for robots.
We present a low-cost system that performs end-to-end imitation learning directly from real demonstrations.
We develop a simple yet novel algorithm, Action Chunking with Transformers, which learns a generative model over action sequences.
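Action Chunking with Transformers predicts a whole chunk of future actions at once and, at execution time, blends the overlapping predictions for the current step with exponential weights that favor older predictions. A minimal sketch of that temporal-ensembling step, assuming scalar actions and one chunk predicted per time step (the dict-based interface is hypothetical):

```python
import math

def temporal_ensemble(chunk_predictions, t, m=0.1):
    """Blend the action for step t from every chunk that covers it.

    chunk_predictions maps a chunk's start step -> its predicted action list.
    Following the ACT convention, older predictions get higher weight:
    w_i = exp(-m * i), with i = 0 for the oldest covering chunk.
    """
    covering = sorted(
        (start, chunk) for start, chunk in chunk_predictions.items()
        if 0 <= t - start < len(chunk)
    )
    weights = [math.exp(-m * i) for i in range(len(covering))]
    actions = [chunk[t - start] for start, chunk in covering]
    return sum(w * a for w, a in zip(weights, actions)) / sum(weights)

# Three overlapping chunks, each covering step t=2 with a different prediction.
chunks = {0: [1.0, 1.0, 1.0], 1: [2.0, 2.0, 2.0], 2: [3.0, 3.0, 3.0]}
blended = temporal_ensemble(chunks, t=2)
# Weighted toward the oldest prediction (1.0), so below the plain mean of 2.0.
```

Blending overlapping chunks this way smooths the executed trajectory compared with switching abruptly between chunks.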
arXiv Detail & Related papers (2023-04-23T19:10:53Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Training Robots without Robots: Deep Imitation Learning for Master-to-Robot Policy Transfer [4.318590074766604]
Deep imitation learning is promising for robot manipulation because it only requires demonstration samples.
Existing demonstration methods have deficiencies; bilateral teleoperation requires a complex control scheme and is expensive.
This research proposes a new master-to-robot (M2R) policy transfer system that does not require robots for teaching force feedback-based manipulation tasks.
arXiv Detail & Related papers (2022-02-19T10:55:10Z)
- COCOI: Contact-aware Online Context Inference for Generalizable Non-planar Pushing [87.7257446869134]
General contact-rich manipulation problems are long-standing challenges in robotics.
Deep reinforcement learning has shown great potential in solving robot manipulation tasks.
We propose COCOI, a deep RL method that encodes a context embedding of dynamics properties online.
arXiv Detail & Related papers (2020-11-23T08:20:21Z)
- Learning Agile Robotic Locomotion Skills by Imitating Animals [72.36395376558984]
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics.
We present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals.
arXiv Detail & Related papers (2020-04-02T02:56:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.