HACTS: a Human-As-Copilot Teleoperation System for Robot Learning
- URL: http://arxiv.org/abs/2503.24070v1
- Date: Mon, 31 Mar 2025 13:28:13 GMT
- Title: HACTS: a Human-As-Copilot Teleoperation System for Robot Learning
- Authors: Zhiyuan Xu, Yinuo Zhao, Kun Wu, Ning Liu, Junjie Ji, Zhengping Che, Chi Harold Liu, Jian Tang
- Abstract summary: We introduce HACTS (Human-As-Copilot Teleoperation System), a novel system that establishes bilateral, real-time joint synchronization between a robot arm and teleoperation hardware. This simple yet effective feedback mechanism, akin to a steering wheel in autonomous vehicles, enables the human copilot to intervene seamlessly while collecting action-correction data for future learning.
- Score: 47.9126187195398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Teleoperation is essential for autonomous robot learning, especially in manipulation tasks that require human demonstrations or corrections. However, most existing systems only offer unilateral robot control and lack the ability to synchronize the robot's status with the teleoperation hardware, preventing real-time, flexible intervention. In this work, we introduce HACTS (Human-As-Copilot Teleoperation System), a novel system that establishes bilateral, real-time joint synchronization between a robot arm and teleoperation hardware. This simple yet effective feedback mechanism, akin to a steering wheel in autonomous vehicles, enables the human copilot to intervene seamlessly while collecting action-correction data for future learning. Implemented using 3D-printed components and low-cost, off-the-shelf motors, HACTS is both accessible and scalable. Our experiments show that HACTS significantly enhances performance in imitation learning (IL) and reinforcement learning (RL) tasks, boosting IL recovery capabilities and data efficiency, and facilitating human-in-the-loop RL. HACTS paves the way for more effective and interactive human-robot collaboration and data collection, advancing the capabilities of robot manipulation.
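The key mechanism here is bilateral joint synchronization: the teleoperation hardware commands the robot, and the robot's measured joint state is simultaneously servoed back to the teleoperation motors, so the human can feel the robot move and override it at any time. The sketch below illustrates that loop only in outline; the `ArmInterface` class, its method names, and the 50 Hz rate are illustrative assumptions, not details taken from the paper.

```python
import time

class ArmInterface:
    """Hypothetical joint-level interface; stands in for the real
    leader (teleop hardware) and follower (robot arm) drivers."""

    def __init__(self, n_joints=6):
        self.q = [0.0] * n_joints  # joint positions (rad)

    def read_joints(self):
        return list(self.q)

    def command_joints(self, q_target):
        self.q = list(q_target)  # a real driver would servo toward q_target

def bilateral_sync(leader, follower, rate_hz=50.0, steps=1000):
    """Keep leader (teleop hardware) and follower (robot) joints aligned.

    The follower tracks the leader, and the follower's measured state is
    also pushed back to the leader's motors. When the robot is driven by
    a policy instead, the human feels it move and can grab the leader to
    intervene -- the 'steering wheel' behavior described in the abstract.
    """
    dt = 1.0 / rate_hz
    for _ in range(steps):
        q_leader = leader.read_joints()
        follower.command_joints(q_leader)   # human -> robot
        q_follower = follower.read_joints()
        leader.command_joints(q_follower)   # robot -> human (feedback)
        time.sleep(dt)

leader, follower = ArmInterface(), ArmInterface()
bilateral_sync(leader, follower, steps=10)
```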
Related papers
- Teaching Robots to Handle Nuclear Waste: A Teleoperation-Based Learning Approach [8.587182001055448]
The proposed framework addresses challenges in nuclear waste handling tasks, which often involve repetitive and meticulous manipulation operations.
By capturing operator movements and manipulation forces during teleoperation, the framework utilizes this data to train machine learning models capable of replicating and generalizing human skills.
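A hedged sketch of the kind of data capture this implies: log synchronized joint positions and wrench readings to a file that later training can consume. The `StubArm`/`StubForceSensor` interfaces and the JSONL layout are assumptions for illustration, not the framework's actual recording pipeline.

```python
import json, time, random

class StubArm:
    def read_joints(self):  # hypothetical driver call
        return [random.uniform(-1, 1) for _ in range(6)]

class StubForceSensor:
    def read_wrench(self):  # [fx, fy, fz, tx, ty, tz]
        return [random.uniform(-5, 5) for _ in range(6)]

def record_demonstration(arm, sensor, duration_s=2.0, rate_hz=100.0,
                         path="demo.jsonl"):
    """Log synchronized operator joint positions and manipulation
    wrenches to a JSONL file for later model training."""
    dt, t0 = 1.0 / rate_hz, time.time()
    with open(path, "w") as f:
        while time.time() - t0 < duration_s:
            f.write(json.dumps({
                "t": time.time() - t0,
                "q": arm.read_joints(),
                "wrench": sensor.read_wrench(),
            }) + "\n")
            time.sleep(dt)

record_demonstration(StubArm(), StubForceSensor())
```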
arXiv Detail & Related papers (2025-04-02T06:46:29Z)
- Force-Based Robotic Imitation Learning: A Two-Phase Approach for Construction Assembly Tasks [2.6092377907704254]
This paper proposes a two-phase system to improve robot learning.
The first phase captures real-time data from operators using a robot arm linked with a virtual simulator via ROS-Sharp.
In the second phase, this feedback is converted into robotic motion instructions, using a generative approach to incorporate force feedback into the learning process.
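The abstract does not name the generative model, so the following is only a sketch of the general idea of phase two: an action generator conditioned on the recorded force signal alongside the robot state. The network shape, dimensions, and the PyTorch choice are all assumptions.

```python
import torch
import torch.nn as nn

class ForceConditionedPolicy(nn.Module):
    """Maps (joint state, measured wrench) to a motion command; a
    stand-in for whatever generative model the paper actually trains."""

    def __init__(self, n_joints=6, wrench_dim=6, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + wrench_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_joints),  # next joint-position command
        )

    def forward(self, q, wrench):
        return self.net(torch.cat([q, wrench], dim=-1))

policy = ForceConditionedPolicy()
q = torch.zeros(1, 6)
wrench = torch.tensor([[0.0, 0.0, -3.0, 0.0, 0.0, 0.0]])  # pressing down
action = policy(q, wrench)  # would be trained on the phase-one recordings
```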
arXiv Detail & Related papers (2025-01-24T22:01:23Z)
- Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
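A common way to realize such shared end-effector control is to blend the operator's command with the assistive agent's prediction. Whether this paper uses a fixed convex combination or a learned arbitration rule is not stated in the summary; the linear blend below is purely illustrative.

```python
import numpy as np

def shared_control(human_cmd, agent_cmd, alpha=0.5):
    """Blend operator and assistive-agent end-effector commands.

    alpha = 1.0 gives the human full authority, alpha = 0.0 the agent;
    the actual arbitration rule in the paper may differ.
    """
    human_cmd, agent_cmd = np.asarray(human_cmd), np.asarray(agent_cmd)
    return alpha * human_cmd + (1.0 - alpha) * agent_cmd

# Example: a noisy human command nudged toward the agent's suggestion.
human = [0.10, 0.02, -0.05]  # operator's Cartesian velocity command (m/s)
agent = [0.08, 0.00, -0.06]  # assistive agent's prediction
print(shared_control(human, agent, alpha=0.7))
```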
arXiv Detail & Related papers (2024-06-29T03:37:29Z)
- Real-Time Dynamic Robot-Assisted Hand-Object Interaction via Motion Primitives [45.256762954338704]
We propose an approach to enhancing physical HRI with a focus on dynamic robot-assisted hand-object interaction.
We employ a transformer-based algorithm to perform real-time 3D modeling of human hands from single RGB images.
The robot's action implementation is dynamically fine-tuned using the continuously updated 3D hand models.
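A rough sketch of that loop: re-estimate the hand each frame and retarget the end-effector accordingly. The placeholder estimator, the palm-centroid retargeting rule, and the standoff offset are assumptions, not details from the paper.

```python
import numpy as np

def estimate_hand_keypoints(rgb_frame):
    """Placeholder for the transformer-based single-RGB hand estimator;
    returns 21 hand keypoints in 3D (shape: 21 x 3)."""
    return np.zeros((21, 3))

def handover_target(keypoints, offset=np.array([0.0, 0.0, 0.05])):
    """Derive an end-effector target from the hand model, here the palm
    centroid plus a small standoff; the exact rule is an assumption."""
    palm = keypoints[[0, 5, 9, 13, 17]].mean(axis=0)  # wrist + finger bases
    return palm + offset

def control_step(rgb_frame, move_fn):
    keypoints = estimate_hand_keypoints(rgb_frame)  # updated every frame
    move_fn(handover_target(keypoints))             # robot fine-tunes toward hand

control_step(np.zeros((480, 640, 3)), move_fn=lambda p: print("target:", p))
```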
arXiv Detail & Related papers (2024-05-29T21:20:16Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
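In outline, that means alternating a forward ("do") policy with a backward ("undo") policy so episodes chain without human resets, scoring transitions with a reward model fit to demonstrations. The sketch below uses hypothetical stand-in interfaces; MEDAL++'s actual components differ.

```python
import random

class StubEnv:
    def observe(self): return 0.0
    def step(self, a): return random.random()

def autonomous_practice(env, forward_policy, backward_policy, reward_model,
                        episodes=4, horizon=50):
    """Alternate 'do' and 'undo' policies so the robot practices without
    human resets; reward is inferred from demonstrations rather than
    hand-coded. All interfaces here are hypothetical stand-ins."""
    transitions = []
    for ep in range(episodes):
        policy = forward_policy if ep % 2 == 0 else backward_policy
        obs = env.observe()
        for _ in range(horizon):
            action = policy(obs)
            next_obs = env.step(action)
            transitions.append((obs, action, reward_model(next_obs), next_obs))
            obs = next_obs
        # an off-policy RL update over `transitions` would go here
    return transitions

data = autonomous_practice(StubEnv(),
                           forward_policy=lambda o: +1.0,
                           backward_policy=lambda o: -1.0,
                           reward_model=lambda o: float(o > 0.9))
```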
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Improving safety in physical human-robot collaboration via deep metric learning [36.28667896565093]
Direct physical interaction with robots is becoming increasingly important in flexible production scenarios.
In order to keep the risk potential low, relatively simple measures are prescribed for operation, such as stopping the robot if there is physical contact or if a safety distance is violated.
This work uses the Deep Metric Learning (DML) approach to distinguish between non-contact robot movement, intentional contact aimed at physical human-robot interaction, and collision situations.
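The DML idea can be sketched as follows: embed short windows of force/torque readings with a triplet loss so the three situations separate in embedding space, then classify new windows by proximity to class centroids. The window size, channels, and architecture below are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Classes: 0 = non-contact motion, 1 = intentional contact, 2 = collision.
class ContactEmbedder(nn.Module):
    """Embeds a window of force/torque samples; trained with a triplet
    loss so the three contact situations separate in embedding space."""

    def __init__(self, window=50, channels=6, dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(window * channels, 128), nn.ReLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

embedder = ContactEmbedder()
triplet = nn.TripletMarginLoss(margin=0.5)

anchor   = torch.randn(8, 50, 6)  # e.g. intentional-contact windows
positive = torch.randn(8, 50, 6)  # same class as the anchor
negative = torch.randn(8, 50, 6)  # e.g. collision windows
loss = triplet(embedder(anchor), embedder(positive), embedder(negative))
loss.backward()
# At run time, classify a new window by its nearest class centroid
# in the learned embedding space.
```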
arXiv Detail & Related papers (2023-02-23T11:26:51Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
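In outline, "programming-free" task definition means the user's image examples act as sub-goals, and a success measure over the current sub-goal supplies reward and stage advancement. The embedding, similarity threshold, and staging rule below are illustrative assumptions, not the system's actual classifier.

```python
import numpy as np

def embed(image):
    """Placeholder visual encoder; the real system would use a learned one."""
    flat = image.reshape(-1)
    return flat / (np.linalg.norm(flat) + 1e-8)

class SubstepReward:
    """'Programming-free' task spec: one goal image per sub-step.
    Reward is similarity to the current sub-goal; advance on success."""

    def __init__(self, goal_images, threshold=0.95):
        self.goals = [embed(g) for g in goal_images]
        self.threshold = threshold  # assumed success criterion
        self.stage = 0

    def __call__(self, observation):
        sim = float(embed(observation) @ self.goals[self.stage])
        if sim > self.threshold and self.stage < len(self.goals) - 1:
            self.stage += 1  # sub-goal reached: move to the next one
        return sim

reward = SubstepReward([np.random.rand(32, 32) for _ in range(3)])
print(reward(np.random.rand(32, 32)))
```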
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control [17.270360447188196]
We present a system for human-robot collaborative assembly using learning from demonstration and pose estimation.
The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario.
arXiv Detail & Related papers (2022-12-02T20:35:55Z)
- Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
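The summary does not say how the reachable workspace is computed; a standard approach is Monte-Carlo sampling of joint configurations through forward kinematics, as in this toy planar 2-link sketch (the kinematic model and joint limits are placeholders, not REMP's).

```python
import numpy as np

def fk_planar_2link(q, l1=0.4, l2=0.3):
    """Forward kinematics of a toy planar 2-link arm (link lengths in m)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def sample_reachable_workspace(n=10000,
                               limits=((-np.pi, np.pi), (-2.5, 2.5))):
    """Monte-Carlo estimate of the reachable set: sample joint configs
    within limits and push them through FK. A real system would use the
    robot's actual kinematic model and joint limits."""
    lo = np.array([l for l, _ in limits])
    hi = np.array([h for _, h in limits])
    qs = np.random.uniform(lo, hi, size=(n, len(limits)))
    return np.array([fk_planar_2link(q) for q in qs])

# These points could then be visualized to the user during calibration.
points = sample_reachable_workspace()
print("max reach (m):", np.linalg.norm(points, axis=1).max())
```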
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.