Teaching Robots to Handle Nuclear Waste: A Teleoperation-Based Learning Approach
- URL: http://arxiv.org/abs/2504.01405v1
- Date: Wed, 02 Apr 2025 06:46:29 GMT
- Title: Teaching Robots to Handle Nuclear Waste: A Teleoperation-Based Learning Approach
- Authors: Joong-Ku Lee, Hyeonseok Choi, Young Soo Park, Jee-Hwan Ryu
- Abstract summary: The proposed framework addresses challenges in nuclear waste handling tasks, which often involve repetitive and meticulous manipulation operations. By capturing operator movements and manipulation forces during teleoperation, the framework utilizes this data to train machine learning models capable of replicating and generalizing human skills.
- Score: 8.587182001055448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a Learning from Teleoperation (LfT) framework that integrates human expertise with robotic precision to enable robots to autonomously perform skills learned from human operators. The proposed framework addresses challenges in nuclear waste handling tasks, which often involve repetitive and meticulous manipulation operations. By capturing operator movements and manipulation forces during teleoperation, the framework utilizes this data to train machine learning models capable of replicating and generalizing human skills. We validate the effectiveness of the LfT framework through its application to a power plug insertion task, selected as a representative scenario that is repetitive yet requires precise trajectory and force control. Experimental results highlight significant improvements in task efficiency, while reducing reliance on continuous operator involvement.
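The core LfT idea, training a model on operator motions and forces recorded during teleoperation, can be illustrated with a minimal behavioral-cloning sketch. The paper does not specify its model architecture; the linear least-squares policy, the 3-D pose/force state, and the synthetic data below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for recorded teleoperation logs: end-effector pose
# (3-D) and contact force (3-D) at each timestep of one demonstration.
T = 200
poses = np.cumsum(rng.normal(0, 0.01, size=(T, 3)), axis=0)
forces = rng.normal(0, 0.1, size=(T, 3))

# State s_t = [pose_t, force_t]; action a_t = pose_{t+1} - pose_t.
states = np.hstack([poses[:-1], forces[:-1]])   # shape (T-1, 6)
actions = np.diff(poses, axis=0)                # shape (T-1, 3)

# Fit a linear policy by least squares -- a toy stand-in for the
# learned model described in the abstract.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(pose, force):
    """Predict the next pose increment from the current pose and force."""
    s = np.concatenate([pose, force])
    return s @ W

pred = policy(poses[0], forces[0])
print(pred.shape)  # (3,)
```

In practice each demonstration would contribute many (state, action) pairs, and replaying the learned policy on the robot removes the need for continuous operator involvement on the repetitive portion of the task.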
Related papers
- HACTS: a Human-As-Copilot Teleoperation System for Robot Learning [47.9126187195398]
We introduce HACTS (Human-As-Copilot Teleoperation System), a novel system that establishes bilateral, real-time joint synchronization between a robot arm and teleoperation hardware. This simple yet effective feedback mechanism, akin to a steering wheel in autonomous vehicles, enables the human copilot to intervene seamlessly while collecting action-correction data for future learning.
arXiv Detail & Related papers (2025-03-31T13:28:13Z) - Transferable Latent-to-Latent Locomotion Policy for Efficient and Versatile Motion Control of Diverse Legged Robots [9.837559106057814]
The pretrain-and-finetune paradigm offers a promising approach for efficiently adapting to new robot entities and tasks. We propose a latent training framework where a transferable latent-to-latent locomotion policy is pretrained alongside diverse task-specific observation encoders and action decoders. We validate our approach through extensive simulations and real-world experiments, demonstrating that the pretrained latent-to-latent locomotion policy effectively generalizes to new robot entities and tasks with improved efficiency.
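The encoder/policy/decoder split described above can be sketched as follows. This is a structural illustration only: the dimensions, random linear maps, and function names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared latent interface: every robot maps its observations into this
# latent observation space and reads actions out of the latent action space.
LATENT_OBS, LATENT_ACT = 8, 4

# Shared latent-to-latent policy, pretrained once and reused across robots.
W_policy = rng.normal(size=(LATENT_OBS, LATENT_ACT))

def make_robot_adapter(obs_dim, act_dim):
    """Per-robot observation encoder and action decoder (random linear maps)."""
    enc = rng.normal(size=(obs_dim, LATENT_OBS))
    dec = rng.normal(size=(LATENT_ACT, act_dim))
    return enc, dec

def act(obs, enc, dec):
    """obs -> latent obs -> latent action -> robot-specific action."""
    return np.tanh(obs @ enc) @ W_policy @ dec

# Two robots with different observation/action sizes share one latent policy;
# adapting to a new robot only requires training a new encoder/decoder pair.
enc_a, dec_a = make_robot_adapter(obs_dim=12, act_dim=6)
enc_b, dec_b = make_robot_adapter(obs_dim=20, act_dim=10)
print(act(np.ones(12), enc_a, dec_a).shape)  # (6,)
print(act(np.ones(20), enc_b, dec_b).shape)  # (10,)
```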
arXiv Detail & Related papers (2025-03-22T03:01:25Z) - Force-Based Robotic Imitation Learning: A Two-Phase Approach for Construction Assembly Tasks [2.6092377907704254]
This paper proposes a two-phase system to improve robot learning.
The first phase captures real-time data from operators using a robot arm linked with a virtual simulator via ROS-Sharp.
In the second phase, this feedback is converted into robotic motion instructions, using a generative approach to incorporate force feedback into the learning process.
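One simple way force feedback can shape motion instructions, offered here only as a hedged illustration since the paper's generative approach is not detailed in this summary, is a compliance term that shifts the commanded position in proportion to sensed contact force. The gain matrix and values below are made up for the example.

```python
import numpy as np

# Illustrative inverse-stiffness (compliance) gains in m/N; hypothetical values.
K_inv = np.diag([0.002, 0.002, 0.002])

def force_adjusted_target(nominal_target, measured_force):
    """Shift the commanded position in proportion to sensed contact force."""
    return nominal_target + K_inv @ measured_force

target = np.array([0.30, 0.00, 0.15])   # nominal Cartesian target (m)
force = np.array([0.0, 0.0, -5.0])      # 5 N of downward contact force
adjusted = force_adjusted_target(target, force)
print(adjusted)  # target lowered along z by 0.01 m
```

Logging these force-adjusted targets alongside the operator's nominal trajectory is one way such feedback could enter a learning pipeline.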
arXiv Detail & Related papers (2025-01-24T22:01:23Z) - Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insights are to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not offer a favorable robustness-accuracy trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z) - Constrained-Space Optimization and Reinforcement Learning for Complex Tasks [42.648636742651185]
Learning from Demonstration is increasingly used for transferring operator manipulation skills to robots.
This paper presents a constrained-space optimization and reinforcement learning scheme for managing complex tasks.
arXiv Detail & Related papers (2020-04-01T21:50:11Z) - Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.