Force-Based Robotic Imitation Learning: A Two-Phase Approach for Construction Assembly Tasks
- URL: http://arxiv.org/abs/2501.14942v1
- Date: Fri, 24 Jan 2025 22:01:23 GMT
- Title: Force-Based Robotic Imitation Learning: A Two-Phase Approach for Construction Assembly Tasks
- Authors: Hengxu You, Yang Ye, Tianyu Zhou, Jing Du
- Abstract summary: This paper proposes a two-phase system to improve robot learning. The first phase captures real-time data from operators using a robot arm linked with a virtual simulator via ROS-Sharp. In the second phase, this feedback is converted into robotic motion instructions, using a generative approach to incorporate force feedback into the learning process.
- Score: 2.6092377907704254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The drive for efficiency and safety in construction has boosted the role of robotics and automation. However, complex tasks like welding and pipe insertion pose challenges due to their need for precise adaptive force control, which complicates robotic training. This paper proposes a two-phase system to improve robot learning, integrating human-derived force feedback. The first phase captures real-time data from operators using a robot arm linked with a virtual simulator via ROS-Sharp. In the second phase, this feedback is converted into robotic motion instructions, using a generative approach to incorporate force feedback into the learning process. This method's effectiveness is demonstrated through improved task completion times and success rates. The framework simulates realistic force-based interactions, enhancing the training data's quality for precise robotic manipulation in construction tasks.
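The first-phase capture loop can be pictured with a short sketch: pair the operator's joint states with wrist force/torque readings so that each demonstration carries force labels. The snippet below is a minimal illustration on the ROS side only; the topic names, message types, and synchronization parameters are assumptions for illustration, not details taken from the paper, and the ROS-Sharp/Unity simulator link is omitted.

```python
# Hedged sketch of phase-1 data capture: record time-aligned (motion, force) pairs.
# Topic names and message types are assumed, not taken from the paper.
import rospy
import message_filters
from sensor_msgs.msg import JointState
from geometry_msgs.msg import WrenchStamped

demonstrations = []  # (timestamp, joint positions, force xyz, torque xyz)

def record(joints, wrench):
    """Store one time-aligned sample of operator motion and measured wrench."""
    f, t = wrench.wrench.force, wrench.wrench.torque
    demonstrations.append((
        joints.header.stamp.to_sec(),
        list(joints.position),
        (f.x, f.y, f.z),
        (t.x, t.y, t.z),
    ))

rospy.init_node("force_demo_recorder")
joint_sub = message_filters.Subscriber("/joint_states", JointState)
force_sub = message_filters.Subscriber("/wrist_ft", WrenchStamped)
# Soft time alignment, since the two sensors publish at different rates.
sync = message_filters.ApproximateTimeSynchronizer(
    [joint_sub, force_sub], queue_size=10, slop=0.02)
sync.registerCallback(record)
rospy.spin()
```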
Related papers
- Teaching Robots to Handle Nuclear Waste: A Teleoperation-Based Learning Approach [8.587182001055448]
The proposed framework addresses challenges in nuclear waste handling tasks, which often involve repetitive and meticulous manipulation operations.
By capturing operator movements and manipulation forces during teleoperation, the framework utilizes this data to train machine learning models capable of replicating and generalizing human skills.
arXiv Detail & Related papers (2025-04-02T06:46:29Z) - HACTS: a Human-As-Copilot Teleoperation System for Robot Learning [47.9126187195398]
We introduce HACTS (Human-As-Copilot Teleoperation System), a novel system that establishes bilateral, real-time joint synchronization between a robot arm and teleoperation hardware.
This simple yet effective feedback mechanism, akin to a steering wheel in autonomous vehicles, enables the human copilot to intervene seamlessly while collecting action-correction data for future learning.
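A minimal sketch of collecting such action-correction data during shared teleoperation might look as follows; the robot/leader/policy interfaces and the intervention threshold are hypothetical placeholders, not the HACTS API.

```python
import numpy as np

INTERVENTION_THRESHOLD = 0.05  # rad; assumed deadband for detecting a human override

def collect_step(robot, leader, policy, dataset):
    """One control step: mirror the robot state onto the leader device, let the
    policy propose an action, and log a correction when the human overrides it."""
    obs = robot.get_observation()
    policy_action = np.asarray(policy(obs))                 # autonomous proposal
    leader.set_joint_targets(robot.joint_positions())       # bilateral sync: robot -> leader
    human_action = np.asarray(leader.read_joint_targets())  # leader -> robot
    if np.max(np.abs(human_action - policy_action)) > INTERVENTION_THRESHOLD:
        executed = human_action                              # human copilot takes over
        dataset.append({"obs": obs, "corrected_action": executed})
    else:
        executed = policy_action
    robot.command_joints(executed)
```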
arXiv Detail & Related papers (2025-03-31T13:28:13Z) - Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition [48.65867987106428]
We introduce a novel system for joint learning between human operators and robots.
It enables human operators to share control of a robot end-effector with a learned assistive agent.
It reduces the need for human adaptation while ensuring the collected data is of sufficient quality for downstream tasks.
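As a rough illustration of shared end-effector control (not the authors' implementation), the executed command can be a weighted blend of the operator's input and the assistive agent's proposal:

```python
import numpy as np

def shared_control_step(human_delta, agent_delta, alpha=0.5):
    """Blend human and agent end-effector deltas; alpha=1.0 is full human control,
    alpha=0.0 is full autonomy. The blend weight is an illustrative assumption."""
    return alpha * np.asarray(human_delta) + (1.0 - alpha) * np.asarray(agent_delta)

# Example: the operator pulls left while the agent nudges toward the grasp pose.
command = shared_control_step([-0.01, 0.0, 0.0], [0.0, 0.005, -0.002], alpha=0.7)
```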
arXiv Detail & Related papers (2024-06-29T03:37:29Z) - RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
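Such a pipeline can be pictured as prompting a language model for code that calls only whitelisted robot primitives, then executing it in a restricted namespace (in simulation first, then on the real arm). The function below is a hypothetical sketch, not the RobotScript API; the llm callable and the primitive set are assumptions.

```python
def run_generated_manipulation(llm, instruction, primitives):
    """Ask an LLM for Python that uses only the whitelisted primitives (e.g.
    move_to, open_gripper, close_gripper), then run it in a sandboxed namespace."""
    prompt = (
        "Write Python code using only these functions: "
        + ", ".join(primitives)
        + f"\nTask: {instruction}\nCode:"
    )
    code = llm(prompt)             # caller supplies the language-model callable
    exec(code, dict(primitives))   # restricted namespace limits what the code can touch
```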
arXiv Detail & Related papers (2024-02-22T15:12:00Z) - Bi-Manual Block Assembly via Sim-to-Real Reinforcement Learning [24.223788665601678]
Two xArm6 robots solve the U-shape assembly task with a success rate of above 90% in simulation, and 50% on real hardware without any additional real-world fine-tuning.
Our results present a significant step forward for bi-arm capability on real hardware, and we hope our system can inspire future research on deep RL and Sim2Real transfer of bi-manual policies.
arXiv Detail & Related papers (2023-03-27T01:25:24Z) - Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z) - Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration [68.94506047556412]
We propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration.
We show that DCIL-II can solve challenging simulated tasks, such as humanoid locomotion and stand-up, with unprecedented sample efficiency.
arXiv Detail & Related papers (2022-11-09T10:28:40Z) - Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning [121.9708998627352]
Recent work has shown that, in practical robot learning applications, adversarial training does not offer a fair robustness-accuracy trade-off.
This work revisits the robustness-accuracy trade-off in robot learning by analyzing if recent advances in robust training methods and theory can make adversarial training suitable for real-world robot applications.
arXiv Detail & Related papers (2022-04-15T08:12:15Z) - Training Robots without Robots: Deep Imitation Learning for Master-to-Robot Policy Transfer [4.318590074766604]
Deep imitation learning is promising for robot manipulation because it only requires demonstration samples.
Existing demonstration methods have deficiencies; bilateral teleoperation requires a complex control scheme and is expensive.
This research proposes a new master-to-robot (M2R) policy transfer system that does not require robots for teaching force feedback-based manipulation tasks.
arXiv Detail & Related papers (2022-02-19T10:55:10Z) - In-air Knotting of Rope using Dual-Arm Robot based on Deep Learning [8.365690203298966]
We report the successful execution of in-air knotting of rope using a dual-arm two-finger robot based on deep learning.
A manual description of appropriate robot motions corresponding to all object states is difficult to prepare in advance.
We constructed a model that instructed the robot to perform bowknots and overhand knots based on two deep neural networks trained using the data gathered from its sensorimotor experiences.
arXiv Detail & Related papers (2021-03-17T02:11:58Z) - Reinforcement Learning Experiments and Benchmark for Solving Robotic
Reaching Tasks [0.0]
Reinforcement learning has been successfully applied to solving the reaching task with robotic arms.
It is shown that augmenting the reward signal with the Hindsight Experience Replay exploration technique increases the average return of off-policy agents.
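Hindsight Experience Replay relabels stored transitions with goals that were actually achieved, so sparse-reward reaching episodes still yield learning signal. Below is a minimal sketch of the "final" relabeling strategy, with an assumed transition format; it is illustrative rather than the paper's implementation.

```python
import numpy as np

def her_relabel(episode, reward_fn):
    """Return extra transitions whose goal is replaced by the episode's final achieved goal."""
    final_goal = episode[-1]["achieved_goal"]
    relabeled = []
    for t in episode:
        r = reward_fn(t["achieved_goal"], final_goal)  # recompute reward under the new goal
        relabeled.append({**t, "goal": final_goal, "reward": r})
    return relabeled

# Example sparse reward for a reaching task: success if the end-effector is within 5 cm.
def sparse_reward(achieved, goal, tol=0.05):
    return 0.0 if np.linalg.norm(np.asarray(achieved) - np.asarray(goal)) < tol else -1.0
```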
arXiv Detail & Related papers (2020-11-11T14:00:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.