Efficient reinforcement learning control for continuum robots based on
Inexplicit Prior Knowledge
- URL: http://arxiv.org/abs/2002.11573v2
- Date: Fri, 2 Oct 2020 17:02:25 GMT
- Title: Efficient reinforcement learning control for continuum robots based on
Inexplicit Prior Knowledge
- Authors: Junjia Liu, Jiaying Shou, Zhuang Fu, Hangfei Zhou, Rongli Xie, Jun
Zhang, Jian Fei and Yanna Zhao
- Abstract summary: We propose an efficient reinforcement learning method based on inexplicit prior knowledge.
By using our method, we can achieve active visual tracking and distance maintenance of a tendon-driven robot.
- Score: 3.3645162441357437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared to the rigid robots generally studied in reinforcement
learning, the physical characteristics of sophisticated robots such as soft or
continuum robots are considerably more complicated. Moreover, recent
reinforcement learning methods are data-inefficient and cannot be deployed
directly on a robot without simulation. In this paper, we propose an efficient
reinforcement learning method based on inexplicit prior knowledge in response
to these problems. We first validate the method in simulation and then employ
it directly in the real world. With our method, we achieve active visual
tracking and distance maintenance with a tendon-driven robot, which will be
critical in minimally invasive procedures. Code is available at
https://github.com/Skylark0924/TendonTrack.
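The abstract does not spell out how the inexplicit prior enters the learning loop; one plausible reading is a rough hand-designed controller whose output a learned policy corrects. The sketch below illustrates that residual-RL pattern on a toy 2-D tracking task. The environment, the prior's gain, and the evolution-strategies update are all illustrative assumptions, not the paper's actual algorithm.

```python
# Minimal sketch, assuming "inexplicit prior knowledge" means a rough
# prior controller that a learned residual policy corrects. The toy
# plant, gains, and update rule below are hypothetical, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

def prior_controller(err):
    """Rough proportional prior: push the tracked target toward the image
    center. The gain is deliberately imprecise, standing in for inexplicit
    (approximate, hand-specified) knowledge of the robot."""
    return -0.5 * err

def step(state, action):
    """Toy 2-D tracking plant: the action moves the camera view, the target
    drifts slightly. Reward penalizes distance from the image center."""
    drift = rng.normal(scale=0.01, size=2)
    next_state = state + action + drift
    reward = -np.linalg.norm(next_state)
    return next_state, reward

# Linear residual policy: action = prior(state) + W @ state.
W = np.zeros((2, 2))
alpha, sigma = 0.1, 0.05

for episode in range(200):
    # Antithetic evolution-strategies update: evaluate two perturbed
    # residual policies and move W toward the better one.
    noise = rng.normal(scale=sigma, size=W.shape)
    returns = []
    for cand in (W + noise, W - noise):
        s, ret = rng.normal(scale=0.3, size=2), 0.0
        for _ in range(25):
            a = prior_controller(s) + cand @ s
            s, r = step(s, a)
            ret += r
        returns.append(ret)
    W += alpha * (returns[0] - returns[1]) / (2 * sigma) * noise

print("learned residual gains:\n", W)
```

Because the prior already keeps the target roughly centered, the residual policy only has to learn a small correction, which is one way such a method can stay data-efficient enough to train directly on hardware.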
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths (a generic sketch of this imitation-plus-RL recipe appears after this list).
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z)
- Autonomous Robotic Reinforcement Learning with Asynchronous Human Feedback [27.223725464754853]
GEAR enables robots to be placed in real-world environments and left to train autonomously without interruption.
The system streams robot experience to a web interface, requiring only occasional asynchronous feedback from remote, crowdsourced, non-expert humans.
arXiv Detail & Related papers (2023-10-31T16:43:56Z)
- Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our key insight is to utilize offline reinforcement learning techniques to enable efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z)
- Learning Visual Tracking and Reaching with Deep Reinforcement Learning on a UR10e Robotic Arm [2.2168889407389445]
Reinforcement learning algorithms provide the potential to enable robots to learn optimal solutions to complete new tasks without reprogramming them.
Current state-of-the-art in reinforcement learning relies on fast simulations and parallelization to achieve optimal performance.
This report outlines our initial research into the application of deep reinforcement learning on an industrial UR10e robot.
arXiv Detail & Related papers (2023-08-28T15:34:43Z)
- Evaluating Continual Learning on a Home Robot [30.620205237707342]
We show how continual learning methods can be adapted for use on a real, low-cost home robot.
We propose SANER, a method for continually learning a library of skills, and ABIP as the backbone to support it.
arXiv Detail & Related papers (2023-06-04T17:14:49Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- PPMC RL Training Algorithm: Rough Terrain Intelligent Robots through Reinforcement Learning [4.314956204483074]
This paper introduces a generic training algorithm teaching generalized PPMC in rough environments to any robot.
We show through experiments that the robot learns to generalize to new rough terrain maps, retaining a 100% success rate.
To the best of our knowledge, this is the first work to present such a generic training algorithm.
arXiv Detail & Related papers (2020-03-02T10:14:52Z)
- Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
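Several of the papers above (SPIRE, RoboFuME, MEDAL++) share a common recipe: pretrain a policy from demonstrations or offline data, then fine-tune it with online reinforcement learning. The sketch below is a minimal, generic illustration of that recipe on a toy reach task; the linear policy, the scripted demonstrator, and the perturbation-based fine-tuning step are assumptions for illustration and do not reproduce any of the listed methods.

```python
# Generic sketch of the imitation-then-RL recipe. All names, the toy
# task, and the update rules are illustrative assumptions, not any
# listed paper's actual implementation.
import numpy as np

rng = np.random.default_rng(1)

def expert(state):
    """Scripted 'demonstrator' for the toy reach task."""
    return -0.8 * state

def rollout(policy, horizon=20):
    """Run one episode and return the cumulative reward; reward is the
    negative distance to the origin, so higher is better."""
    s, ret = rng.normal(size=2), 0.0
    for _ in range(horizon):
        s = s + policy(s)
        ret += -np.linalg.norm(s)
    return ret

# Phase 1: behavior cloning. Regress a linear policy onto expert actions,
# solving states @ W ~= actions in the least-squares sense.
states = rng.normal(size=(256, 2))
actions = expert(states)
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

# Phase 2: RL fine-tuning. Antithetic perturbations of the cloned policy
# refine it against the actual task reward.
sigma, alpha = 0.05, 0.02
for _ in range(300):
    eps = rng.normal(scale=sigma, size=W.shape)
    r_plus = rollout(lambda s: s @ (W + eps))
    r_minus = rollout(lambda s: s @ (W - eps))
    W += alpha * (r_plus - r_minus) / (2 * sigma) * eps

print("fine-tuned policy matrix:\n", W)
```

The cloned policy gives fine-tuning a strong starting point, which is why this pattern tends to need far less online robot experience than learning from scratch.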
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.