Task-free Lifelong Robot Learning with Retrieval-based Weighted Local Adaptation
- URL: http://arxiv.org/abs/2410.02995v3
- Date: Mon, 03 Feb 2025 12:08:50 GMT
- Title: Task-free Lifelong Robot Learning with Retrieval-based Weighted Local Adaptation
- Authors: Pengzhi Yang, Xinyu Wang, Ruipeng Zhang, Cong Wang, Frans A. Oliehoek, Jens Kober,
- Abstract summary: We store a subset of data from previous tasks and utilize it in two ways: leveraging experience replay to retain learned skills and applying a novel Retrieval-based Local Adaptation technique to restore relevant knowledge.
We also incorporate a selective weighting mechanism to focus on the most "forgotten" skill segment, ensuring effective knowledge restoration.
- Abstract: A fundamental objective in intelligent robotics is to move towards lifelong learning robots that can learn and adapt to unseen scenarios over time. However, continually learning new tasks introduces catastrophic forgetting due to data distribution shifts. To mitigate this, we store a subset of data from previous tasks and utilize it in two ways: leveraging experience replay to retain learned skills and applying a novel Retrieval-based Local Adaptation technique to restore relevant knowledge. Since a lifelong learning robot must operate in task-free scenarios, where task IDs and even task boundaries are not available, our method performs effectively without relying on such information. We also incorporate a selective weighting mechanism to focus on the most "forgotten" skill segment, ensuring effective knowledge restoration. Experimental results across diverse manipulation tasks demonstrate that our framework provides a scalable paradigm for lifelong learning, enhancing robot performance in open-ended, task-free scenarios.
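The mechanism described in the abstract lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation: stored experience nearest to the current observation is retrieved, each retrieved sample is weighted by how poorly the current policy reproduces its action (a simple proxy for "forgetting"; the paper's actual weighting may differ), and the policy takes a few adaptation steps on the weighted batch. All names (PolicyMLP, MemoryBuffer, weighted_local_adaptation) and hyperparameters (k, adapt_steps, lr) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of retrieval-based weighted local adaptation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyMLP(nn.Module):
    """Tiny behaviour-cloning policy: observation -> action."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class MemoryBuffer:
    """Stores a subset of past (observation, action) pairs for replay/retrieval."""
    def __init__(self):
        self.obs, self.act = [], []

    def add(self, obs: torch.Tensor, act: torch.Tensor):
        self.obs.append(obs)
        self.act.append(act)

    def retrieve(self, query: torch.Tensor, k: int = 32):
        """Return the k stored pairs whose observations are nearest to the query."""
        obs = torch.stack(self.obs)
        act = torch.stack(self.act)
        dists = torch.cdist(query.unsqueeze(0), obs).squeeze(0)
        idx = torch.topk(dists, k=min(k, len(self.obs)), largest=False).indices
        return obs[idx], act[idx]


def weighted_local_adaptation(policy, buffer, current_obs,
                              k: int = 32, adapt_steps: int = 5, lr: float = 1e-3):
    """Adapt the policy on retrieved memories, up-weighting the most 'forgotten' ones."""
    mem_obs, mem_act = buffer.retrieve(current_obs, k=k)

    # Per-sample replay error under the current policy, used as a forgetting proxy.
    with torch.no_grad():
        err = F.mse_loss(policy(mem_obs), mem_act, reduction="none").mean(dim=-1)
        weights = torch.softmax(err, dim=0)  # emphasise the worst-reproduced samples

    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(adapt_steps):
        per_sample = F.mse_loss(policy(mem_obs), mem_act, reduction="none").mean(dim=-1)
        loss = (weights * per_sample).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy


if __name__ == "__main__":
    torch.manual_seed(0)
    policy, buffer = PolicyMLP(obs_dim=8, act_dim=2), MemoryBuffer()
    for _ in range(200):  # stand-in for stored experience from earlier tasks
        buffer.add(torch.randn(8), torch.randn(2))
    weighted_local_adaptation(policy, buffer, current_obs=torch.randn(8))
```

Because the weighting is computed without gradients and then held fixed, the adaptation steps simply minimise a weighted behaviour-cloning loss over the retrieved segment; in a task-free setting this requires no task ID, only the current observation as the retrieval query.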
Related papers
- Robot Fine-Tuning Made Easy: Pre-Training Rewards and Policies for Autonomous Real-World Reinforcement Learning [58.3994826169858]
We introduce RoboFuME, a reset-free fine-tuning system for robotic reinforcement learning.
Our insights are to utilize offline reinforcement learning techniques to ensure efficient online fine-tuning of a pre-trained policy.
Our method can incorporate data from an existing robot dataset and improve on a target task within as little as 3 hours of autonomous real-world experience.
arXiv Detail & Related papers (2023-10-23T17:50:08Z) - Learning and Retrieval from Prior Data for Skill-based Imitation Learning [47.59794569496233]
We develop a skill-based imitation learning framework that extracts temporally extended sensorimotor skills from prior data.
We identify several key design choices that significantly improve performance on novel tasks.
arXiv Detail & Related papers (2022-10-20T17:34:59Z) - Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning [70.70104870417784]
Reinforcement learning (RL) algorithms hold the promise of enabling autonomous skill acquisition for robotic systems.
In practice, real-world robotic RL typically requires time-consuming data collection and frequent human intervention to reset the environment.
In this work, we study how these challenges can be tackled by effective utilization of diverse offline datasets collected from previously seen tasks.
arXiv Detail & Related papers (2022-07-11T08:31:22Z) - A Simple Approach to Continual Learning by Transferring Skill Parameters [25.705923249267055]
Given an appropriate curriculum, we show how to continually acquire robotic manipulation skills without forgetting, using far fewer samples than needed to train them from scratch.
arXiv Detail & Related papers (2021-10-19T20:44:20Z) - Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z) - Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
arXiv Detail & Related papers (2021-04-15T20:10:11Z) - Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.