LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning
- URL: http://arxiv.org/abs/2306.03310v2
- Date: Sat, 14 Oct 2023 15:52:31 GMT
- Title: LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning
- Authors: Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qiang Liu, Yuke Zhu,
Peter Stone
- Abstract summary: LIBERO is a novel benchmark of lifelong learning for robot manipulation.
We focus on how to efficiently transfer declarative knowledge, procedural knowledge, or the mixture of both.
We develop an extendible procedural generation pipeline that can in principle generate infinitely many tasks.
- Score: 64.55001982176226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lifelong learning offers a promising paradigm of building a generalist agent
that learns and adapts over its lifespan. Unlike traditional lifelong learning
problems in image and text domains, which primarily involve the transfer of
declarative knowledge of entities and concepts, lifelong learning in
decision-making (LLDM) also necessitates the transfer of procedural knowledge,
such as actions and behaviors. To advance research in LLDM, we introduce
LIBERO, a novel benchmark of lifelong learning for robot manipulation.
Specifically, LIBERO highlights five key research topics in LLDM: 1) how to
efficiently transfer declarative knowledge, procedural knowledge, or the
mixture of both; 2) how to design effective policy architectures and 3)
effective algorithms for LLDM; 4) the robustness of a lifelong learner with
respect to task ordering; and 5) the effect of model pretraining for LLDM. We
develop an extendible procedural generation pipeline that can in principle
generate infinitely many tasks. For benchmarking purposes, we create four task
suites (130 tasks in total) that we use to investigate the above-mentioned
research topics. To support sample-efficient learning, we provide high-quality
human-teleoperated demonstration data for all tasks. Our extensive experiments
present several insightful or even unexpected discoveries: sequential
finetuning outperforms existing lifelong learning methods in forward transfer,
no single visual encoder architecture excels at all types of knowledge
transfer, and naive supervised pretraining can hinder agents' performance in
the subsequent LLDM. Check the website at https://libero-project.github.io for
the code and the datasets.
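One of the abstract's findings is that naive sequential finetuning outperforms existing lifelong learning methods in forward transfer. The toy sketch below illustrates what that baseline loop looks like: tasks arrive one at a time, and a single policy is finetuned on each in sequence, so knowledge shared across tasks (here, an overlapping "reach" skill) carries forward. All names and the dict-based "policy" are hypothetical illustrations, not the LIBERO API; a real agent would be a neural policy trained on the human-teleoperated demonstrations.

```python
def finetune(policy, demos, lr=0.5):
    """Shift the policy's parameters toward the new task's demonstrations."""
    for key, target in demos.items():
        old = policy.get(key, 0.0)
        policy[key] = old + lr * (target - old)
    return policy

def evaluate(policy, demos):
    """Crude success score: 1 - mean absolute error against the demos."""
    errs = [abs(policy.get(k, 0.0) - v) for k, v in demos.items()]
    return 1.0 - sum(errs) / len(errs)

# A task sequence, seen one task at a time as in lifelong learning.
tasks = [
    {"reach": 1.0, "grasp": 0.8},   # task 1
    {"reach": 1.0, "place": 0.6},   # task 2 shares "reach" -> forward transfer
]

policy = {}
scores = []
for demos in tasks:
    policy = finetune(policy, demos)        # naive sequential finetuning
    scores.append(evaluate(policy, demos))  # performance on the current task
```

Because the "reach" parameter learned on task 1 is reused on task 2, the second task starts from a better initialization, which is the forward-transfer effect the benchmark measures.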
Related papers
- Online Continual Learning For Interactive Instruction Following Agents [20.100312650193228]
We argue that such a learning scenario is less realistic since a robotic agent is supposed to learn the world continuously as it explores and perceives it.
We propose two continual learning setups for embodied agents; learning new behaviors and new environments.
arXiv Detail & Related papers (2024-03-12T11:33:48Z)
- COOLer: Class-Incremental Learning for Appearance-Based Multiple Object Tracking [32.47215340215641]
This paper extends the scope of continual learning research to class-incremental learning for multiple object tracking (MOT).
Previous solutions for continual learning of object detectors do not address the data association stage of appearance-based trackers.
We introduce COOLer, a COntrastive- and cOntinual-Learning-based tracker, which incrementally learns to track new categories while preserving past knowledge.
arXiv Detail & Related papers (2023-10-04T17:49:48Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- Task-Attentive Transformer Architecture for Continual Learning of Vision-and-Language Tasks Using Knowledge Distillation [18.345183818638475]
Continual learning (CL) can serve as a remedy through enabling knowledge-transfer across sequentially arriving tasks.
We develop a transformer-based CL architecture for learning bimodal vision-and-language tasks.
Our approach scales to a large number of tasks because it requires little memory and time overhead.
arXiv Detail & Related papers (2023-03-25T10:16:53Z)
- Learning and Retrieval from Prior Data for Skill-based Imitation Learning [47.59794569496233]
We develop a skill-based imitation learning framework that extracts temporally extended sensorimotor skills from prior data.
We identify several key design choices that significantly improve performance on novel tasks.
arXiv Detail & Related papers (2022-10-20T17:34:59Z)
- Lifelong Reinforcement Learning with Temporal Logic Formulas and Reward Machines [30.161550541362487]
We propose Lifelong reinforcement learning with Sequential linear temporal logic formulas and Reward Machines (LSRM).
We first introduce Sequential Linear Temporal Logic (SLTL), which is a supplement to the existing Linear Temporal Logic formal language.
We then utilize Reward Machines (RM) to exploit structural reward functions for tasks encoded with high-level events.
arXiv Detail & Related papers (2021-11-18T02:02:08Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning [78.13740204156858]
We show that we can reuse prior data to extend new skills simply through dynamic programming.
We demonstrate the effectiveness of our approach by chaining together several behaviors seen in prior datasets for solving a new task.
We train our policies in an end-to-end fashion, mapping high-dimensional image observations to low-level robot control commands.
arXiv Detail & Related papers (2020-10-27T17:57:29Z)
- Bilevel Continual Learning [76.50127663309604]
We present a novel framework of continual learning named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.