LoHoRavens: A Long-Horizon Language-Conditioned Benchmark for Robotic
Tabletop Manipulation
- URL: http://arxiv.org/abs/2310.12020v2
- Date: Mon, 23 Oct 2023 12:29:33 GMT
- Title: LoHoRavens: A Long-Horizon Language-Conditioned Benchmark for Robotic
Tabletop Manipulation
- Authors: Shengqiang Zhang, Philipp Wicke, Lütfi Kerem Şenel, Luis
Figueredo, Abdeldjallil Naceri, Sami Haddadin, Barbara Plank, Hinrich
Schütze
- Abstract summary: This work focuses on the tabletop manipulation task and releases a simulation benchmark, LoHoRavens, which covers various long-horizon reasoning aspects spanning color, size, space, arithmetic and reference.
We investigate two methods of bridging the modality gap: caption generation and a learnable interface, which incorporate explicit and implicit observation feedback into the LLM, respectively.
- Score: 38.66406497318709
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The convergence of embodied agents and large language models (LLMs) has
brought significant advancements to embodied instruction following. In
particular, the strong reasoning capabilities of LLMs make it possible for
robots to perform long-horizon tasks without expensive annotated
demonstrations. However, public benchmarks for testing the long-horizon
reasoning capabilities of language-conditioned robots in various scenarios
are still missing. To fill this gap, this work focuses on the tabletop
manipulation task and releases a simulation benchmark, LoHoRavens, which
covers various long-horizon reasoning aspects spanning color, size, space,
arithmetic and reference. Furthermore, long-horizon manipulation with LLMs
raises a key modality-bridging problem that prior work has studied little:
how to incorporate observation feedback during robot execution into the
LLM's closed-loop planning. We investigate two methods of bridging the
modality gap: caption generation and a learnable interface, which
incorporate explicit and implicit observation feedback into the LLM,
respectively. These methods serve as the two baselines for our proposed
benchmark. Experiments show that both methods struggle to solve some
tasks, indicating that long-horizon manipulation remains challenging for
current popular models. We expect that the proposed public benchmark and
baselines will help the community develop better models for long-horizon
tabletop manipulation tasks.
Related papers
- Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z) - From LLMs to Actions: Latent Codes as Bridges in Hierarchical Robot Control [58.72492647570062]
We introduce our method -- Learnable Latent Codes as Bridges (LCB) -- as an alternate architecture to overcome these limitations.
We find that LCB outperforms baselines that leverage pure language as the interface layer on tasks that require reasoning and multi-step behaviors (a minimal sketch of the latent-code idea appears after this list).
arXiv Detail & Related papers (2024-05-08T04:14:06Z) - Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks [50.27313829438866]
Plan-Seq-Learn (PSL) is a modular approach that uses motion planning to bridge the gap between abstract language and learned low-level control.
PSL achieves success rates of over 85%, outperforming language-based, classical, and end-to-end approaches.
arXiv Detail & Related papers (2024-05-02T17:59:31Z) - Large Language Models for Orchestrating Bimanual Robots [19.60907949776435]
We present LAnguage-model-based Bimanual ORchestration (LABOR) to analyze task configurations and devise coordination control policies.
We evaluate our method through simulated experiments involving two classes of long-horizon tasks using the NICOL humanoid robot.
arXiv Detail & Related papers (2024-04-02T15:08:35Z) - Interactive Planning Using Large Language Models for Partially
Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z) - Generalizable Long-Horizon Manipulations with Large Language Models [91.740084601715]
This work introduces a framework harnessing the capabilities of Large Language Models (LLMs) to generate primitive task conditions for generalizable long-horizon manipulations.
We create a challenging robotic manipulation task suite based on PyBullet for long-horizon task evaluation.
arXiv Detail & Related papers (2023-10-03T17:59:46Z) - Ground Manipulator Primitive Tasks to Executable Actions using Large
Language Models [13.827349677538352]
We propose a novel approach for grounding manipulator primitive tasks to robot low-level actions using large language models (LLMs).
In this way, we enable LLMs to generate position/force set-points for hybrid control.
arXiv Detail & Related papers (2023-08-13T16:52:36Z) - LEMMA: Learning Language-Conditioned Multi-Robot Manipulation [21.75163634731677]
LanguagE-Conditioned Multi-robot MAnipulation (LEMMA) features 8 types of procedurally generated tasks with varying degrees of complexity.
For each task, we provide 800 expert demonstrations and human instructions for training and evaluation.
arXiv Detail & Related papers (2023-08-02T04:37:07Z)
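The latent-code interface described in the LCB entry above can be pictured with a short PyTorch sketch: instead of passing text between the planner and the controller, the LLM's hidden state at a dedicated action token is projected into a latent vector that conditions a low-level policy. Every module name, dimension, and structural choice below is an illustrative assumption, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LatentBridge(nn.Module):
    """Illustrative sketch (not the LCB paper's code): map the LLM's
    hidden state at a special action token to a latent code that
    conditions a small low-level control policy."""

    def __init__(self, llm_dim=4096, latent_dim=64, obs_dim=128, act_dim=7):
        super().__init__()
        self.to_latent = nn.Linear(llm_dim, latent_dim)  # learnable interface
        self.policy = nn.Sequential(                     # latent-conditioned policy head
            nn.Linear(latent_dim + obs_dim, 256),
            nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, act_token_hidden, obs_features):
        z = self.to_latent(act_token_hidden)             # latent code from the LLM
        return self.policy(torch.cat([z, obs_features], dim=-1))

# Dummy tensors stand in for the LLM hidden state and visual features.
bridge = LatentBridge()
action = bridge(torch.randn(1, 4096), torch.randn(1, 128))
```

Because the interface is a learned vector rather than generated text, the planner and controller can be tuned together end to end, which is the kind of implicit feedback path the LoHoRavens abstract contrasts with caption generation.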