Consolidating Trees of Robotic Plans Generated Using Large Language
Models to Improve Reliability
- URL: http://arxiv.org/abs/2401.07868v1
- Date: Mon, 15 Jan 2024 18:01:59 GMT
- Title: Consolidating Trees of Robotic Plans Generated Using Large Language
Models to Improve Reliability
- Authors: Md Sadman Sakib and Yu Sun
- Abstract summary: The inherent probabilistic nature of Large Language Models (LLMs) introduces an element of unpredictability.
This paper introduces an innovative approach that aims to generate correct and optimal robotic task plans for diverse real-world demands and scenarios.
- Score: 6.4111574364474215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The inherent probabilistic nature of Large Language Models (LLMs)
introduces an element of unpredictability, raising concerns about potential
discrepancies in their output. This paper introduces an innovative approach
that aims to generate correct and optimal robotic task plans for diverse
real-world demands and scenarios. LLMs have been used to generate task plans,
but they are unreliable and may contain wrong, questionable, or high-cost
steps. The proposed approach uses an LLM to generate a number of task plans as
trees and amalgamates them into a graph by removing questionable paths. An
optimal task tree can then be retrieved that circumvents questionable and
high-cost nodes, thereby improving planning accuracy and execution efficiency.
The approach is further improved by incorporating a large knowledge network.
Leveraging GPT-4, the high-level task plan is then converted into a low-level
Planning Domain Definition Language (PDDL) plan executable by a robot.
Evaluation results highlight the superior accuracy and efficiency of our
approach compared to previous methodologies in the field of task planning.
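The abstract's core idea (sample several LLM-generated plans, merge them into a graph, prune low-agreement steps, and retrieve a lowest-cost path) can be illustrated with a minimal sketch. This is not the paper's implementation; the `min_support` threshold and the support-based edge cost are illustrative assumptions, and plans are simplified to sequences of step names.

```python
from collections import defaultdict
import heapq

def amalgamate_plans(plans, min_support=2):
    """Merge several candidate task plans (each a list of step names) into
    one graph, keeping only step transitions that appear in at least
    `min_support` plans -- a simple proxy for dropping questionable paths."""
    support = defaultdict(int)
    for plan in plans:
        for a, b in zip(plan, plan[1:]):
            support[(a, b)] += 1
    graph = defaultdict(list)
    for (a, b), n in support.items():
        if n >= min_support:
            # Edge cost shrinks with agreement: widely shared steps are cheaper.
            graph[a].append((b, 1.0 / n))
    return graph

def cheapest_plan(graph, start, goal):
    """Dijkstra search over the amalgamated graph to retrieve an optimal
    (lowest-cost) task sequence from start to goal."""
    heap = [(0.0, start, [start])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + weight, nxt, path + [nxt]))
    return None, float("inf")
```

With three sampled plans where one contains an outlier step, the outlier transition falls below `min_support` and is pruned, so the retrieved plan follows the majority path.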
Related papers
- Exploring and Benchmarking the Planning Capabilities of Large Language Models [57.23454975238014]
First, we construct a benchmark suite encompassing both classical planning domains and natural language scenarios.
Second, we investigate the use of in-context learning (ICL) to enhance LLM planning, exploring the direct relationship between increased context length and improved planning performance.
Third, we demonstrate the positive impact of fine-tuning LLMs on optimal planning paths, as well as the effectiveness of incorporating model-driven search procedures.
arXiv Detail & Related papers (2024-06-18T22:57:06Z) - DELTA: Decomposed Efficient Long-Term Robot Task Planning using Large Language Models [5.385540718118656]
Recent advancements in Large Language Models (LLMs) have sparked a revolution across various research fields.
The integration of common-sense knowledge from LLMs into robot task and motion planning has been proven to be a game-changer.
However, managing the vast knowledge encapsulated within these large models has posed challenges.
We propose a novel LLM-driven task planning approach called DELTA to overcome these challenges.
arXiv Detail & Related papers (2024-04-04T07:59:24Z) - Safe Task Planning for Language-Instructed Multi-Robot Systems using Conformal Prediction [11.614036749291216]
We introduce a new decentralized multi-robot planner called S-ATLAS for Safe plAnning for Teams of Language-instructed AgentS.
We show that the proposed planner can achieve user-specified task success rates while minimizing the overall number of help requests.
arXiv Detail & Related papers (2024-02-23T15:02:44Z) - Learning adaptive planning representations with natural language
guidance [90.24449752926866]
This paper describes Ada, a framework for automatically constructing task-specific planning representations.
Ada interactively learns a library of planner-compatible high-level action abstractions and low-level controllers adapted to a particular domain of planning tasks.
arXiv Detail & Related papers (2023-12-13T23:35:31Z) - Planning as In-Painting: A Diffusion-Based Embodied Task Planning
Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performances in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z) - Tree-Planner: Efficient Close-loop Task Planning with Large Language Models [63.06270302774049]
Tree-Planner reframes task planning with Large Language Models into three distinct phases.
Tree-Planner achieves state-of-the-art performance while maintaining high efficiency.
arXiv Detail & Related papers (2023-10-12T17:59:50Z) - Embodied Task Planning with Large Language Models [86.63533340293361]
We propose a TAsk Planning Agent (TaPA) for grounded planning in embodied tasks with physical scene constraints.
During inference, we discover the objects in the scene by extending open-vocabulary object detectors to multi-view RGB images collected in different achievable locations.
Experimental results show that the generated plan from our TaPA framework can achieve higher success rate than LLaVA and GPT-3.5 by a sizable margin.
arXiv Detail & Related papers (2023-07-04T17:58:25Z) - AdaPlanner: Adaptive Planning from Feedback with Language Models [56.367020818139665]
Large language models (LLMs) have recently demonstrated the potential in acting as autonomous agents for sequential decision-making tasks.
We propose a closed-loop approach, AdaPlanner, which allows the LLM agent to refine its self-generated plan adaptively in response to environmental feedback.
To mitigate hallucination, we develop a code-style LLM prompt structure that facilitates plan generation across a variety of tasks, environments, and agent capabilities.
arXiv Detail & Related papers (2023-05-26T05:52:27Z) - Understanding the Capabilities of Large Language Models for Automated
Planning [24.37599752610625]
The study seeks to shed light on the capabilities of LLMs in solving complex planning problems.
It provides insights into the most effective approaches for using LLMs in this context.
arXiv Detail & Related papers (2023-05-25T15:21:09Z) - Learning to Reason over Scene Graphs: A Case Study of Finetuning GPT-2
into a Robot Language Model for Grounded Task Planning [45.51792981370957]
We investigate the applicability of a smaller class of large language models (LLMs) in robotic task planning by learning to decompose tasks into subgoal specifications for a planner to execute sequentially.
Our method grounds the input of the LLM on the domain that is represented as a scene graph, enabling it to translate human requests into executable robot plans.
Our findings suggest that the knowledge stored in an LLM can be effectively grounded to perform long-horizon task planning, demonstrating the promising potential for the future application of neuro-symbolic planning methods in robotics.
arXiv Detail & Related papers (2023-05-12T18:14:32Z) - A Framework for Neurosymbolic Robot Action Planning using Large Language Models [3.0501524254444767]
We present a framework aimed at bridging the gap between symbolic task planning and machine learning approaches.
The rationale is to train Large Language Models (LLMs) into a neurosymbolic task planner compatible with the Planning Domain Definition Language (PDDL).
Preliminary results in selected domains show that our method can: (i) solve 95.5% of problems in a test data set of 1,000 samples; (ii) produce plans up to 13.5% shorter than a traditional symbolic planner; (iii) reduce average overall waiting times for a plan availability by up to 61.4%.
arXiv Detail & Related papers (2023-03-01T11:54:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.