Simultaneous Task Allocation and Planning for Multi-Robots under
Hierarchical Temporal Logic Specifications
- URL: http://arxiv.org/abs/2401.04003v2
- Date: Fri, 12 Jan 2024 15:52:29 GMT
- Title: Simultaneous Task Allocation and Planning for Multi-Robots under
Hierarchical Temporal Logic Specifications
- Authors: Xusheng Luo and Changliu Liu
- Abstract summary: We introduce a hierarchical structure to specifications with requirements on syntax and semantics, and prove that they are more expressive than their flat counterparts.
We employ a search-based approach to synthesize plans for a multi-robot system, accomplishing simultaneous task allocation and planning.
- Score: 10.007538582534302
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Past research into robotic planning with temporal logic specifications,
notably Linear Temporal Logic (LTL), was largely based on singular formulas for
individual robots or groups of robots. But as task complexity increases, LTL
formulas unavoidably grow lengthy, complicating interpretation and
specification generation, and straining the computational capacities of the
planners. First, by leveraging the intrinsic structure of tasks, we introduce a
hierarchical structure to LTL specifications with requirements on syntax and
semantics, and prove that it is more expressive than its flat counterpart.
Second, we employ a search-based approach to synthesize plans for
a multi-robot system, accomplishing simultaneous task allocation and planning.
The search space is approximated by loosely interconnected sub-spaces, with
each sub-space corresponding to one LTL specification. The search is
predominantly confined to a single sub-space, transitioning to another
sub-space under certain conditions determined by the decomposition of
automata. Moreover, multiple heuristics are formulated to significantly
expedite the search. A theoretical analysis of completeness and optimality is
conducted under mild assumptions. Compared with existing methods on
service tasks, our method achieves shorter execution times with
comparable solution quality. Finally, scalability is evaluated by testing a
group of 30 robots and achieving reasonable runtimes.
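The abstract's core search idea, confined exploration within one sub-space with occasional transitions to another, can be illustrated with a minimal sketch. This is a hypothetical, simplified reconstruction, not the authors' implementation: each sub-space stands in for one LTL specification's state space, and `switches` stands in for the transition points determined by automaton decomposition. All names and data structures below are illustrative assumptions.

```python
from heapq import heappush, heappop


def search(sub_spaces, switches, start, goal):
    """Uniform-cost search over (spec_id, state) nodes.

    sub_spaces: {spec_id: {state: [(next_state, cost), ...]}} -- one
                loosely interconnected sub-space per specification.
    switches:   {(spec_id, state): (next_spec_id, next_state)} -- the
                designated points where the search may cross into
                another sub-space (zero-cost here for simplicity).
    start, goal: (spec_id, state) pairs.
    Returns the minimal total cost, or None if the goal is unreachable.
    """
    frontier = [(0, start)]
    best = {start: 0}
    while frontier:
        cost, node = heappop(frontier)
        if node == goal:
            return cost
        spec, state = node
        # Stay predominantly within the current sub-space...
        neighbors = [((spec, nxt), c)
                     for nxt, c in sub_spaces[spec].get(state, [])]
        # ...and transition to another sub-space only at a switch point.
        if node in switches:
            neighbors.append((switches[node], 0))
        for nxt, c in neighbors:
            if cost + c < best.get(nxt, float("inf")):
                best[nxt] = cost + c
                heappush(frontier, (cost + c, nxt))
    return None


# Toy example: two sub-spaces, linked by one switch point.
sub_spaces = {0: {"a": [("b", 1)]},
              1: {"x": [("y", 2)]}}
switches = {(0, "b"): (1, "x")}
print(search(sub_spaces, switches, (0, "a"), (1, "y")))  # 3
```

The same skeleton admits the heuristics mentioned in the abstract by replacing the uniform-cost priority with `cost + h(node)`.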
Related papers
- AgentTTS: Large Language Model Agent for Test-time Compute-optimal Scaling Strategy in Complex Tasks [33.858780386822836]
Test-time scaling (TTS) enhances the performance of large language models (LLMs) by allocating additional compute resources during inference. We study a novel problem: test-time compute-optimal scaling in multi-stage complex tasks. We propose AgentTTS, an LLM-agent-based framework that autonomously searches for compute-optimal allocations.
arXiv Detail & Related papers (2025-07-26T19:21:18Z) - EIFBENCH: Extremely Complex Instruction Following Benchmark for Large Language Models [65.48902212293903]
We present the Extremely Complex Instruction Following Benchmark (EIFBENCH) for evaluating large language models (LLMs). EIFBENCH includes multi-task scenarios that enable comprehensive assessment across diverse task types concurrently. We also propose the Segment Policy Optimization (SegPO) algorithm to enhance the LLM's ability to accurately fulfill multi-task workflows.
arXiv Detail & Related papers (2025-06-10T02:39:55Z) - Route-and-Reason: Scaling Large Language Model Reasoning with Reinforced Model Router [9.580226379350737]
Multi-step reasoning has proven essential for enhancing the problem-solving capabilities of Large Language Models. Yet, many reasoning steps are relatively simple and can be handled by more efficient smaller-scale language models. We propose R2-Reasoner, a novel framework that enables collaborative reasoning across heterogeneous LLMs.
arXiv Detail & Related papers (2025-06-06T09:18:56Z) - Interactive and Expressive Code-Augmented Planning with Large Language Models [62.799579304821826]
Large Language Models (LLMs) demonstrate strong abilities in common-sense reasoning and interactive decision-making.
Recent techniques have sought to structure LLM outputs using control flow and other code-adjacent techniques to improve planning performance.
We propose REPL-Plan, an LLM planning approach that is fully code-expressive and dynamic.
arXiv Detail & Related papers (2024-11-21T04:23:17Z) - DeepLTL: Learning to Efficiently Satisfy Complex LTL Specifications for Multi-Task RL [59.01527054553122]
Linear temporal logic (LTL) has recently been adopted as a powerful formalism for specifying complex, temporally extended tasks.
Existing approaches suffer from several shortcomings.
We propose a novel learning approach to address these concerns.
arXiv Detail & Related papers (2024-10-06T21:30:38Z) - ET-Plan-Bench: Embodied Task-level Planning Benchmark Towards Spatial-Temporal Cognition with Foundation Models [39.606908488885125]
ET-Plan-Bench is a benchmark for embodied task planning using Large Language Models (LLMs)
It features a controllable and diverse set of embodied tasks varying in different levels of difficulties and complexities.
Our benchmark distinguishes itself as a large-scale, quantifiable, highly automated, and fine-grained diagnostic framework.
arXiv Detail & Related papers (2024-10-02T19:56:38Z) - Long-horizon Embodied Planning with Implicit Logical Inference and Hallucination Mitigation [7.668848364013772]
We present ReLEP, a novel framework for Real-time Long-horizon Embodied Planning.
ReLEP can complete a wide range of long-horizon tasks without in-context examples by learning implicit logical inference through fine-tuning.
arXiv Detail & Related papers (2024-09-24T01:47:23Z) - Unlocking Reasoning Potential in Large Language Models by Scaling Code-form Planning [94.76546523689113]
We introduce CodePlan, a framework that generates and follows code-form plans -- pseudocode that outlines high-level, structured reasoning processes.
CodePlan effectively captures the rich semantics and control flows inherent to sophisticated reasoning tasks.
It achieves a 25.1% relative improvement compared with directly generating responses.
arXiv Detail & Related papers (2024-09-19T04:13:58Z) - Scaling Up Natural Language Understanding for Multi-Robots Through the Lens of Hierarchy [8.180994118420053]
Long-horizon planning is hindered by challenges such as uncertainty accumulation, computational complexity, delayed rewards and incomplete information.
This work proposes an approach to exploit the task hierarchy from human instructions to facilitate multi-robot planning.
arXiv Detail & Related papers (2024-08-15T14:46:13Z) - Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More? [54.667202878390526]
Long-context language models (LCLMs) have the potential to revolutionize our approach to tasks traditionally reliant on external tools like retrieval systems or databases.
We introduce LOFT, a benchmark of real-world tasks requiring context up to millions of tokens designed to evaluate LCLMs' performance on in-context retrieval and reasoning.
Our findings reveal LCLMs' surprising ability to rival state-of-the-art retrieval and RAG systems, despite never having been explicitly trained for these tasks.
arXiv Detail & Related papers (2024-06-19T00:28:58Z) - ADaPT: As-Needed Decomposition and Planning with Language Models [131.063805299796]
We introduce As-Needed Decomposition and Planning for complex Tasks (ADaPT)
ADaPT explicitly plans and decomposes complex sub-tasks as-needed, when the Large Language Model is unable to execute them.
Our results demonstrate that ADaPT substantially outperforms established strong baselines.
arXiv Detail & Related papers (2023-11-08T17:59:15Z) - Decomposition-based Hierarchical Task Allocation and Planning for Multi-Robots under Hierarchical Temporal Logic Specifications [9.150196865878234]
We formulate a decomposition-based hierarchical framework for robotic planning with temporal logic specifications.
A Mixed Linear Program is used to assign sub-tasks to various robots.
Our approach was experimentally applied to domains of navigation and manipulation.
arXiv Detail & Related papers (2023-08-20T23:53:13Z) - Robot Task Planning Based on Large Language Model Representing Knowledge
with Directed Graph Structures [2.3698227130544547]
We propose a task planning method that combines human expertise with an LLM and have designed an LLM prompt template, Think_Net_Prompt.
We further propose a method to progressively decompose tasks and generate a task tree to reduce the planning volume for each task.
arXiv Detail & Related papers (2023-06-08T13:10:00Z) - Decomposed Prompting: A Modular Approach for Solving Complex Tasks [55.42850359286304]
We propose Decomposed Prompting to solve complex tasks by decomposing them (via prompting) into simpler sub-tasks.
This modular structure allows each prompt to be optimized for its specific sub-task.
We show that the flexibility and modularity of Decomposed Prompting allows it to outperform prior work on few-shot prompting.
arXiv Detail & Related papers (2022-10-05T17:28:20Z) - Dynamic Multi-Robot Task Allocation under Uncertainty and Temporal
Constraints [52.58352707495122]
We present a multi-robot allocation algorithm that decouples the key computational challenges of sequential decision-making under uncertainty and multi-agent coordination.
We validate our results over a wide range of simulations on two distinct domains: multi-arm conveyor belt pick-and-place and multi-drone delivery dispatch in a city.
arXiv Detail & Related papers (2020-05-27T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.