A $1000\times$ Faster LLM-enhanced Algorithm For Path Planning in Large-scale Grid Maps
- URL: http://arxiv.org/abs/2510.02716v1
- Date: Fri, 03 Oct 2025 04:33:45 GMT
- Title: A $1000\times$ Faster LLM-enhanced Algorithm For Path Planning in Large-scale Grid Maps
- Authors: Junlin Zeng, Xin Zhang, Xiang Zhao, Yan Pan
- Abstract summary: Existing methods, such as A*, Dijkstra, and their variants, work well for small-scale maps but fail to address large-scale ones due to high search time and memory consumption. LLMs have shown remarkable performance in path planning but still suffer from spatial illusion and poor planning performance. iLLM-A* achieves more than $1000\times$ speedup on average, and up to $2349.5\times$ speedup in the extreme case.
- Score: 10.23905808645806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Path planning in grid maps, arising in various applications, has garnered significant attention. Existing methods, such as A*, Dijkstra, and their variants, work well for small-scale maps but fail to address large-scale ones due to high search time and memory consumption. Recently, Large Language Models (LLMs) have shown remarkable performance in path planning but still suffer from spatial illusion and poor planning performance. Among these works, LLM-A* (Meng et al., 2024) leverages an LLM to generate a series of waypoints and then uses A* to plan the paths between neighboring waypoints, thereby constructing the complete path. However, LLM-A* still suffers from high computational time on large-scale maps. To fill this gap, we conducted a deep investigation into LLM-A* and identified the bottlenecks that limit its performance. Accordingly, we design an innovative LLM-enhanced algorithm, abbreviated as iLLM-A*. iLLM-A* comprises three carefully designed mechanisms: an optimization of A*, an incremental learning method for the LLM to generate high-quality waypoints, and the selection of appropriate waypoints for A* path planning. Finally, a comprehensive evaluation on various grid maps shows that, compared with LLM-A*, iLLM-A* 1) achieves more than $1000\times$ speedup on average, and up to $2349.5\times$ speedup in the extreme case, 2) saves up to $58.6\%$ of the memory cost, and 3) achieves both clearly shorter path lengths and lower path-length standard deviation.
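The decomposition at the heart of both LLM-A* and iLLM-A* is compact enough to sketch. Below is a minimal, illustrative Python sketch (not the authors' implementation) of planning on a 4-connected grid by running A* between consecutive LLM-suggested waypoints; `waypoints` stands in for whatever the LLM returns.

```python
import heapq

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid; grid[r][c] == 0 means free."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier = [(h(start), 0, start)]
    g, parent = {start: 0}, {start: None}
    while frontier:
        _, cost, cur = heapq.heappop(frontier)
        if cost > g[cur]:
            continue  # stale heap entry
        if cur == goal:  # reconstruct the path by walking parents back
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if cost + 1 < g.get(nxt, float("inf")):
                    g[nxt], parent[nxt] = cost + 1, cur
                    heapq.heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt))
    return None

def plan_via_waypoints(grid, start, goal, waypoints):
    """Stitch short A* segments between consecutive LLM waypoints.

    Each segment explores a far smaller region than one global search,
    which is where the reported time and memory savings come from.
    """
    path, stops = [], [start, *waypoints, goal]
    for a, b in zip(stops, stops[1:]):
        seg = astar(grid, a, b)
        if seg is None:
            return None  # a single bad waypoint breaks the whole chain
        path.extend(seg if not path else seg[1:])  # drop the duplicated joint
    return path
```

Note that one bad waypoint breaks the chain here, which is exactly the failure mode the abstract's third mechanism (waypoint selection) targets.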
Related papers
- LLMAP: LLM-Assisted Multi-Objective Route Planning with User Preferences [31.10423199218523]
The rise of large language models (LLMs) has made natural language-driven route planning an emerging research area that encompasses rich user objectives. In this paper, we introduce a novel LLM-assisted approach that comprehends natural language, identifies tasks, and extracts user preferences. We conduct extensive experiments using 1,000 routing prompts sampled with varying complexity across 14 countries and 27 cities worldwide.
arXiv Detail & Related papers (2025-09-14T02:30:19Z)
- Prime the search: Using large language models for guiding geometric task and motion planning by warm-starting tree search [21.42328403783795]
A problem of relocating a set of objects to designated areas amidst movable obstacles can be framed as a Geometric Task and Motion Planning (G-TAMP) problem. Traditional approaches to G-TAMP have relied on domain-independent heuristics or on learning from planning experience to guide the search. We propose leveraging Large Language Models (LLMs), which have common-sense knowledge acquired from internet-scale data, to guide task planning in G-TAMP problems.
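As a rough illustration of warm-starting (our sketch, not the paper's method): seed a best-first search's frontier with the prefixes of an LLM-proposed action sequence, so the suggested branch is explored before uniform expansion takes over. `goal_test` is a hypothetical domain-specific check.

```python
import heapq
from itertools import count

def warm_started_search(actions, goal_test, llm_plan, max_nodes=100_000):
    """Best-first search over action sequences whose frontier is seeded with
    every prefix of an LLM-suggested plan at priority 0, so that branch is
    expanded first. Illustrative only."""
    tie = count()  # tiebreaker so the heap never compares plan lists
    frontier = [(0.0, next(tie), list(llm_plan[:k])) for k in range(len(llm_plan) + 1)]
    heapq.heapify(frontier)
    expanded = 0
    while frontier and expanded < max_nodes:
        cost, _, plan = heapq.heappop(frontier)
        if goal_test(plan):
            return plan
        expanded += 1
        for a in actions:  # ordinary expansion as fallback
            heapq.heappush(frontier, (cost + 1.0, next(tie), plan + [a]))
    return None
```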
arXiv Detail & Related papers (2025-06-08T09:47:54Z)
- GridRoute: A Benchmark for LLM-Based Route Planning with Cardinal Movement in Grid Environments [14.584687937592536]
Large Language Models (LLMs) have demonstrated their potential in planning and reasoning tasks. We propose GridRoute, a comprehensive evaluation benchmark to assess how LLMs can take advantage of traditional algorithms. We also propose a novel hybrid prompting technique called Algorithm of Thought (AoT), which introduces traditional algorithms' guidance into prompting.
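The AoT idea, as we read it, is to put a traditional algorithm's reasoning trace into the prompt. A hypothetical template (the benchmark's actual wording will differ):

```python
def aot_prompt(grid_text, start, goal, astar_trace):
    """Embed the first few A* expansion steps in the prompt so the model
    continues algorithmic reasoning rather than guessing moves.
    An illustrative reconstruction, not GridRoute's exact template."""
    trace = "\n".join(
        f"Step {i}: expand {cell}, frontier = {frontier}"
        for i, (cell, frontier) in enumerate(astar_trace, start=1)
    )
    return (
        "You navigate a grid using cardinal moves (N/S/E/W).\n"
        f"Map:\n{grid_text}\nStart: {start}  Goal: {goal}\n"
        "Here is how A* would begin exploring this map:\n"
        f"{trace}\n"
        "Continue this reasoning and output the complete move sequence."
    )
```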
arXiv Detail & Related papers (2025-05-30T07:40:59Z)
- LLM-Advisor: An LLM Benchmark for Cost-efficient Path Planning across Multiple Terrains [27.751399400911932]
Multi-terrain cost-efficient path planning is a crucial task in robot navigation. We develop a prompt-based approach, LLM-Advisor, which leverages large language models (LLMs) as effective advisors for path planning. When suggestions are made, 70.59% of the paths suggested for the A* algorithm, 69.47% for the RRT* algorithm, and 78.70% for the LLM-A* algorithm achieve greater cost efficiency.
arXiv Detail & Related papers (2025-03-03T07:02:10Z)
- Towards Efficient Automatic Self-Pruning of Large Language Models [55.90119819642064]
Post-training structured pruning is a promising solution that prunes Large Language Models without the need for retraining. We argue that the key to mitigating this issue lies in accurately determining the pruning rate for each layer. We introduce Self-Pruner, an end-to-end automatic self-pruning framework for LLMs, which efficiently searches layer-wise pruning rates.
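The per-layer rate is the key knob. A minimal PyTorch sketch of applying searched rates via magnitude pruning (the rate search itself, which is the paper's contribution, is abstracted away here; all names are ours):

```python
import torch

@torch.no_grad()
def magnitude_prune(weight: torch.Tensor, rate: float) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction `rate` of a weight matrix."""
    k = int(weight.numel() * rate)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

@torch.no_grad()
def apply_layerwise_rates(model: torch.nn.Module, rates: dict):
    """Apply one (searched) pruning rate per linear layer; `rates` maps a
    module name to its rate, e.g. {"layers.0.mlp.up_proj": 0.4, ...}."""
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Linear) and name in rates:
            module.weight.copy_(magnitude_prune(module.weight, rates[name]))
```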
arXiv Detail & Related papers (2025-02-20T09:59:50Z)
- Universal Model Routing for Efficient LLM Inference [69.86195589350264]
Model routing is a technique for reducing the inference cost of large language models (LLMs). We propose UniRoute, a new approach to the problem of dynamic routing, where new, previously unobserved LLMs are available at test time. We show that the resulting routing rules are estimates of a theoretically optimal routing rule, and quantify their errors via an excess risk bound.
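One plausible instantiation of the dynamic-routing setup (our sketch, not UniRoute's estimator): describe each LLM by its measured error rate per query category, so a model added at test time only needs profiling on held-out data, not retraining.

```python
import numpy as np

def route(prompt_features, model_error_profiles, model_costs, quality_floor=0.8):
    """Route a prompt to the cheapest model predicted to be good enough.

    prompt_features: soft assignment of the prompt to query categories
        (shape [C], sums to 1), e.g. from a lightweight classifier.
    model_error_profiles: per-model error rate in each category (shape [M, C]);
        profiling a newly added model fills in one row, with no retraining.
    Illustrative of the dynamic-routing idea, not UniRoute's exact rule.
    """
    predicted_accuracy = 1.0 - model_error_profiles @ prompt_features  # [M]
    ok = predicted_accuracy >= quality_floor
    if not ok.any():
        return int(np.argmax(predicted_accuracy))  # nothing clears the bar
    costs = np.where(ok, model_costs, np.inf)      # cheapest acceptable model
    return int(np.argmin(costs))
```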
arXiv Detail & Related papers (2025-02-12T20:30:28Z)
- Affordances-Oriented Planning using Foundation Models for Continuous Vision-Language Navigation [64.84996994779443]
We propose a novel Affordances-Oriented Planner for the continuous vision-language navigation (VLN) task.
Our AO-Planner integrates various foundation models to achieve affordances-oriented low-level motion planning and high-level decision-making.
Experiments on the challenging R2R-CE and RxR-CE datasets show that AO-Planner achieves state-of-the-art zero-shot performance.
arXiv Detail & Related papers (2024-07-08T12:52:46Z)
- LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning [91.95362946266577]
Path planning is a fundamental scientific problem in robotics and autonomous navigation. Traditional algorithms like A* and its variants can ensure path validity but suffer from significant computational and memory inefficiencies as the state space grows. We propose a new LLM-based route planning method that synergistically combines the precise pathfinding capabilities of A* with the global reasoning capability of LLMs. This hybrid approach aims to enhance pathfinding efficiency in time and space complexity while maintaining path validity, especially in large-scale scenarios.
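One way to realize that synergy (an illustrative sketch, not the paper's exact cost function) is to bias A*'s heuristic toward the LLM's suggested corridor; this drops into any standard A* loop, such as the one sketched earlier on this page.

```python
def waypoint_guided_heuristic(node, goal, waypoints, weight=0.5):
    """Blend plain goal distance with a pull toward the cheapest detour
    through an LLM waypoint. `weight` trades search speed against path
    optimality; weight=0 recovers vanilla A*. Illustrative only."""
    manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    h_goal = manhattan(node, goal)
    if not waypoints:
        return h_goal
    h_detour = min(manhattan(node, w) + manhattan(w, goal) for w in waypoints)
    return h_goal + weight * (h_detour - h_goal)
```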
arXiv Detail & Related papers (2024-06-20T01:24:30Z)
- Scaling Sparse Fine-Tuning to Large Language Models [67.59697720719672]
Large Language Models (LLMs) are difficult to fully fine-tune due to their sheer number of parameters.
We propose SpIEL, a novel sparse fine-tuning method which maintains an array of parameter indices and the deltas of these parameters relative to their pretrained values.
We show that SpIEL is superior to popular parameter-efficient fine-tuning methods like LoRA in terms of performance and comparable in terms of run time.
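The data structure behind that description is small. A minimal PyTorch sketch of the indices-plus-deltas idea (SpIEL additionally grows and drops indices during training, which is omitted here; names are ours):

```python
import torch

class SparseDelta(torch.nn.Module):
    """Sparse fine-tuning state for one frozen weight matrix: a fixed set of
    flat indices plus trainable deltas at exactly those positions."""

    def __init__(self, weight: torch.Tensor, num_slots: int):
        super().__init__()
        self.register_buffer("base", weight.detach().clone())   # frozen pretrained values
        self.register_buffer("idx", torch.randperm(weight.numel())[:num_slots])
        self.delta = torch.nn.Parameter(torch.zeros(num_slots))  # the only trainable state

    def materialize(self) -> torch.Tensor:
        # Out-of-place scatter-add keeps autograd flowing into `delta`.
        return self.base.flatten().index_add(0, self.idx, self.delta).view_as(self.base)
```

Only `num_slots` floats (plus their indices) are stored and updated per matrix, which is what lets this approach scale where full fine-tuning cannot.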
arXiv Detail & Related papers (2024-01-29T18:43:49Z)
- LLM A*: Human in the Loop Large Language Models Enabled A* Search for Robotics [2.9053316583692146]
This research focuses on how Large Language Models (LLMs) can help with (path) planning for mobile embodied agents such as robots. A novel framework named LLM A* is proposed that leverages the commonsense of LLMs together with the utility-optimal A* to facilitate few-shot near-optimal path planning. This approach takes human feedback on board and renders the entire planning process transparent (akin to a 'white box') to humans.
arXiv Detail & Related papers (2023-12-04T10:37:58Z)
- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs [67.38165028487242]
We introduce Dynamic Sparse No Training (DSnoT), a training-free fine-tuning approach for sparse large language models (LLMs). Inspired by Dynamic Sparse Training, DSnoT minimizes the reconstruction error between the dense and sparse LLMs. Our paper offers fresh insights into how to fine-tune sparse LLMs in an efficient, training-free manner and opens new avenues for scaling the great potential of sparsity to LLMs.
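A much-simplified sketch of the kind of grow-and-prune swap such a training-free scheme performs (our simplification; DSnoT's actual criteria differ): per output row, revive one pruned weight and drop one kept weight whenever that shrinks an activation-weighted reconstruction error, with no gradient step anywhere.

```python
import torch

@torch.no_grad()
def swap_step(weight, mask, acts):
    """One grow/prune swap per output row.

    weight: [out, in]; mask: same shape, 1.0 = kept; acts: [n, in] calibration
    activations. Each weight's error contribution is approximated by
    weight^2 scaled by the mean squared activation of its input feature.
    """
    feat_energy = acts.pow(2).mean(dim=0)                            # [in]
    scores = weight.pow(2) * feat_energy                             # [out, in]
    kept = mask.bool()
    grow = scores.masked_fill(kept, float("-inf")).argmax(dim=1)     # best pruned weight
    drop = scores.masked_fill(~kept, float("inf")).argmin(dim=1)     # worst kept weight
    rows = torch.arange(weight.shape[0])
    helps = scores[rows, grow] > scores[rows, drop]                  # swap only if error shrinks
    mask[rows[helps], grow[helps]] = 1.0
    mask[rows[helps], drop[helps]] = 0.0
    return mask
```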
arXiv Detail & Related papers (2023-10-13T07:38:52Z)
- LLM+P: Empowering Large Language Models with Optimal Planning Proficiency [46.20085545432116]
Large language models (LLMs) have demonstrated remarkable zero-shot generalization abilities.
In contrast, classical planners, once a problem is given in a formatted way, can use efficient search algorithms to quickly identify correct, or even optimal, plans.
This paper introduces LLM+P, the first framework that incorporates the strengths of classical planners into LLMs.
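The division of labor is the whole trick: the LLM only translates, and a classical planner does the search. A hedged sketch of that pipeline (`llm` is any prompt-to-text callable; the planner invocation is hypothetical, since real planner CLIs such as Fast Downward need their own flags):

```python
import subprocess

def llm_plus_p(problem_nl: str, domain_pddl: str, llm, planner_cmd: str):
    """LLM+P in miniature: natural language -> PDDL problem -> classical
    planner -> natural-language answer. Illustrative, not the paper's code."""
    problem_pddl = llm(
        "Translate this planning problem into a PDDL problem file for the "
        f"given domain.\nDomain:\n{domain_pddl}\nProblem:\n{problem_nl}"
    )
    with open("domain.pddl", "w") as f:
        f.write(domain_pddl)
    with open("problem.pddl", "w") as f:
        f.write(problem_pddl)
    # Hypothetical invocation; substitute your planner of choice and flags.
    result = subprocess.run(
        [planner_cmd, "domain.pddl", "problem.pddl"],
        capture_output=True, text=True,
    )
    # Hand the symbolic plan back to the LLM for a readable restatement.
    return llm(f"Restate this plan in plain language:\n{result.stdout}")
```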
arXiv Detail & Related papers (2023-04-22T20:34:03Z)