Code-Driven Planning in Grid Worlds with Large Language Models
- URL: http://arxiv.org/abs/2505.10749v1
- Date: Thu, 15 May 2025 23:23:31 GMT
- Title: Code-Driven Planning in Grid Worlds with Large Language Models
- Authors: Ashwath Vaithinathan Aravindan, Zhisheng Tang, Mayank Kejriwal
- Abstract summary: We propose an iterative programmatic planning framework for solving grid-based tasks by synthesizing interpretable agent policies expressed in code. Instead of relying on traditional search or reinforcement learning, our approach uses code generation as policy synthesis.
- Score: 2.6080756513915824
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an iterative programmatic planning (IPP) framework for solving grid-based tasks by synthesizing interpretable agent policies expressed in code using large language models (LLMs). Instead of relying on traditional search or reinforcement learning, our approach uses code generation as policy synthesis, where the LLM outputs executable programs that map environment states to action sequences. Our proposed architecture incorporates several prompting strategies, including direct code generation, pseudocode-conditioned refinement, and curriculum-based prompting, but also includes an iterative refinement mechanism that updates code based on task performance feedback. We evaluate our approach using six leading LLMs and two challenging grid-based benchmarks (GRASP and MiniGrid). Our IPP framework demonstrates improvements over direct code generation ranging from 10% to as much as 10x across five of the six models and establishes a new state-of-the-art result for GRASP. IPP is found to significantly outperform direct elicitation of a solution from GPT-o3-mini (by 63% on MiniGrid to 116% on GRASP), demonstrating the viability of the overall approach. Computational costs of all code generation approaches are similar. While code generation has a higher initial prompting cost compared to direct solution elicitation ($0.08 per task vs. $0.002 per instance for GPT-o3-mini), the code can be reused for any number of instances, making the amortized cost significantly lower (by 400x on GPT-o3-mini across the complete GRASP benchmark).
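The abstract does not include code, but the core loop is straightforward to picture. Below is a minimal sketch of the IPP idea: generate policy code with an LLM, execute it, score it on tasks, and feed the score back into the next prompt. All names here (`llm_generate_policy`, `run_task`, the `policy` entry point) are hypothetical placeholders, not the paper's actual interfaces.

```python
# Minimal sketch of the IPP loop: generate policy code with an LLM, run it,
# and refine the prompt with performance feedback. All names here are
# hypothetical placeholders, not the paper's actual interfaces.
from typing import Callable, List

State = dict                      # a grid-world observation
Action = str                      # e.g. "up", "down", "left", "right"
Policy = Callable[[State], List[Action]]

def llm_generate_policy(prompt: str) -> str:
    """Placeholder: ask an LLM to write executable policy code."""
    raise NotImplementedError

def run_task(policy: Policy, task) -> float:
    """Placeholder: roll the policy out in the environment, return a score."""
    raise NotImplementedError

def compile_policy(code: str) -> Policy:
    """Execute generated code and extract its `policy` entry point."""
    namespace: dict = {}
    exec(code, namespace)          # assumes a sandboxed/trusted setting
    return namespace["policy"]

def ipp(task_prompt: str, tasks: list, n_rounds: int = 5) -> Policy:
    prompt = task_prompt
    best_policy, best_score = None, float("-inf")
    for _ in range(n_rounds):
        code = llm_generate_policy(prompt)
        policy = compile_policy(code)
        score = sum(run_task(policy, t) for t in tasks) / len(tasks)
        if score > best_score:
            best_policy, best_score = policy, score
        # Iterative refinement: feed the result back into the next prompt.
        prompt = (f"{task_prompt}\nPrevious code:\n{code}\n"
                  f"Mean score: {score:.2f}. Improve the policy.")
    return best_policy
```

Because the synthesized program can be reused across instances, the per-instance cost amortizes, which is the economic argument the abstract makes.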
Related papers
- Enhancing LLM-Based Code Generation with Complexity Metrics: A Feedback-Driven Approach [6.289275189295223]
We investigate the relationship between code complexity and the success of code generated by Large Language Models. We propose an iterative feedback method, where LLMs are prompted to generate correct code based on complexity metrics from previous failed outputs. Experimental results show that our approach yields notable improvements, particularly with a smaller LLM.
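As a rough illustration of such a loop (not the paper's code), the sketch below measures cyclomatic complexity with the radon library and feeds it back into the prompt; `ask_llm` and `passes_tests` are hypothetical placeholders.

```python
# Hedged sketch of a complexity-guided feedback loop; ask_llm and
# passes_tests are hypothetical placeholders, and the paper's actual
# metrics and prompts may differ.
from radon.complexity import cc_visit

def ask_llm(prompt: str) -> str:
    raise NotImplementedError      # placeholder LLM call

def passes_tests(code: str) -> bool:
    raise NotImplementedError      # placeholder test harness

def max_cyclomatic_complexity(code: str) -> int:
    """Highest cyclomatic complexity across functions in the source."""
    return max((block.complexity for block in cc_visit(code)), default=1)

def feedback_loop(problem: str, max_rounds: int = 3) -> str:
    prompt = problem
    code = ""
    for _ in range(max_rounds):
        code = ask_llm(prompt)
        if passes_tests(code):
            return code
        # Report the measured complexity of the failed attempt and ask
        # for a simpler, correct rewrite.
        cc = max_cyclomatic_complexity(code)
        prompt = (f"{problem}\nYour previous attempt failed its tests and had "
                  f"cyclomatic complexity {cc}. Write simpler, correct code.")
    return code
```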
arXiv Detail & Related papers (2025-05-29T19:06:14Z)
- Collab: Controlled Decoding using Mixture of Agents for LLM Alignment [90.6117569025754]
Reinforcement learning from human feedback has emerged as an effective technique to align Large Language Models. Controlled Decoding provides a mechanism for aligning a model at inference time without retraining. We propose a mixture of agent-based decoding strategies leveraging existing off-the-shelf aligned LLM policies.
arXiv Detail & Related papers (2025-03-27T17:34:25Z)
- Modularization is Better: Effective Code Generation with Modular Prompting [9.955541341324007]
We propose a novel prompting technique, called MoT, to enhance the code generation performance of Large Language Models. MoT exploits modularization principles to decompose complex programming problems into smaller, independent reasoning steps. It structures the reasoning process using an MLR Graph, which hierarchically organizes reasoning steps.
arXiv Detail & Related papers (2025-03-16T12:23:23Z)
- Unveiling the Potential of Multimodal Retrieval Augmented Generation with Planning [5.205803766626321]
Multimodal Retrieval Augmented Generation (MRAG) systems often rely on rigid, single-step retrieval methods. We present CogPlanner, a versatile framework inspired by human cognitive processes. CogPlanner iteratively refines queries and selects retrieval strategies, enabling both parallel and sequential modeling approaches.
arXiv Detail & Related papers (2025-01-26T10:16:42Z)
- Interactive and Expressive Code-Augmented Planning with Large Language Models [62.799579304821826]
Large Language Models (LLMs) demonstrate strong abilities in common-sense reasoning and interactive decision-making.
Recent techniques have sought to structure LLM outputs using control flow and other code-adjacent techniques to improve planning performance.
We propose REPL-Plan, an LLM planning approach that is fully code-expressive and dynamic.
arXiv Detail & Related papers (2024-11-21T04:23:17Z)
- PerfCodeGen: Improving Performance of LLM Generated Code with Execution Feedback [78.89596149768458]
Large Language Models (LLMs) are widely adopted for assisting in software development tasks. We propose PerfCodeGen, a training-free framework that enhances the performance of LLM-generated code.
arXiv Detail & Related papers (2024-11-18T06:22:38Z)
- Chain-of-Programming (CoP): Empowering Large Language Models for Geospatial Code Generation [2.6026969939746705]
This paper proposes a Chain of Programming framework to decompose the code generation process into five steps.
The framework incorporates a shared information pool, knowledge base retrieval, and user feedback mechanisms.
It significantly improves the logical clarity, syntactical correctness, and executability of the generated code.
arXiv Detail & Related papers (2024-11-16T09:20:35Z)
- Automated Prompt Engineering for Cost-Effective Code Generation Using Evolutionary Algorithm [8.009881267479189]
Large Language Models have seen increasing use in various software development tasks, especially in code generation. We propose an alternative approach named Evolutionary Prompt Engineering for Code (EPiC). EPiC uses a lightweight evolutionary algorithm to refine the original prompts into improved versions that generate high-quality code. Our evaluation against state-of-the-art (SOTA) LLM-based code generation agents shows that EPiC not only achieves up to a 6% improvement in pass@k but is also 2-10 times more cost-effective than the baselines.
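The mechanism can be pictured as a standard evolutionary loop over prompts. The sketch below is illustrative only, assuming a `fitness` callable (e.g. pass@k of the code a prompt elicits) and a toy mutation operator; EPiC's actual operators differ.

```python
# Illustrative-only evolutionary prompt-refinement loop in the spirit of
# EPiC. `fitness` is assumed to score a prompt (e.g. pass@k of the code it
# elicits); the mutation operator here is a toy stand-in.
import random

HINTS = ["Think step by step.", "Handle edge cases.", "Use helper functions."]

def mutate(prompt: str) -> str:
    """Toy mutation: append a generic instruction to the prompt."""
    return prompt + " " + random.choice(HINTS)

def evolve(seed_prompt: str, fitness, pop_size: int = 6, generations: int = 4) -> str:
    population = [seed_prompt] + [mutate(seed_prompt) for _ in range(pop_size - 1)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]          # keep the fittest prompts
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```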
arXiv Detail & Related papers (2024-08-20T21:15:36Z)
- GRASP: A Grid-Based Benchmark for Evaluating Commonsense Spatial Reasoning [2.9312156642007294]
We construct a large-scale benchmark called GRASP, which consists of 16,000 grid-based environments where the agent is tasked with an energy-collection problem. We compare classic baseline approaches, such as random walk and greedy search methods, with advanced LLMs like GPT-3.5-Turbo, GPT-4o, and GPT-o1-mini. The experimental results indicate that even advanced LLMs struggle to consistently achieve satisfactory solutions.
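For intuition, a toy environment in the spirit of GRASP's energy-collection task might look like the sketch below; the grid size, step budget, and dynamics here are assumptions, not the benchmark's actual specification.

```python
# Toy grid world loosely modeled on GRASP's energy-collection setup.
# Sizes, budgets, and dynamics are illustrative assumptions only.
import random

class EnergyGrid:
    MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

    def __init__(self, size: int = 5, n_energy: int = 5, budget: int = 20):
        self.size, self.budget = size, budget
        self.agent = (0, 0)
        free = [(r, c) for r in range(size) for c in range(size)
                if (r, c) != self.agent]
        self.energy = set(random.sample(free, n_energy))
        self.collected = 0

    def step(self, action: str) -> bool:
        """Apply one move; return False once the step budget is exhausted."""
        if self.budget <= 0:
            return False
        dr, dc = self.MOVES[action]
        r = min(max(self.agent[0] + dr, 0), self.size - 1)
        c = min(max(self.agent[1] + dc, 0), self.size - 1)
        self.agent, self.budget = (r, c), self.budget - 1
        if self.agent in self.energy:             # collect energy on arrival
            self.energy.remove(self.agent)
            self.collected += 1
        return True
```

A policy synthesized by IPP would be rolled out against environments of this shape, one action per `step` call.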
arXiv Detail & Related papers (2024-07-02T02:27:46Z)
- SOEN-101: Code Generation by Emulating Software Process Models Using Large Language Model Agents [50.82665351100067]
FlowGen is a code generation framework that emulates software process models based on multiple Large Language Model (LLM) agents.
We evaluate FlowGenScrum on four benchmarks: HumanEval, HumanEval-ET, MBPP, and MBPP-ET.
arXiv Detail & Related papers (2024-03-23T14:04:48Z)
- CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules [51.82044734879657]
We propose CodeChain, a novel framework for inference that elicits modularized code generation through a chain of self-revisions.
We find that CodeChain can significantly boost both the modularity and the correctness of the generated solutions, achieving relative pass@1 improvements of 35% on APPS and 76% on CodeContests.
arXiv Detail & Related papers (2023-10-13T10:17:48Z)
- Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes [54.13559879916708]
EVAPORATE is a prototype system powered by large language models (LLMs). Code synthesis is cheap, but far less accurate than directly processing each document with the LLM. We propose an extended code implementation, EVAPORATE-CODE+, which achieves better quality than direct extraction.
arXiv Detail & Related papers (2023-04-19T06:00:26Z)
- Guiding Large Language Models via Directional Stimulus Prompting [114.84930073977672]
We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.
Instead of directly adjusting LLMs, our method employs a small tunable policy model to generate an auxiliary directional stimulus prompt for each input instance.
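At inference time the mechanism reduces to a small wrapper, sketched below with hypothetical names; the policy model's training (e.g. via RL) is omitted.

```python
# Hedged sketch of Directional Stimulus Prompting at inference time:
# a small tunable model emits a per-instance hint ("stimulus") that is
# folded into the frozen black-box LLM's prompt. Names are placeholders.
from typing import Callable

def directional_stimulus(instance: str,
                         policy_model: Callable[[str], str],
                         black_box_llm: Callable[[str], str]) -> str:
    hint = policy_model(instance)                 # small, tunable model
    prompt = f"Hint: {hint}\nInput: {instance}\nOutput:"
    return black_box_llm(prompt)                  # frozen black-box LLM
```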
arXiv Detail & Related papers (2023-02-22T17:44:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.