Programming over Thinking: Efficient and Robust Multi-Constraint Planning
- URL: http://arxiv.org/abs/2601.09097v1
- Date: Wed, 14 Jan 2026 02:58:07 GMT
- Title: Programming over Thinking: Efficient and Robust Multi-Constraint Planning
- Authors: Derrick Goh Xin Deik, Quanyu Long, Zhengyuan Liu, Nancy F. Chen, Wenya Wang
- Abstract summary: SCOPE is a framework that disentangles query-specific reasoning from generic code execution. SCOPE achieves state-of-the-art performance while lowering cost and latency.
- Score: 54.77940831026738
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-constraint planning involves identifying, evaluating, and refining candidate plans while satisfying multiple, potentially conflicting constraints. Existing large language model (LLM) approaches face fundamental limitations in this domain. Pure reasoning paradigms, which rely on long natural language chains, are prone to inconsistency, error accumulation, and prohibitive cost as constraints compound. Conversely, LLMs combined with coding- or solver-based strategies lack flexibility: they often generate problem-specific code from scratch or depend on fixed solvers, failing to capture generalizable logic across diverse problems. To address these challenges, we introduce the Scalable COde Planning Engine (SCOPE), a framework that disentangles query-specific reasoning from generic code execution. By separating reasoning from execution, SCOPE produces solver functions that are consistent, deterministic, and reusable across queries while requiring only minimal changes to input parameters. SCOPE achieves state-of-the-art performance while lowering cost and latency. For example, with GPT-4o, it reaches 93.1% success on TravelPlanner, a 61.6% gain over the best baseline (CoT) while cutting inference cost by 1.4x and time by ~4.67x. Code is available at https://github.com/DerrickGXD/SCOPE.
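The reasoning/execution split described in the abstract can be sketched as follows. This is a hypothetical illustration, not SCOPE's actual code: the solver function is generic and deterministic, and only the query-specific parameters (candidate plans and constraint predicates, with made-up trip data) change between queries.

```python
# Minimal sketch of a reusable solver function: generic code execution,
# parameterized by query-specific inputs. All names and data are illustrative.

def solve(candidates, constraints):
    """Return the candidates that satisfy every constraint predicate."""
    return [c for c in candidates if all(check(c) for check in constraints)]

# Query-specific inputs (illustrative values, not from the paper).
trips = [
    {"city": "Paris", "cost": 900, "days": 3},
    {"city": "Oslo", "cost": 1400, "days": 5},
    {"city": "Lisbon", "cost": 700, "days": 4},
]

# Query 1: plans within a $1000 budget.
plans = solve(trips, [lambda t: t["cost"] <= 1000])

# Query 2 reuses the same solver, changing only the input parameters:
# budget of $1500 and at least 4 days.
plans2 = solve(trips, [lambda t: t["cost"] <= 1500, lambda t: t["days"] >= 4])
```

Because the solver itself never changes, repeated queries incur no fresh code generation, which is the source of the cost and latency savings the abstract claims.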
Related papers
- G-CSEA: A Graph-Based Conflict Set Extraction Algorithm for Identifying Infeasibility in Pseudo-Boolean Models [0.5773629510985792]
A common diagnostic approach is to compute Irreducible Infeasible Subsets (IIS). We consider models formulated using pseudo-Boolean constraints with inequality relations over binary variables. Our method constructs an implication graph during constraint propagation and, upon detecting a conflict, traces all contributing constraints across both decision branches.
arXiv Detail & Related papers (2025-09-16T16:09:30Z)
- Efficient LLM Collaboration via Planning [56.081879390960204]
Small and large models take turns acting as planner and executor, exchanging plans in a multi-stage cascade to collaboratively solve tasks. We demonstrate that COPE achieves performance comparable to large proprietary models, while drastically reducing the inference API cost.
arXiv Detail & Related papers (2025-06-13T08:35:50Z)
- "Don't Do That!": Guiding Embodied Systems through Large Language Model-based Constraint Generation [40.61171036032532]
Large language models (LLMs) have spurred interest in robotic navigation that incorporates complex constraints from natural language into the planning problem. In this paper, we propose a constraint generation framework that uses LLMs to translate constraints into Python functions. We show that these LLM-generated functions accurately describe even complex mathematical constraints, and apply them to point cloud representations with traditional search algorithms.
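To make the "constraints as Python functions" idea concrete, here is a hedged sketch of the kind of function an LLM might emit for a natural-language constraint such as "stay at least 1.0 m from the obstacle at (2, 3)". The function name, obstacle position, and point cloud are my own illustrative assumptions, not examples from the paper.

```python
# Illustrative constraint function (not from the paper): a natural-language
# constraint compiled into a checkable Python predicate.
import math

def keep_distance(point, obstacle=(2.0, 3.0), min_dist=1.0):
    """True if `point` stays at least `min_dist` away from `obstacle`."""
    return math.dist(point, obstacle) >= min_dist

# Filter a toy point cloud before handing it to a traditional search algorithm.
cloud = [(0.0, 0.0), (2.5, 3.0), (5.0, 5.0)]
free_space = [p for p in cloud if keep_distance(p)]
```

Once constraints are ordinary predicates, any classical planner can consume them without further LLM calls.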
arXiv Detail & Related papers (2025-06-04T22:47:53Z)
- Syzygy of Thoughts: Improving LLM CoT with the Minimal Free Resolution [59.39066657300045]
Chain-of-Thought (CoT) prompting enhances the reasoning of large language models (LLMs) by decomposing problems into sequential steps. We propose Syzygy of Thoughts (SoT), a novel framework that extends CoT by introducing auxiliary, interrelated reasoning paths. SoT captures deeper logical dependencies, enabling more robust and structured problem-solving.
arXiv Detail & Related papers (2025-04-13T13:35:41Z)
- Fast Controlled Generation from Language Models with Adaptive Weighted Rejection Sampling [90.86991492288487]
Evaluating a constraint on every token can be prohibitively expensive. Locally constrained decoding (LCD) can distort the global distribution over strings, sampling tokens based only on local information. We show that our approach is superior to state-of-the-art baselines.
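The core efficiency idea, checking the constraint only on tokens actually sampled rather than scoring the whole vocabulary up front, can be sketched as below. This is my own simplification, not the paper's adaptive weighted algorithm (which additionally reweights samples to correct the distribution).

```python
# Simplified rejection-sampling sketch (illustrative, not the paper's
# algorithm): the constraint is evaluated lazily, one sampled token at a
# time, instead of over the full vocabulary.
import random

def constrained_sample(probs, allowed, rng):
    """Sample a token index, rejecting tokens that violate the constraint."""
    probs = dict(probs)
    while probs:
        tokens, weights = zip(*probs.items())
        tok = rng.choices(tokens, weights=weights)[0]
        if allowed(tok):          # constraint checked only on this sample
            return tok
        del probs[tok]            # drop the violating token; remaining
                                  # weights renormalize implicitly
    raise ValueError("no token satisfies the constraint")

rng = random.Random(0)
tok = constrained_sample({"a": 0.5, "b": 0.3, "#": 0.2}, str.isalpha, rng)
```

When most probability mass already satisfies the constraint, this touches far fewer tokens than filtering the entire vocabulary at each step.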
arXiv Detail & Related papers (2025-04-07T18:30:18Z)
- Mitigating Cross-Modal Distraction and Ensuring Geometric Feasibility via Affordance-Guided and Self-Consistent MLLMs for Task Planning in Instruction-Following Manipulation [5.903105418868711]
We introduce QuARC (Quantity, Analysis, Relative positioning, Collision), a new benchmark based on a food preparation scenario. We tackle two major limitations of current MLLMs: cross-modal distraction and geometric infeasibility. Our method achieves a 76.7% success rate on the benchmark, significantly outperforming the ViLa baseline.
arXiv Detail & Related papers (2025-03-17T11:01:02Z)
- PlanGEN: A Multi-Agent Framework for Generating Planning and Reasoning Trajectories for Complex Problem Solving [89.60370366013142]
We propose PlanGEN, a model-agnostic and easily scalable agent framework with three key components: constraint, verification, and selection agents. Specifically, our approach proposes constraint-guided iterative verification to enhance the performance of inference-time algorithms.
arXiv Detail & Related papers (2025-02-22T06:21:56Z)
- Interactive and Expressive Code-Augmented Planning with Large Language Models [62.799579304821826]
Large Language Models (LLMs) demonstrate strong abilities in common-sense reasoning and interactive decision-making.
Recent techniques have sought to structure LLM outputs using control flow and other code-adjacent techniques to improve planning performance.
We propose REPL-Plan, an LLM planning approach that is fully code-expressive and dynamic.
arXiv Detail & Related papers (2024-11-21T04:23:17Z)
- Planning Anything with Rigor: General-Purpose Zero-Shot Planning with LLM-based Formalized Programming [13.246017517159043]
Large language models (LLMs) have recently demonstrated strong potential in solving planning problems. We propose LLMFP, a framework that leverages LLMs to capture key information from planning problems and formally formulate and solve them as optimization problems from scratch. We apply LLMFP to 9 planning problems, ranging from multi-constraint decision making to multi-step planning problems, and demonstrate that LLMFP achieves on average 83.7% and 86.8% optimal rate across the 9 tasks for GPT-4o and Claude 3.5 Sonnet.
arXiv Detail & Related papers (2024-10-15T23:20:54Z)
- Unlocking Reasoning Potential in Large Language Models by Scaling Code-form Planning [94.76546523689113]
We introduce CodePlan, a framework that generates and follows code-form plans -- pseudocode that outlines high-level, structured reasoning processes.
CodePlan effectively captures the rich semantics and control flows inherent to sophisticated reasoning tasks.
It achieves a 25.1% relative improvement compared with directly generating responses.
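To illustrate what a "code-form plan" might look like, here is a hedged sketch: pseudocode-like Python that structures the reasoning steps before any answer is generated. The function name, step wording, and example question are my own, not CodePlan's actual plan format.

```python
# Illustrative code-form plan (my own sketch, not CodePlan's format):
# the plan encodes control flow and step ordering, not the final answer.

def plan_word_problem(question):
    """Return an ordered, high-level reasoning plan for a word problem."""
    steps = [
        "extract the known quantities from the question",
        "identify the quantity being asked for",
        "write an equation relating knowns to the unknown",
        "solve the equation and sanity-check the units",
    ]
    return steps

plan = plan_word_problem("A train travels 120 km in 2 hours; what is its speed?")
```

The point is that a structured plan like this is easier for a model to follow consistently than free-form natural-language reasoning.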
arXiv Detail & Related papers (2024-09-19T04:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.