Where to Search: Measure the Prior-Structured Search Space of LLM Agents
- URL: http://arxiv.org/abs/2510.14846v3
- Date: Mon, 03 Nov 2025 10:52:10 GMT
- Title: Where to Search: Measure the Prior-Structured Search Space of LLM Agents
- Authors: Zhuo-Yang Song,
- Abstract summary: This paper proposes a compact formal theory that describes and measures LLM-assisted iterative search guided by domain priors. We represent an agent as a fuzzy relation operator on inputs and outputs to capture feasible transitions. We provide the simplest testable inferences and validate them via two instantiations.
- Score: 0.8249180979158818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The generate-filter-refine iterative paradigm based on large language models (LLMs) has driven progress in reasoning, programming, and program discovery in AI+Science. However, the effectiveness of search depends on where to search, namely, how the domain prior is encoded into an operationally structured hypothesis space. To this end, this paper proposes a compact formal theory that describes and measures LLM-assisted iterative search guided by domain priors. We represent an agent as a fuzzy relation operator on inputs and outputs that captures feasible transitions; the agent is thereby constrained by a fixed safety envelope. To describe multi-step reasoning and search, we weight all reachable paths by a single continuation parameter and sum them to obtain a coverage generating function; this induces a measure of reachability difficulty and provides a geometric interpretation of search on the graph induced by the safety envelope. We further derive the simplest testable inferences and validate them via two instantiations. This theory offers a workable language and operational tools for measuring agents and their search spaces, giving a systematic formal description of iterative search constructed with LLMs.
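The path-weighted sum described in the abstract can be sketched concretely. The following is a minimal illustration under our own assumptions (the function names, the adjacency-matrix encoding of the safety envelope, and the `-log` difficulty proxy are ours, not the paper's exact definitions): weighting every length-`k` walk by `z**k` and summing gives a coverage generating function that collapses to a matrix resolvent.

```python
import numpy as np

def coverage(A: np.ndarray, s: int, t: int, z: float) -> float:
    """Sum of z**len(walk) over all s -> t walks in the envelope graph.

    G(z) = sum_k z**k * (A**k)[s, t] = [(I - z*A)^-1][s, t],
    convergent when z is below 1 / spectral_radius(A).
    """
    n = A.shape[0]
    rho = max(abs(np.linalg.eigvals(A)))
    assert z * rho < 1, "series diverges: pick z below 1/spectral radius"
    G = np.linalg.inv(np.eye(n) - z * A)
    return float(G[s, t])

def difficulty(A: np.ndarray, s: int, t: int, z: float) -> float:
    """One plausible reachability-difficulty proxy: -log of the coverage."""
    return -np.log(coverage(A, s, t, z))

# A 3-state chain 0 -> 1 -> 2: the only 0 -> 2 walk has length 2,
# so its coverage is exactly z**2.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
print(coverage(A, 0, 2, 0.5))  # 0.25
```

Targets that require longer walks through the envelope graph, or that are reachable by fewer walks, receive smaller coverage and hence larger difficulty, which matches the abstract's reading of coverage as a reachability measure.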
Related papers
- Beyond Monolithic Architectures: A Multi-Agent Search and Knowledge Optimization Framework for Agentic Search [56.78490647843876]
Agentic search has emerged as a promising paradigm for complex information seeking by enabling Large Language Models (LLMs) to interleave reasoning with tool use. We propose M-ASK, a framework that explicitly decouples agentic search into two complementary roles: Search Behavior Agents, which plan and execute search actions, and Knowledge Management Agents, which aggregate, filter, and maintain a compact internal context.
arXiv Detail & Related papers (2026-01-08T08:13:27Z) - Advancing Mathematical Research via Human-AI Interactive Theorem Proving [16.40852561664514]
We propose a human-in-the-loop workflow for interactive theorem proving and discovery with LLMs. Human experts retain control over problem formulation and admissible assumptions, while the model searches for proofs or contradictions. We instantiate this workflow in a case study on the connection between manifold optimization and Grover's quantum search algorithm.
arXiv Detail & Related papers (2025-12-10T09:16:27Z) - Unifying Tree Search Algorithm and Reward Design for LLM Reasoning: A Survey [92.71325249013535]
Deliberative tree search is a cornerstone of Large Language Model (LLM) research. This paper introduces a unified framework that deconstructs search algorithms into three core components.
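A decomposition of this kind can be made concrete. The sketch below is hypothetical (the survey's abstract does not name its three components, so the factoring into a selection policy, an expander, and a reward model is our assumption): a generic best-first loop where each piece is a swappable callable.

```python
import heapq

def tree_search(root, expand, reward, is_goal, budget=1000):
    """Best-first search with pluggable components:
    selection (greedy on reward), expansion, and reward scoring."""
    tie = 0                                   # tie-breaker so the heap never compares nodes
    frontier = [(-reward(root), tie, root)]   # max-reward first via negated priority
    while frontier and budget > 0:
        _, _, node = heapq.heappop(frontier)  # selection policy
        if is_goal(node):
            return node
        for child in expand(node):            # expansion component
            tie += 1
            heapq.heappush(frontier, (-reward(child), tie, child))  # reward component
        budget -= 1
    return None

# Toy instance: reach 7 from 0 with +1/+2 moves, reward = closeness to 7.
best = tree_search(0, lambda n: [n + 1, n + 2], lambda n: -abs(7 - n),
                   lambda n: n == 7)
print(best)  # 7
```

Swapping the three callables recovers different search behaviors (e.g. a learned value model as `reward`, or an LLM proposer as `expand`) without touching the loop itself.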
arXiv Detail & Related papers (2025-10-11T03:29:18Z) - Large Language Models in Operations Research: Methods, Applications, and Challenges [9.208082097215314]
Operations research (OR) supports complex system decision-making, with broad applications in transportation, supply chain management, and production scheduling. Traditional approaches that rely on expert-driven modeling and manual parameter tuning often struggle with large-scale, dynamic, and multi-constraint problems. This paper systematically reviews progress in applying large language models (LLMs) to OR, categorizing existing methods into three pathways: automatic modeling, auxiliary optimization, and direct solving.
arXiv Detail & Related papers (2025-09-18T01:52:19Z) - Iterative Self-Incentivization Empowers Large Language Models as Agentic Searchers [74.17516978246152]
Large language models (LLMs) have been widely integrated into information retrieval to advance traditional techniques. We propose EXSEARCH, an agentic search framework in which the LLM learns to retrieve useful information as the reasoning unfolds. Experiments on four knowledge-intensive benchmarks show that EXSEARCH substantially outperforms baselines.
arXiv Detail & Related papers (2025-05-26T15:27:55Z) - Efficient Heuristics Generation for Solving Combinatorial Optimization Problems Using Large Language Models [52.538586230181814]
Recent studies have exploited Large Language Models (LLMs) to autonomously generate heuristics for solving Combinatorial Optimization Problems (COPs). The absence of task-specific knowledge in prompts often leads LLMs to provide unspecific search directions, obstructing the derivation of well-performing heuristics. We propose the Hercules algorithm, which leverages our designed Core Abstraction Prompting (CAP) method to abstract the core components from elite HGs and incorporate them as prior knowledge in prompts.
arXiv Detail & Related papers (2025-05-19T02:20:46Z) - LLM-assisted Vector Similarity Search [0.0]
This paper explores a hybrid approach combining vector similarity search with Large Language Models (LLMs) to enhance search accuracy and relevance. Experiments on structured datasets demonstrate that while vector similarity search alone performs well for straightforward queries, the LLM-assisted approach excels at complex queries involving constraints, negations, or conceptual requirements.
arXiv Detail & Related papers (2024-12-25T08:17:37Z) - Enhancing LLM Reasoning with Reward-guided Tree Search [95.06503095273395]
Building an o1-like reasoning approach is challenging, and researchers have been making various attempts to advance this open area of research. We present a preliminary exploration into enhancing the reasoning abilities of LLMs through reward-guided tree search algorithms.
arXiv Detail & Related papers (2024-11-18T16:15:17Z) - Supervised Chain of Thought [5.389461633686935]
Chain of Thought (CoT) prompting offers a promising approach to solving complex reasoning tasks.
The one-prompt-for-all approach poses significant challenges for models to generate the correct reasoning steps.
We show how task-specific supervision is essential for navigating the prompt space accurately and achieving optimal performance.
arXiv Detail & Related papers (2024-10-18T06:25:27Z) - FLARE: Faithful Logic-Aided Reasoning and Exploration [47.46564769245296]
We introduce a novel approach for traversing the problem space using task decompositions. We use Large Language Models to plan a solution and to soft-formalise the query into facts and predicates using logic programming code. Our method allows us to compute the faithfulness of the reasoning process w.r.t. the generated code and to analyse the steps of the multi-hop search without relying on external solvers.
arXiv Detail & Related papers (2024-10-14T19:39:11Z) - Uncertainty-Guided Likelihood Tree Search [37.25859935454988]
Tree search is a fundamental tool for planning, as many sequential decision-making problems can be framed as searching over tree-structured spaces. We propose an uncertainty-guided tree search algorithm for settings where the reward function is a log-likelihood function of the paths.
arXiv Detail & Related papers (2024-07-04T14:08:50Z) - Factorization of Multi-Agent Sampling-Based Motion Planning [72.42734061131569]
Modern robotics often involves multiple embodied agents operating within a shared environment.
Standard sampling-based algorithms can be used to search for solutions in the robots' joint space.
We integrate the concept of factorization into sampling-based algorithms, which requires only minimal modifications to existing methods.
We present a general implementation of a factorized SBA, derive an analytical gain in terms of sample complexity for PRM*, and showcase empirical results for RRG.
arXiv Detail & Related papers (2023-04-01T15:50:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.