A Formalism for Optimal Search with Dynamic Heuristics
- URL: http://arxiv.org/abs/2504.21131v1
- Date: Tue, 29 Apr 2025 19:25:31 GMT
- Title: A Formalism for Optimal Search with Dynamic Heuristics
- Authors: Remo Christen, Florian Pommerening, Clemens Büchner, Malte Helmert
- Abstract summary: We formalize the idea of dynamic heuristics and use them in a generic algorithm framework. We study a particular instantiation that models $\mathrm{A}^*$ with dynamic heuristics and show general optimality results.
- Score: 7.807210884802377
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While most heuristics studied in heuristic search depend only on the state, some accumulate information during search and thus also depend on the search history. Various existing approaches use such dynamic heuristics in $\mathrm{A}^*$-like algorithms and appeal to classic results for $\mathrm{A}^*$ to show optimality. However, doing so ignores the complexities of searching with a mutable heuristic. In this paper we formalize the idea of dynamic heuristics and use them in a generic algorithm framework. We study a particular instantiation that models $\mathrm{A}^*$ with dynamic heuristics and show general optimality results. Finally we show how existing approaches from classical planning can be viewed as special cases of this instantiation, making it possible to directly apply our optimality results.
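To make the core idea concrete, below is a minimal sketch (in Python, not taken from the paper's formalism) of an A*-style best-first search whose heuristic may change as the search accumulates information. The `DynamicHeuristic` interface, its toy update rule, and the re-evaluation-on-pop strategy are illustrative assumptions of this sketch, not the paper's algorithm framework.

```python
# Minimal sketch: A*-style search with a "dynamic" heuristic, i.e. one whose
# estimates may depend on the search history. Illustrative assumptions only.
import heapq
import itertools


class DynamicHeuristic:
    """A heuristic that may depend on the search history, not just the state."""

    def __init__(self, base_h):
        self.base_h = base_h      # static, state-based estimate
        self.history = {}         # search history: state -> best g-value seen

    def notify_expansion(self, state, g):
        # Accumulate information during search; a real dynamic heuristic
        # (e.g. one that learns dead ends) would refine its estimates here.
        self.history[state] = min(g, self.history.get(state, float("inf")))

    def value(self, state):
        # May return different values for the same state at different times.
        return self.base_h(state)


def astar_dynamic(start, goal, successors, heuristic):
    """Best-first search in the style of A*, re-evaluating h when a node is popped."""
    g = {start: 0}
    parent = {start: None}
    counter = itertools.count()   # tie-breaker so heap entries stay comparable
    open_list = [(heuristic.value(start), next(counter), start)]
    while open_list:
        f, _, state = heapq.heappop(open_list)
        current_f = g[state] + heuristic.value(state)
        if current_f > f:
            # The heuristic changed since this entry was queued; re-queue it.
            heapq.heappush(open_list, (current_f, next(counter), state))
            continue
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path)), g[goal]
        heuristic.notify_expansion(state, g[state])
        for succ, cost in successors(state):
            new_g = g[state] + cost
            if new_g < g.get(succ, float("inf")):
                g[succ] = new_g
                parent[succ] = state
                heapq.heappush(open_list,
                               (new_g + heuristic.value(succ), next(counter), succ))
    return None, float("inf")


if __name__ == "__main__":
    # Tiny example graph with unit and non-unit edge costs.
    edges = {"s": [("a", 1), ("b", 1)], "a": [("t", 1)], "b": [("t", 3)], "t": []}
    h = DynamicHeuristic(lambda s: 0 if s == "t" else 1)
    print(astar_dynamic("s", "t", lambda s: edges[s], h))  # (['s', 'a', 't'], 2)
```

Re-checking a node's priority when it is popped is one simple way to cope with a mutable heuristic; the paper's contribution is to formalize searching with such heuristics and to establish when this kind of search still returns optimal solutions.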
Related papers
- Modeling Local Search Metaheuristics Using Markov Decision Processes [0.0]
We introduce a theoretical framework based on Markov Decision Processes (MDP) for analyzing local search metaheuristics.
This framework not only helps in providing convergence results for individual algorithms, but also provides an explicit characterization of the exploration-exploitation tradeoff.
arXiv Detail & Related papers (2024-07-29T11:28:30Z) - Optimize Planning Heuristics to Rank, not to Estimate Cost-to-Goal [0.9217021281095907]
In imitation learning for planning, parameters of heuristic functions are optimized against a set of solved problem instances.
The work then proposes a family of loss functions based on ranking, tailored to a given variant of the forward search algorithm.
The experimental comparison on a diverse set of problems unequivocally supports the derived theory.
arXiv Detail & Related papers (2023-10-30T11:39:49Z) - A Multi-Heuristic Search-based Motion Planning for Automated Parking [0.0]
In unstructured environments like parking lots or construction sites, it is challenging to achieve real-time planning.
We adopt a Multi-Heuristic Search approach, which enables the use of multiple heuristic functions and their individual advantages.
The Multi-Heuristic A* algorithm is benchmarked against a very popular search-based algorithm, Hybrid A*.
arXiv Detail & Related papers (2023-07-15T17:33:06Z) - Optimistic Active Exploration of Dynamical Systems [52.91573056896633]
We develop an algorithm for active exploration called OPAX.
We show how OPAX can be reduced to an optimal control problem that can be solved at each episode.
Our experiments show that OPAX is not only theoretically sound but also performs well for zero-shot planning on novel downstream tasks.
arXiv Detail & Related papers (2023-06-21T16:26:59Z) - Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z) - Run Time Analysis for Random Local Search on Generalized Majority Functions [1.0965065178451106]
We study how one of their properties -- neutrality -- influences the run time of random local search.
We provide a theorem, applicable to many optimization algorithms, that links the run time of MAJORITY with its symmetric version HASMAJORITY.
arXiv Detail & Related papers (2022-04-27T08:40:33Z) - Searching for More Efficient Dynamic Programs [61.79535031840558]
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
arXiv Detail & Related papers (2021-09-14T20:52:55Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Learning Heuristic Selection with Dynamic Algorithm Configuration [44.91083687014879]
We show that dynamic algorithm configuration can be used for dynamic heuristic selection that takes into account the internal search dynamics of a planning system.
We show that this approach generalizes over existing approaches and that it can exponentially improve the performance of the search; a minimal sketch of such per-step heuristic selection appears below.
arXiv Detail & Related papers (2020-06-15T09:35:07Z)
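As a companion to the last entry, here is a minimal, illustrative sketch of per-step heuristic selection in greedy best-first search. The cited work learns the selection policy via dynamic algorithm configuration (reinforcement learning); the hand-crafted progress signal below merely stands in for such a learned policy and is an assumption of this sketch.

```python
# Illustrative sketch only: greedy best-first search that re-decides, at every
# expansion, which of several heuristics to trust, based on how quickly each
# heuristic's estimates have recently been dropping.
import heapq
import itertools


def gbfs_with_heuristic_selection(start, is_goal, successors, heuristics):
    counter = itertools.count()
    progress = [0.0] * len(heuristics)       # crude per-heuristic progress signal
    last_h = [h(start) for h in heuristics]
    open_list = [(min(last_h), next(counter), start)]
    seen = {start}
    while open_list:
        _, _, state = heapq.heappop(open_list)
        if is_goal(state):
            return state
        # Select the heuristic whose estimates have been improving fastest.
        hid = max(range(len(heuristics)), key=lambda i: progress[i])
        h = heuristics[hid]
        new_h = h(state)
        progress[hid] = 0.9 * progress[hid] + 0.1 * (last_h[hid] - new_h)
        last_h[hid] = new_h
        for succ, _cost in successors(state):
            if succ not in seen:
                seen.add(succ)
                heapq.heappush(open_list, (h(succ), next(counter), succ))
    return None
```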