Learning heuristics for A*
- URL: http://arxiv.org/abs/2204.08938v1
- Date: Mon, 11 Apr 2022 10:13:53 GMT
- Title: Learning heuristics for A*
- Authors: Danilo Numeroso, Davide Bacciu, Petar Veličković
- Abstract summary: We combine recent advancements in Neural Algorithmic Reasoning to learn efficient heuristic functions for path finding problems on graphs.
We exploit multi-task learning to learn Dijkstra's algorithm and a consistent function for the A* search algorithm.
Results show that running A* over the learnt heuristic values can greatly speed up target node searching compared to Dijkstra.
- Score: 9.701208207491879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Path finding in graphs is one of the most studied classes of problems in
computer science. In this context, search algorithms are often extended with
heuristics for a more efficient search of target nodes. In this work we combine
recent advancements in Neural Algorithmic Reasoning to learn efficient
heuristic functions for path finding problems on graphs. At training time, we
exploit multi-task learning to jointly learn Dijkstra's algorithm and a
consistent heuristic function for the A* search algorithm. At inference time,
we plug our learnt heuristics into the A* algorithm. Results show that running
A* over the learnt heuristics value can greatly speed up target node searching
compared to Dijkstra, while still finding minimal-cost paths.
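The pipeline the abstract describes — run A* with a heuristic plugged in at inference time — can be sketched as follows. This is a minimal, illustrative implementation of standard A*, not the paper's code; the `heuristic` callable stands in for the learnt, consistent heuristic function.

```python
import heapq

def a_star(graph, source, target, heuristic):
    """A* over a weighted graph given as {node: [(neighbor, cost), ...]}.

    `heuristic(node, target)` is any consistent estimate of the remaining
    cost to `target`; with a consistent heuristic the returned path is
    minimal-cost, matching the guarantee claimed in the abstract.
    """
    g = {source: 0.0}            # best known cost from source to each node
    parent = {source: None}
    frontier = [(heuristic(source, target), source)]  # ordered by f = g + h
    visited = set()
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == target:
            # reconstruct the minimal-cost path by walking parents back
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1], g[target]
        if node in visited:
            continue
        visited.add(node)
        for nbr, cost in graph.get(node, []):
            new_g = g[node] + cost
            if new_g < g.get(nbr, float("inf")):
                g[nbr] = new_g
                parent[nbr] = node
                heapq.heappush(frontier, (new_g + heuristic(nbr, target), nbr))
    return None, float("inf")
```

With `heuristic = lambda n, t: 0.0` this degrades to Dijkstra's algorithm; the paper's contribution is supplying a learnt heuristic in its place so the search visits fewer nodes.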
Related papers
- LiteSearch: Efficacious Tree Search for LLM [70.29796112457662]
This study introduces a novel guided tree search algorithm with dynamic node selection and node-level exploration budget.
Experiments conducted on the GSM8K and TabMWP datasets demonstrate that our approach enjoys significantly lower computational costs compared to baseline methods.
arXiv Detail & Related papers (2024-06-29T05:14:04Z)
- A Cover Time Study of a non-Markovian Algorithm [18.23492383352517]
We show that the negative feedback strategy (a count-based exploration method) outperforms the naive random walk search.
It also achieves smaller cover times for special but important graphs, including clique graphs, tree graphs, etc.
arXiv Detail & Related papers (2023-06-08T03:09:49Z)
- A Theoretical Analysis Of Nearest Neighbor Search On Approximate Near Neighbor Graph [51.880164098926166]
Graph-based algorithms have demonstrated state-of-the-art performance in the nearest neighbor search (NN-Search) problem.
There exists a practice-to-theory gap in the graph-based NN-Search algorithms.
We present theoretical guarantees of solving NN-Search via greedy search on ANN-Graph for low dimensional and dense vectors.
arXiv Detail & Related papers (2023-03-10T21:18:34Z)
- Learning Graph Search Heuristics [48.83557172525969]
We present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data.
Our function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph sizes, and can be easily incorporated in an algorithm such as A* at test time.
Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 58.5% on average.
arXiv Detail & Related papers (2022-12-07T22:28:00Z)
- The CLRS Algorithmic Reasoning Benchmark [28.789225199559834]
Learning representations of algorithms is an emerging area of machine learning, seeking to bridge concepts from neural networks with classical algorithms.
We propose the CLRS Algorithmic Reasoning Benchmark, covering classical algorithms from the Introduction to Algorithms textbook.
Our benchmark spans a variety of algorithmic reasoning procedures, including sorting, searching, dynamic programming, graph algorithms, string algorithms and geometric algorithms.
arXiv Detail & Related papers (2022-05-31T09:56:44Z)
- How to transfer algorithmic reasoning knowledge to learn new algorithms? [23.335939830754747]
We investigate how we can use algorithms for which we have access to the execution trace to learn to solve similar tasks for which we do not.
We create a dataset including 9 algorithms and 3 different graph types.
We validate this empirically and show how multi-task learning can instead be used to achieve the transfer of algorithmic reasoning knowledge.
arXiv Detail & Related papers (2021-10-26T22:14:47Z)
- A* Search Without Expansions: Learning Heuristic Functions with Deep Q-Networks [70.0197695666261]
We introduce Q* search, a search algorithm that uses deep Q-networks to guide search in order to take advantage of the sum of the transition costs and values of the children of a node.
We use Q* search to solve the Rubik's cube when formulated with a large action space that includes 1872 meta-actions.
Q* search is up to 129 times faster and generates up to 1288 times fewer nodes than A* search.
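The key idea in the summary above — score every child from the parent's Q-values instead of expanding each child — can be sketched as follows. This is a hypothetical, simplified interface, not the paper's implementation: `q_values(state)` plays the role of one deep Q-network forward pass returning an estimated cost-to-go per action, and transition costs are fixed at 1.

```python
import heapq

def q_star_search(start, target, actions, step, q_values):
    """Best-first search in the style of Q* search (illustrative sketch).

    `q_values(state)` returns {action: estimated cost-to-go of the child},
    so all children are scored with one evaluation of the parent.
    A child's priority is path_cost + transition_cost + Q(state, action).
    """
    frontier = [(0.0, 0.0, start)]  # (priority, path_cost, state)
    seen = set()
    while frontier:
        _, g, state = heapq.heappop(frontier)
        if state == target:
            return g
        if state in seen:
            continue
        seen.add(state)
        q = q_values(state)  # one "forward pass" scores every child at once
        for a in actions:
            child = step(state, a)
            heapq.heappush(frontier, (g + 1.0 + q[a], g + 1.0, child))
    return float("inf")
```

The saving over plain A* is that no child state needs to be generated and evaluated individually just to compute its priority, which matters when the action space is large (1872 meta-actions in the Rubik's cube formulation above).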
arXiv Detail & Related papers (2021-02-08T20:36:41Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
arXiv Detail & Related papers (2020-08-29T14:58:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.