A Cover Time Study of a non-Markovian Algorithm
- URL: http://arxiv.org/abs/2306.04902v2
- Date: Fri, 11 Aug 2023 05:27:05 GMT
- Title: A Cover Time Study of a non-Markovian Algorithm
- Authors: Guanhua Fang, Gennady Samorodnitsky, Zhiqiang Xu
- Abstract summary: We show that the negative feedback strategy (a count-based exploration method) outperforms naive random walk search.
It also achieves smaller cover times for special but important graphs, including clique graphs, tree graphs, etc.
- Score: 18.23492383352517
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Given a traversal algorithm, cover time is the expected number of steps
needed to visit all nodes in a given graph. A smaller cover time means higher
exploration efficiency of the traversal algorithm. Although random walk algorithms
have been studied extensively in the existing literature, there has been no
cover time result for any non-Markovian method. In this work, we take a
theoretical perspective and show that the negative feedback strategy (a
count-based exploration method) is better than the naive random walk search. In
particular, the former strategy can locally improve the search efficiency for
an arbitrary graph. It also achieves smaller cover times for special but
important graphs, including clique graphs, tree graphs, etc. Moreover, we make
connections between our results and the reinforcement learning literature to give
new insights into why the classical UCB and MCTS algorithms are so useful. Various
numerical results corroborate our theoretical findings.
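For intuition, here is a minimal simulation sketch (not the paper's code) comparing the empirical cover time of a simple random walk with a count-based, negative-feedback style rule that always moves to a least-visited neighbor, ties broken uniformly at random. The clique construction, tie-breaking rule, and trial count are illustrative assumptions.

```python
import random

def cover_time(neighbors, start, policy, rng):
    """Simulate one traversal until every node has been visited.

    neighbors: dict mapping node -> list of adjacent nodes
    policy:    'rw' -> simple random walk (uniform random neighbor)
               'nf' -> negative-feedback style rule: move to a
                       least-visited neighbor (ties broken at random)
    Returns the number of steps taken to cover the graph.
    """
    visits = {v: 0 for v in neighbors}
    current = start
    visits[current] = 1
    steps = 0
    while any(c == 0 for c in visits.values()):
        nbrs = neighbors[current]
        if policy == "rw":
            current = rng.choice(nbrs)
        else:  # 'nf'
            least = min(visits[u] for u in nbrs)
            current = rng.choice([u for u in nbrs if visits[u] == least])
        visits[current] += 1
        steps += 1
    return steps

def clique(n):
    """Complete graph on n nodes (one of the graph families in the abstract)."""
    return {v: [u for u in range(n) if u != v] for v in range(n)}

if __name__ == "__main__":
    rng = random.Random(0)
    graph = clique(10)
    trials = 2000  # number of independent traversals to average over
    for policy in ("rw", "nf"):
        avg = sum(cover_time(graph, 0, policy, rng) for _ in range(trials)) / trials
        print(f"{policy}: mean cover time over {trials} runs = {avg:.2f}")
```

Under this tie-breaking rule, the count-based walk on the clique moves to an unvisited node at every step until coverage, so it finishes in n-1 steps, while the random walk's mean cover time grows like the coupon-collector bound; this toy comparison is consistent with the abstract's claim of smaller cover times on clique graphs.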
Related papers
- SLOPE: Search with Learned Optimal Pruning-based Expansion [2.0618817976970103]
We present Search with Learned Optimal Pruning-based Expansion (SLOPE), which learns the distance of a node from a possible optimal path and in turn reduces the size of the open list.
This ensures that the search explores only the region close to optimal paths while lowering memory and computational costs.
arXiv Detail & Related papers (2024-06-07T13:42:15Z) - Learning-Based Algorithms for Graph Searching Problems [6.923787372512553]
We consider the problem of graph searching with prediction recently introduced by Banerjee et al.
In this problem, an agent, starting at some node $r$, has to traverse a (potentially unknown) graph $G$ to find a hidden goal node $g$.
We design algorithms for this search task on unknown graphs.
arXiv Detail & Related papers (2024-02-27T18:12:58Z) - A Theoretical Analysis Of Nearest Neighbor Search On Approximate Near
Neighbor Graph [51.880164098926166]
Graph-based algorithms have demonstrated state-of-the-art performance in the nearest neighbor search (NN-Search) problem.
There exists a practice-to-theory gap in graph-based NN-Search algorithms.
We present theoretical guarantees for solving NN-Search via greedy search on an ANN-Graph for low-dimensional and dense vectors.
arXiv Detail & Related papers (2023-03-10T21:18:34Z) - Causal Bandits without Graph Learning [28.021500949026766]
We develop an efficient algorithm for finding the parent node of the reward node using atomic interventions.
We extend our algorithm to the case when the reward node has multiple parents.
arXiv Detail & Related papers (2023-01-26T20:27:14Z) - Learning Graph Search Heuristics [48.83557172525969]
We present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data.
Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph size, and can be easily incorporated in an algorithm such as A* at test time (a generic A* sketch with a pluggable heuristic appears after this list).
Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 58.5% on average.
arXiv Detail & Related papers (2022-12-07T22:28:00Z) - Learning heuristics for A* [9.701208207491879]
We combine recent advancements in Neural Algorithmic Reasoning to learn efficient heuristic functions for path-finding problems on graphs.
We exploit multi-task learning to learn Dijkstra's algorithm and a consistent heuristic function for the A* search algorithm.
Results show that running A* over the learnt heuristic values can greatly speed up target node searching compared to Dijkstra.
arXiv Detail & Related papers (2022-04-11T10:13:53Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
arXiv Detail & Related papers (2020-08-29T14:58:26Z) - Online Dense Subgraph Discovery via Blurred-Graph Feedback [87.9850024070244]
We introduce a novel learning problem for dense subgraph discovery.
We first propose a polynomial-time algorithm that obtains a nearly-optimal solution with high probability.
We then design a more scalable algorithm with a theoretical guarantee.
arXiv Detail & Related papers (2020-06-24T11:37:33Z) - Adversarial Online Learning with Changing Action Sets: Efficient
Algorithms with Approximate Regret Bounds [48.312484940846]
We revisit the problem of online learning with sleeping experts/bandits.
In each time step, only a subset of the actions are available for the algorithm to choose from.
We give an algorithm that provides a no-approximate-regret guarantee for the general sleeping expert/bandit problems.
arXiv Detail & Related papers (2020-03-07T02:13:21Z)
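Several entries above (e.g. PHIL and "Learning heuristics for A*") plug a learned heuristic into A*. As a point of reference only, here is a generic A* sketch with a pluggable heuristic callable; the grid graph, unit edge costs, and the Manhattan/zero heuristics are illustrative stand-ins for a learned model, not any paper's actual setup.

```python
import heapq
from typing import Callable, Dict, Hashable, Iterable, Tuple

Node = Hashable

def a_star(start: Node,
           goal: Node,
           neighbors: Callable[[Node], Iterable[Tuple[Node, float]]],
           heuristic: Callable[[Node, Node], float]) -> Tuple[float, int]:
    """Plain A*; returns (path cost, number of expanded nodes).

    heuristic(n, goal) can be any estimate of remaining cost; a learned
    heuristic would simply replace this callable at test time.
    """
    open_heap = [(heuristic(start, goal), 0.0, start)]
    best_g: Dict[Node, float] = {start: 0.0}
    expanded = 0
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry
        expanded += 1
        if node == goal:
            return g, expanded
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + heuristic(nxt, goal), ng, nxt))
    return float("inf"), expanded

# Illustrative use on a small grid: unit-cost 4-neighbour moves, with
# Manhattan distance standing in for a learned distance estimate.
def grid_neighbors(node):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 20 and 0 <= ny < 20:
            yield (nx, ny), 1.0

manhattan = lambda n, g: abs(n[0] - g[0]) + abs(n[1] - g[1])
zero = lambda n, g: 0.0  # reduces A* to Dijkstra, for comparison

if __name__ == "__main__":
    for name, h in (("dijkstra (h=0)", zero), ("manhattan h", manhattan)):
        cost, exp = a_star((0, 0), (19, 19), grid_neighbors, h)
        print(f"{name}: cost={cost}, expanded={exp}")
```

Swapping `manhattan` for a learned distance estimator is the integration point the PHIL entry refers to; with the zero heuristic the same routine behaves like Dijkstra, which mirrors the comparison made in the "Learning heuristics for A*" entry.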
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all summaries) and is not responsible for any consequences of its use.