Incremental Approximate Single-Source Shortest Paths with Predictions
- URL: http://arxiv.org/abs/2502.08125v1
- Date: Wed, 12 Feb 2025 05:06:23 GMT
- Title: Incremental Approximate Single-Source Shortest Paths with Predictions
- Authors: Samuel McCauley, Benjamin Moseley, Aidin Niaparast, Helia Niaparast, Shikha Singh
- Abstract summary: We study the fundamental data structure problem of maintaining approximate shortest paths in incremental graphs.
We show these techniques immediately extend to the all-pairs shortest-path setting as well.
Our algorithms are consistent (performing nearly as fast as the offline algorithm) when predictions are nearly perfect.
- Score: 10.749658550545238
- License:
- Abstract: The algorithms-with-predictions framework has been used extensively to develop online algorithms with improved beyond-worst-case competitive ratios. Recently, there has been growing interest in leveraging predictions for designing data structures with improved beyond-worst-case running times. In this paper, we study the fundamental data structure problem of maintaining approximate shortest paths in incremental graphs in the algorithms-with-predictions model. Given a sequence $\sigma$ of edges that are inserted one at a time, the goal is to maintain approximate shortest paths from the source to each vertex in the graph at each time step. Before any edges arrive, the data structure is given a prediction of the online edge sequence $\hat{\sigma}$ which is used to "warm start" its state. As our main result, we design a learned algorithm that maintains $(1+\epsilon)$-approximate single-source shortest paths, which runs in $\tilde{O}(m \eta \log W/\epsilon)$ time, where $W$ is the weight of the heaviest edge and $\eta$ is the prediction error. We show these techniques immediately extend to the all-pairs shortest-path setting as well. Our algorithms are consistent (performing nearly as fast as the offline algorithm) when predictions are nearly perfect, have a smooth degradation in performance with respect to the prediction error and, in the worst case, match the best offline algorithm up to logarithmic factors. As a building block, we study the offline incremental approximate single-source shortest-paths problem. In this problem, the edge sequence $\sigma$ is known a priori and the goal is to efficiently return the length of the shortest paths in the intermediate graph $G_t$ consisting of the first $t$ edges, for all $t$. Note that the offline incremental problem is defined in the worst-case setting (without predictions) and is of independent interest.
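To make the warm-start idea concrete, here is a minimal Python sketch. It is not the paper's algorithm: the class name `PredictedIncrementalSSSP`, the per-prefix answer cache, and the recompute-from-scratch fallback are illustrative assumptions. Offline, it precomputes the answer for every prefix of the predicted sequence $\hat{\sigma}$ (the role the paper's offline incremental building block plays, but done here naively); online, it serves cached answers while the prediction holds and falls back to exact recomputation once it diverges, whereas the paper repairs the warm-started state in time scaling with the prediction error $\eta$.

```python
import heapq
from collections import defaultdict

def dijkstra(edges, source):
    """Exact single-source distances in the graph formed by `edges`."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

class PredictedIncrementalSSSP:
    """Toy warm-started incremental SSSP: exact rather than
    (1+eps)-approximate, with none of the paper's running-time
    guarantees. Illustration only."""

    def __init__(self, source, predicted_edges):
        self.source = source
        self.predicted = list(predicted_edges)
        self.edges, self.t, self.on_track = [], 0, True
        # Warm start: answers for every prefix of the predicted sequence.
        # (The paper computes these efficiently via its offline
        # incremental algorithm; rerunning Dijkstra per prefix is a toy.)
        self.cache = [dijkstra(self.predicted[:i + 1], source)
                      for i in range(len(self.predicted))]
        self.dist = {source: 0.0}

    def insert_edge(self, u, v, w):
        self.edges.append((u, v, w))
        self.on_track = (self.on_track and self.t < len(self.predicted)
                         and self.predicted[self.t] == (u, v, w))
        if self.on_track:
            # Prediction correct so far: serve the precomputed answer.
            self.dist = self.cache[self.t]
        else:
            # Prediction diverged: worst-case fallback. The paper instead
            # repairs the warm-started state in time scaling with eta.
            self.dist = dijkstra(self.edges, self.source)
        self.t += 1
```

When the actual sequence equals the prediction, every query is served from the cache (the "consistency" regime); once it diverges, this toy pays full recomputation, which is exactly the worst-case behavior the paper's error-sensitive repair avoids.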
Related papers
- The Power of Graph Sparsification in the Continual Release Model [48.65946341463292]
We make novel use of sparsification techniques from non-private streaming and static graph algorithms to achieve new results in the sublinear-space, continual release setting.
We conclude with additive error lower bounds for edge-privacy in the fully dynamic setting.
arXiv Detail & Related papers (2024-07-24T20:15:07Z)
- Competitive strategies to use "warm start" algorithms with predictions [12.970501425097645]
We consider the problem of learning and using predictions for warm start algorithms.
In this setting, an algorithm is given an instance of a problem, and a prediction of the solution.
We give competitive guarantees against stronger benchmarks that consider a set of $k$ predictions.
arXiv Detail & Related papers (2024-05-06T17:38:20Z)
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic Shortest Path [80.60592344361073]
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound on the cost function or an upper bound on the expected length of the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z)
- Non-stationary Online Convex Optimization with Arbitrary Delays [50.46856739179311]
This paper investigates delayed online convex optimization (OCO) in non-stationary environments.
We first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each delayed gradient according to their arrival order.
We develop an improved algorithm, which reduces the dynamic regret bound achieved by DOGD to $O(\sqrt{\bar{d}T(P_T+1)})$.
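The arrival-order rule that DOGD uses is simple enough to sketch. Below is a toy Python version (the function name `dogd`, the fixed step size, and the projection radius are assumptions added for illustration; the paper's analysis tunes step sizes to obtain its dynamic regret bounds).

```python
import numpy as np

def dogd(loss_grads, delays, dim, eta=0.1, radius=1.0):
    """Toy delayed online gradient descent: play x_t each round, then
    take one projected gradient step per delayed gradient that arrives,
    processing gradients in arrival order."""
    T = len(loss_grads)
    # arrivals[t] = rounds whose gradients become available at round t.
    arrivals = [[] for _ in range(T)]
    for s, d in enumerate(delays):
        arrivals[min(s + d, T - 1)].append(s)  # clamp very late arrivals
    x, plays = np.zeros(dim), []
    for t in range(T):
        plays.append(x.copy())            # decision for round t
        for s in arrivals[t]:             # gradients arriving this round
            g = loss_grads[s](plays[s])   # gradient of round-s loss at x_s
            x = x - eta * g
            n = np.linalg.norm(x)
            if n > radius:                # project back onto feasible ball
                x *= radius / n
    return plays

# Tiny usage: quadratic losses |x - c_t|^2 with random integer delays.
rng = np.random.default_rng(0)
targets = rng.standard_normal((50, 3))
grads = [(lambda x, c=c: 2 * (x - c)) for c in targets]
plays = dogd(grads, delays=rng.integers(1, 5, size=50), dim=3)
```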
arXiv Detail & Related papers (2023-05-20T07:54:07Z)
- Streaming Algorithms for Learning with Experts: Deterministic Versus Robust [62.98860182111096]
In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or time steps).
The goal is to make a prediction with the minimum cost, specifically compared to the best expert in the set.
We show a space lower bound of $\widetilde{\Omega}\left(\frac{nM}{RT}\right)$ for any deterministic algorithm that achieves regret $R$ when the best expert makes $M$ mistakes.
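For readers unfamiliar with the setting, a minimal (non-streaming) baseline is the classic weighted-majority forecaster sketched below; it keeps one weight per expert, i.e. $\Theta(n)$ state, which is exactly the memory cost the streaming results above are concerned with. This is a textbook sketch, not an algorithm from the paper.

```python
import numpy as np

def weighted_majority(expert_preds, outcomes, eta=0.5):
    """Classic weighted-majority forecaster for binary prediction with
    expert advice: predict by weighted vote, then multiplicatively
    shrink the weights of the experts that erred.
    expert_preds: (n, T) array of 0/1 expert predictions.
    outcomes:     length-T array of 0/1 true outcomes."""
    n, T = expert_preds.shape
    w = np.ones(n)
    mistakes = 0
    for t in range(T):
        vote_for_1 = w @ expert_preds[:, t]   # total weight predicting 1
        pred = int(vote_for_1 >= w.sum() / 2)
        mistakes += int(pred != outcomes[t])
        wrong = expert_preds[:, t] != outcomes[t]
        w[wrong] *= (1 - eta)                 # penalize erring experts
    return mistakes
```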
arXiv Detail & Related papers (2023-03-03T04:39:53Z)
- A Generalization of the Shortest Path Problem to Graphs with Multiple Edge-Cost Estimates [13.046825574678579]
The shortest path problem in graphs is a cornerstone of AI theory and applications.
We present a framework for weighted directed graphs, where edge weight can be computed (estimated) multiple times.
We then present two complete algorithms for the generalized problem, and empirically demonstrate their efficacy.
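One simple two-estimate instantiation of this setting is a "plan with cheap lower bounds, verify with expensive exact weights" loop, sketched below. The names `cheap` and `exact`, and the assumption that the cheap estimates are lower bounds, are illustrative additions; the paper's estimate model and algorithms are more general.

```python
import heapq

def dijkstra(adj, w, source, target):
    """Dijkstra under weight map w; returns (cost, path) or (inf, None)."""
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            path, x = [target], target
            while x != source:
                x = prev[x]
                path.append(x)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v in adj.get(u, ()):
            nd = d + w[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return float("inf"), None

def lazy_shortest_path(adj, cheap, exact, source, target):
    """Plan with cheap lower-bound estimates; pay for the expensive
    exact weight only on edges of candidate paths; replan until the
    returned path is fully evaluated. Because `cheap` lower-bounds
    `exact`, a fully evaluated shortest path under w is truly optimal."""
    w = dict(cheap)            # current belief: lower bounds everywhere
    evaluated = set()
    while True:
        cost, path = dijkstra(adj, w, source, target)
        if path is None:
            return float("inf"), None
        pending = [e for e in zip(path, path[1:]) if e not in evaluated]
        if not pending:
            return cost, path  # certified optimal
        for e in pending:      # refine estimates along the candidate path
            w[e] = exact(e)
            evaluated.add(e)
```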
arXiv Detail & Related papers (2022-08-22T22:07:27Z)
- Sketching Algorithms and Lower Bounds for Ridge Regression [65.0720777731368]
We give a sketching-based iterative algorithm that computes $(1+\varepsilon)$-approximate solutions for the ridge regression problem.
We also show that this algorithm can be used to give faster algorithms for kernel ridge regression.
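As a point of reference, here is the classic one-shot "sketch-and-solve" approach to ridge regression in Python. This is a generic illustration with a Gaussian sketch, not the paper's algorithm, which is iterative and comes with $(1+\varepsilon)$ guarantees that this toy does not.

```python
import numpy as np

def sketch_ridge(A, b, lam, sketch_rows, rng=None):
    """One-shot sketch-and-solve for ridge regression
        min_x ||Ax - b||^2 + lam * ||x||^2.
    A Gaussian sketch S compresses the n rows of (A, b) down to
    sketch_rows rows, and we solve the small ridge problem exactly."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
    SA, Sb = S @ A, S @ b
    # Normal equations of the sketched ridge problem.
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

# Tiny usage check against the exact solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((5000, 20))
b = rng.standard_normal(5000)
x_exact = np.linalg.solve(A.T @ A + 1.0 * np.eye(20), A.T @ b)
x_sketch = sketch_ridge(A, b, lam=1.0, sketch_rows=400, rng=1)
print(np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```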
arXiv Detail & Related papers (2022-04-13T22:18:47Z)
- Quantum Algorithm for the Longest Trail Problem [0.0]
We present a quantum algorithm for the Longest Trail Problem.
The running time of our algorithm is $O^*(1.728^m)$.
arXiv Detail & Related papers (2021-12-27T09:42:19Z)
- Sublinear Time and Space Algorithms for Correlation Clustering via Sparse-Dense Decompositions [9.29659220237395]
We present a new approach for solving (minimum disagreement) correlation clustering.
We obtain sublinear algorithms with highly efficient time and space complexity.
The main ingredient of our approach is a novel connection to sparse-dense graph decompositions.
arXiv Detail & Related papers (2021-09-29T16:25:02Z)
- Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation [58.24280149662003]
We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset.
We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees.
arXiv Detail & Related papers (2021-06-16T03:34:14Z)
- Regret Bounds for Stochastic Shortest Path Problems with Linear Function Approximation [29.549633678730054]
We propose two algorithms for episodic shortest path problems with linear function approximation.
The first is computationally expensive but provably obtains $\tilde{O}(\sqrt{B_\star^3 d^3 K / c_{\min}})$ regret, where $B_\star$ is an upper bound on the optimal cost-to-go function.
The second is computationally efficient in practice, and we conjecture that it obtains the same regret bound.
arXiv Detail & Related papers (2021-05-04T16:05:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.