Scheduling with Speed Predictions
- URL: http://arxiv.org/abs/2205.01247v1
- Date: Mon, 2 May 2022 23:39:41 GMT
- Title: Scheduling with Speed Predictions
- Authors: Eric Balkanski, Tingting Ou, Clifford Stein, Hao-Ting Wei
- Abstract summary: We study the speed-robust scheduling problem where the speeds of the machines, instead of the processing times of the jobs, are unknown.
Our main result is an algorithm that achieves a $\min\{\eta^2(1+\epsilon)^2(1+\alpha), (1+\epsilon)(2 + 2/\alpha)\}$ approximation.
When the predictions are accurate, this approximation improves over the previously best known approximation of $2-1/m$ for speed-robust scheduling.
- Score: 10.217102227210669
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithms with predictions is a recent framework that has been used to
overcome pessimistic worst-case bounds in incomplete information settings. In
the context of scheduling, very recent work has leveraged machine-learned
predictions to design algorithms that achieve improved approximation ratios in
settings where the processing times of the jobs are initially unknown. In this
paper, we study the speed-robust scheduling problem where the speeds of the
machines, instead of the processing times of the jobs, are unknown and augment
this problem with predictions.
Our main result is an algorithm that achieves a
$\min\{\eta^2(1+\epsilon)^2(1+\alpha), (1+\epsilon)(2 + 2/\alpha)\}$
approximation, for any constants $\alpha, \epsilon \in (0,1)$, where $\eta \geq
1$ is the prediction error. When the predictions are accurate, this
approximation improves over the previously best known approximation of $2-1/m$
for speed-robust scheduling, where $m$ is the number of machines, while
simultaneously maintaining a worst-case approximation of $(1+\epsilon)(2 +
2/\alpha)$ even when the predictions are wrong. In addition, we obtain improved
approximations for the special cases of equal and infinitesimal job sizes, and
we complement our algorithmic results with lower bounds. Finally, we
empirically evaluate our algorithm against existing algorithms for speed-robust
scheduling.
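The abstract above commits a schedule using predicted machine speeds and measures quality against the true speeds. As a toy illustration of this setting (not the paper's algorithm, which combines a prediction-based schedule with a robust fallback), the sketch below greedily assigns jobs to machines by predicted speed using the classic LPT rule, then evaluates the resulting makespan under the true speeds and computes a multiplicative prediction error $\eta$:

```python
# Toy sketch (not the paper's algorithm): commit to a schedule using
# *predicted* machine speeds, then evaluate it under the *true* speeds.
def lpt_schedule(jobs, speeds):
    """Assign each job (largest first) to the machine that finishes it earliest."""
    loads = [0.0] * len(speeds)              # total processing volume per machine
    assignment = [[] for _ in speeds]
    for job in sorted(jobs, reverse=True):
        i = min(range(len(speeds)), key=lambda m: (loads[m] + job) / speeds[m])
        loads[i] += job
        assignment[i].append(job)
    return assignment

def makespan(assignment, speeds):
    return max(sum(batch) / s for batch, s in zip(assignment, speeds))

jobs = [5, 3, 3, 2, 2, 1]
predicted = [2.0, 1.0]                       # speeds we are told in advance
true = [1.8, 1.2]                            # speeds actually realized
eta = max(max(p / t, t / p) for p, t in zip(predicted, true))  # prediction error
plan = lpt_schedule(jobs, predicted)         # schedule committed using predictions
print(makespan(plan, true), eta)             # makespan under true speeds, error
```

When the predictions are accurate ($\eta$ close to 1), the committed schedule stays close to what scheduling with known speeds would give; the paper's contribution is bounding the degradation as a function of $\eta$ while keeping a worst-case guarantee.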
Related papers
- Competitive strategies to use "warm start" algorithms with predictions [12.970501425097645]
We consider the problem of learning and using predictions for warm-start algorithms.
In this setting, an algorithm is given an instance of a problem, and a prediction of the solution.
We give competitive guarantees against stronger benchmarks that consider a set of $k$ predictions.
arXiv Detail & Related papers (2024-05-06T17:38:20Z) - Fast Optimal Locally Private Mean Estimation via Random Projections [58.603579803010796]
We study the problem of locally private mean estimation of high-dimensional vectors in the Euclidean ball.
We propose a new algorithmic framework, ProjUnit, for private mean estimation.
Our framework is deceptively simple: each randomizer projects its input to a random low-dimensional subspace, normalizes the result, and then runs an optimal algorithm.
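The projection step summarized above can be sketched directly; this is a hedged illustration of only that step, not the full ProjUnit mechanism, which would then apply an optimal low-dimensional private randomizer to the projected vector:

```python
import math
import random

# Sketch of the projection step only: project the input unit vector onto a
# random k-dimensional subspace (shared via a common seed), then renormalize
# the result back onto the unit sphere.
def project_and_normalize(x, k, seed):
    rng = random.Random(seed)                # shared seed => shared projection
    d = len(x)
    proj = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]
    y = [sum(row[j] * x[j] for j in range(d)) for row in proj]
    norm = math.sqrt(sum(v * v for v in y))
    return [v / norm for v in y]

x = [1.0, 0.0, 0.0, 0.0]                     # a unit vector in d = 4 dimensions
y = project_and_normalize(x, k=2, seed=0)    # a unit vector in k = 2 dimensions
```

Working in the low-dimensional subspace is what makes the downstream randomizer fast; the shared seed lets the server map the private low-dimensional estimate back to the original space.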
arXiv Detail & Related papers (2023-06-07T14:07:35Z) - Sorting and Hypergraph Orientation under Uncertainty with Predictions [0.45880283710344055]
We study learning-augmented algorithms for sorting and hypergraph orientation under uncertainty.
Our algorithms provide improved performance guarantees for accurate predictions while maintaining worst-case guarantees that are best possible without predictions.
arXiv Detail & Related papers (2023-05-16T07:52:08Z) - Mixing predictions for online metric algorithms [34.849039387367455]
We design algorithms that combine predictions and are competitive against such dynamic combinations.
Our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time.
An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
arXiv Detail & Related papers (2023-04-04T13:18:00Z) - Streaming Algorithms for Learning with Experts: Deterministic Versus
Robust [62.98860182111096]
In the online learning with experts problem, an algorithm must make a prediction about an outcome on each of $T$ days (or times).
The goal is to make a prediction with the minimum cost, specifically compared to the best expert in the set.
We show a space lower bound of $\widetilde{\Omega}\left(\frac{nM}{RT}\right)$ for any deterministic algorithm that achieves regret $R$ when the best expert makes $M$ mistakes.
arXiv Detail & Related papers (2023-03-03T04:39:53Z) - Improved Rate of First Order Algorithms for Entropic Optimal Transport [2.1485350418225244]
This paper improves the state-of-the-art rate of a first-order algorithm for solving entropy regularized optimal transport.
We propose an accelerated primal-dual mirror descent algorithm with variance reduction.
Our algorithm may inspire more research to develop accelerated primal-dual algorithms that have rate $\widetilde{O}(n^2/\epsilon)$ for solving OT.
arXiv Detail & Related papers (2023-01-23T19:13:25Z) - Learning-Augmented Maximum Flow [3.0712335337791288]
We propose a framework for speeding up maximum flow computation by using predictions.
We present an algorithm that, given an $m$-edge flow network and a predicted flow, computes a maximum flow in $O(m\eta)$ time.
arXiv Detail & Related papers (2022-07-26T14:02:41Z) - Asynchronous Stochastic Optimization Robust to Arbitrary Delays [54.61797739710608]
We consider optimization with delayed gradients where, at each time step $t$, the algorithm makes an update using a stale gradient from step $t - d_t$ for an arbitrary delay $d_t$.
Our experiments demonstrate the efficacy and robustness of our algorithm in cases where the delay distribution is skewed or heavy-tailed.
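The delayed-gradient setting described above can be sketched in a few lines; this is a toy illustration of the update model, not the paper's algorithm:

```python
# Toy sketch of the delayed-gradient setting: at step t the update applies a
# gradient evaluated at the stale iterate from step t - d_t, for an arbitrary
# (possibly skewed or heavy-tailed) delay d_t.
def delayed_sgd(grad, x0, delays, lr=0.1):
    xs = [x0]                                # history of iterates
    for t, d in enumerate(delays):
        stale = xs[max(0, t - d)]            # iterate from d_t steps ago
        xs.append(xs[-1] - lr * grad(stale))
    return xs[-1]

# Minimize f(x) = x^2 (gradient 2x) under a skewed delay sequence.
x_final = delayed_sgd(lambda x: 2.0 * x, x0=1.0, delays=[0, 0, 3, 1, 5, 2, 4, 0])
```

Even with stale gradients the iterate still moves toward the minimizer at 0; the paper's contribution is a convergence guarantee that depends on the total delay rather than the maximum delay.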
arXiv Detail & Related papers (2021-06-22T15:50:45Z) - Higher-order Derivatives of Weighted Finite-state Machines [68.43084108204741]
This work examines the computation of higher-order derivatives with respect to the normalization constant for weighted finite-state machines.
We provide a general algorithm for evaluating derivatives of all orders, which has not been previously described in the literature.
Our algorithm is significantly faster than prior algorithms.
arXiv Detail & Related papers (2021-06-01T19:51:55Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves for any k an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.