Learning-Augmented Maximum Flow
- URL: http://arxiv.org/abs/2207.12911v1
- Date: Tue, 26 Jul 2022 14:02:41 GMT
- Title: Learning-Augmented Maximum Flow
- Authors: Adam Polak, Maksym Zub
- Abstract summary: We propose a framework for speeding up maximum flow computation by using predictions.
We present an algorithm that, given an $m$-edge flow network and a predicted flow, computes a maximum flow in $O(m\eta)$ time.
- Score: 3.0712335337791288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a framework for speeding up maximum flow computation by using
predictions. A prediction is a flow, i.e., an assignment of non-negative flow
values to edges, which satisfies the flow conservation property, but does not
necessarily respect the edge capacities of the actual instance (since these
were unknown at the time of learning). We present an algorithm that, given an
$m$-edge flow network and a predicted flow, computes a maximum flow in
$O(m\eta)$ time, where $\eta$ is the $\ell_1$ error of the prediction, i.e.,
the sum over the edges of the absolute difference between the predicted and
optimal flow values. Moreover, we prove that, given an oracle access to a
distribution over flow networks, it is possible to efficiently PAC-learn a
prediction minimizing the expected $\ell_1$ error over that distribution. Our
results fit into the recent line of research on learning-augmented algorithms,
which aims to improve over worst-case bounds of classical algorithms by using
predictions, e.g., machine-learned from previous similar instances. So far, the
main focus in this area was on improving competitive ratios for online
problems. Following Dinitz et al. (NeurIPS 2021), our results are among the
first to improve the running time of an offline problem.
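To make the framework concrete, below is a minimal Python sketch of the warm-start recipe the abstract describes: clip the predicted flow to the actual capacities, repair the result into a feasible flow, and then augment to optimality. The clip-then-peel repair step and all names here are illustrative assumptions, not the paper's exact $O(m\eta)$ procedure; the final stage is plain Edmonds-Karp.

```python
from collections import deque

def l1_error(pred, opt):
    # eta = sum over edges of |predicted - optimal| flow value (the paper's error measure)
    return sum(abs(p - o) for p, o in zip(pred, opt))

def max_flow_with_prediction(n, edges, cap, pred, s, t):
    """Warm-start max flow from a predicted flow (illustrative sketch).

    n: number of vertices 0..n-1; edges: list of directed (u, v) pairs, no self-loops;
    cap[i] / pred[i]: capacity / predicted flow of edges[i].
    Returns (flow value, per-edge flow).
    """
    m = len(edges)
    out = [[] for _ in range(n)]
    for i, (u, _) in enumerate(edges):
        out[u].append(i)

    # Step 1: clip the prediction to the real capacities (the prediction
    # satisfies conservation by assumption but may overshoot capacities).
    g = [min(pred[i], cap[i]) for i in range(m)]

    # Step 2: peel s-t paths off the clipped prediction to obtain a feasible
    # starting flow f; leftover cycles and stranded excess are dropped.
    f = [0] * m

    def peel_path():
        parent, stack = {s: None}, [s]
        while stack:
            u = stack.pop()
            if u == t:
                path = []
                while parent[u] is not None:
                    path.append(parent[u])
                    u = edges[parent[u]][0]
                return path
            for i in out[u]:
                v = edges[i][1]
                if g[i] > 0 and v not in parent:
                    parent[v] = i
                    stack.append(v)
        return None

    while (path := peel_path()) is not None:
        delta = min(g[i] for i in path)
        for i in path:
            g[i] -= delta
            f[i] += delta

    # Step 3: finish with BFS augmenting paths (Edmonds-Karp) on the residual
    # graph of the warm-start flow f.
    graph = [[] for _ in range(n)]  # residual arcs: [head, residual, reverse index]
    fwd = []                        # where each edge's forward arc lives
    for i, (u, v) in enumerate(edges):
        fwd.append((u, len(graph[u])))
        graph[u].append([v, cap[i] - f[i], len(graph[v])])
        graph[v].append([u, f[i], len(graph[u]) - 1])

    value = sum(f[i] for i in out[s])  # peeled paths never re-enter s
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for ai, (v, r, _) in enumerate(graph[u]):
                if r > 0 and v not in parent:
                    parent[v] = (u, ai)
                    q.append(v)
        if t not in parent:
            break
        delta, x = float("inf"), t
        while parent[x] is not None:   # bottleneck of the augmenting path
            u, ai = parent[x]
            delta = min(delta, graph[u][ai][1])
            x = u
        x = t
        while parent[x] is not None:   # push the bottleneck along the path
            u, ai = parent[x]
            arc = graph[u][ai]
            arc[1] -= delta
            graph[arc[0]][arc[2]][1] += delta
            x = u
        value += delta

    return value, [cap[i] - graph[u][pos][1] for i, (u, pos) in enumerate(fwd)]

# Tiny example: a diamond network where the prediction overshoots one capacity.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
cap, pred = [2, 2, 1, 2], [2, 1, 2, 1]  # conserved at nodes 1 and 2, but pred[2] > cap[2]
print(max_flow_with_prediction(4, edges, cap, pred, s=0, t=3))  # -> (3, [1, 2, 1, 2])
print(l1_error(pred, [1, 2, 1, 2]))    # eta = 4 against this optimal flow
```

The better the prediction, the more of an optimal flow survives the repair step and the fewer augmentations remain; the paper makes this quantitative with its $O(m\eta)$ bound, where $\eta$ is the $\ell_1$ distance between the predicted and optimal flow values.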
Related papers
- Competitive strategies to use "warm start" algorithms with predictions [12.970501425097645]
We consider the problem of learning and using predictions for "warm start" algorithms.
In this setting, an algorithm is given an instance of a problem, and a prediction of the solution.
We give competitive guarantees against stronger benchmarks that consider a set of $k$ predictions.
arXiv Detail & Related papers (2024-05-06T17:38:20Z)
- Constrained Online Two-stage Stochastic Optimization: Algorithm with (and without) Predictions [19.537289123577022]
We consider an online two-stage optimization with long-term constraints over a finite horizon of $T$ periods.
We develop online algorithms for the online two-stage problem from adversarial learning algorithms.
arXiv Detail & Related papers (2024-01-02T07:46:33Z)
- Towards Understanding and Improving GFlowNet Training [71.85707593318297]
We introduce an efficient evaluation strategy to compare the learned sampling distribution to the target reward distribution.
We propose prioritized replay training of high-reward $x$, relative edge flow policy parametrization, and a novel guided trajectory balance objective.
arXiv Detail & Related papers (2023-05-11T22:50:41Z)
- Scheduling with Speed Predictions [10.217102227210669]
We study the speed-robust scheduling problem where the speeds of the machines, instead of the processing times of the jobs, are unknown.
Our main result is an algorithm that achieves a $\min\{\eta^2 (1+\epsilon)^2, (1+\epsilon)(2 + 2/\alpha)\}$ approximation.
When the predictions are accurate, this approximation improves over the previously best known approximation of $2-1/m$ for speed-robust scheduling.
arXiv Detail & Related papers (2022-05-02T23:39:41Z)
- Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximate valid population coverage and near-optimal efficiency within class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that, for any $k$, our algorithm achieves an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z)
- Online Multivalid Learning: Means, Moments, and Prediction Intervals [16.75129633574157]
We present a technique for providing contextual predictions that are "multivalid" in various senses.
The resulting estimates correctly predict various statistics of the labels $y$, not just marginally.
Because our algorithms handle adversarially chosen examples, they can equally well be used to predict statistics of the residuals of arbitrary point prediction methods.
arXiv Detail & Related papers (2021-01-05T19:08:11Z)
- Leveraging Predictions in Smoothed Online Convex Optimization via Gradient-based Algorithms [18.64335888217192]
We consider online convex optimization with time-varying stage costs and additional switching costs.
Since the switching costs introduce coupling across all stages, long-term predictions tend to suffer from lower quality.
We introduce a gradient-based online algorithm, Receding Horizon Inexact Gradient (RHIG), and analyze its performance by dynamic regrets.
arXiv Detail & Related papers (2020-11-25T06:25:51Z)
- Consistent Structured Prediction with Max-Min Margin Markov Networks [84.60515484036239]
Max-margin methods for binary classification have been extended to the structured prediction setting under the name of max-margin Markov networks ($M^3N$).
We overcome the consistency limitations of $M^3N$ by defining the learning problem in terms of a "max-min" margin formulation, naming the resulting method max-min margin Markov networks ($M^4N$).
Experiments on multi-class classification, ordinal regression, sequence prediction and ranking demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-02T10:48:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including any of the information) and is not responsible for any consequences of its use.