Learning-Augmented Online Bipartite Matching in the Random Arrival Order Model
- URL: http://arxiv.org/abs/2511.23388v1
- Date: Fri, 28 Nov 2025 17:31:11 GMT
- Title: Learning-Augmented Online Bipartite Matching in the Random Arrival Order Model
- Authors: Kunanon Burathep, Thomas Erlebach, William K. Moses,
- Abstract summary: We study the online unweighted bipartite matching problem in the random arrival order model. Our learning-augmented algorithm achieves $(1-o(1))$-consistency and $(β-o(1))$-robustness.
- Score: 0.688204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the online unweighted bipartite matching problem in the random arrival order model, with $n$ offline and $n$ online vertices, in the learning-augmented setting: The algorithm is provided with untrusted predictions of the types (neighborhoods) of the online vertices. We build upon the work of Choo et al. (ICML 2024, pp. 8762-8781) who proposed an approach that uses a prefix of the arrival sequence as a sample to determine whether the predictions are close to the true arrival sequence and then either follows the predictions or uses a known baseline algorithm that ignores the predictions and is $β$-competitive. Their analysis is limited to the case that the optimal matching has size $n$, i.e., every online vertex can be matched. We generalize their approach and analysis by removing any assumptions on the size of the optimal matching while only requiring that the size of the predicted matching is at least $αn$ for any constant $0 < α\le 1$. Our learning-augmented algorithm achieves $(1-o(1))$-consistency and $(β-o(1))$-robustness. Additionally, we show that the competitive ratio degrades smoothly between consistency and robustness with increasing prediction error.
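The sample-then-commit approach described in the abstract can be illustrated with a short sketch. This is a simplified illustration, not the paper's algorithm: the error measure, the 10% sample fraction, and the 0.2 threshold are hypothetical choices, and the paper's test of whether the predictions are close to the true arrival sequence is more refined.

```python
def prediction_error_on_sample(sample, predicted_types):
    """Fraction of sampled online-vertex types not covered by the
    predicted multiset (a simplified distance measure)."""
    remaining = list(predicted_types)
    mismatches = 0
    for t in sample:
        if t in remaining:
            remaining.remove(t)
        else:
            mismatches += 1
    return mismatches / len(sample)

def choose_mode(arrivals, predicted_types, sample_frac=0.1, threshold=0.2):
    """Observe a prefix of the (random-order) arrival sequence, then
    commit either to following the predictions or to the β-competitive
    baseline that ignores them."""
    k = max(1, int(sample_frac * len(arrivals)))
    err = prediction_error_on_sample(arrivals[:k], predicted_types)
    return "predictions" if err <= threshold else "baseline"
```

Each online vertex's type is represented here as a frozenset of its offline neighbors; the commit decision is made once, after the sampled prefix.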
Related papers
- Multicalibration yields better matchings [18.479215073073693]
Given an imperfect predictor, a suboptimal decision rule may compensate for the induced error and thus outperform the standard optimal rule. We show how to construct a specific multicalibrated predictor with the following property: picking the best matching based on its output is competitive with the best decision rule in $\mathcal{C}$ applied to the original predictor.
arXiv Detail & Related papers (2025-11-14T15:45:07Z) - Provable Scaling Laws for the Test-Time Compute of Large Language Models [84.00141420901038]
We propose two algorithms that enjoy provable scaling laws for the test-time compute of large language models. One is a two-stage knockout-style algorithm, where each candidate is evaluated by its average win rate against multiple opponents. The other is a two-stage league-style algorithm, where each candidate is likewise evaluated by its average win rate against multiple opponents.
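The knockout-style selection can be sketched as follows. This is an illustration of the general shape, not the paper's exact algorithm: `win_prob`, the pairing scheme, and the number of sampled opponents are all assumptions.

```python
import random

def knockout_best_response(candidates, win_prob, opponents=4, rng=None):
    """Knockout-style selection sketch: repeatedly pair up surviving
    candidate responses and advance, from each pair, the one with the
    higher average win rate against up to `opponents` sampled rivals.
    `win_prob(a, b)` is an assumed pairwise comparator returning the
    probability that response `a` beats response `b`."""
    rng = rng or random.Random(0)
    pool = list(candidates)
    while len(pool) > 1:
        rng.shuffle(pool)
        nxt = []
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            rivals = rng.sample(pool, min(opponents, len(pool)))
            def avg_win(c):
                return sum(win_prob(c, r) for r in rivals) / len(rivals)
            nxt.append(a if avg_win(a) >= avg_win(b) else b)
        if len(pool) % 2:  # odd one out gets a bye
            nxt.append(pool[-1])
        pool = nxt
    return pool[0]
```

The league-style variant would instead score every candidate against sampled opponents in one stage and keep the top scorer, rather than eliminating pairwise.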
arXiv Detail & Related papers (2024-11-29T05:29:47Z) - Competitive strategies to use "warm start" algorithms with predictions [12.970501425097645]
We consider the problem of learning and using predictions for warm-start algorithms.
In this setting, an algorithm is given an instance of a problem, and a prediction of the solution.
We give competitive guarantees against stronger benchmarks that consider a set of $k$ predictions.
arXiv Detail & Related papers (2024-05-06T17:38:20Z) - Combinatorial Stochastic-Greedy Bandit [79.1700188160944]
We propose a novel Stochastic-Greedy Bandit (SGB) algorithm for multi-armed bandit problems when no extra information other than the joint reward of the selected set of $n$ arms at each time $t \in [T]$ is observed.
SGB adopts an optimized-explore-then-commit approach and is specifically designed for scenarios with a large set of base arms.
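An explore-then-commit scheme of the kind described above can be sketched as follows. This is a generic illustration, not SGB itself: the crude per-arm credit assignment and the 30% exploration budget are assumptions, and SGB's stochastic-greedy commit phase is more careful.

```python
import random

def etc_bandit(n_arms, k, reward, horizon, explore_frac=0.3, rng=None):
    """Explore-then-commit sketch for the full-bandit combinatorial
    setting: only the joint reward of the played size-k set is
    observed each round."""
    rng = rng or random.Random(0)
    arms = list(range(n_arms))
    sums = [0.0] * n_arms
    counts = [0] * n_arms
    t_explore = int(explore_frac * horizon)
    for _ in range(t_explore):
        s = rng.sample(arms, k)   # play a random size-k set
        r = reward(s)             # only joint feedback is available
        for a in s:               # credit every member equally (crude)
            sums[a] += r
            counts[a] += 1
    means = [sums[a] / counts[a] if counts[a] else 0.0 for a in arms]
    # Commit: play the empirically best size-k set for the rest.
    best = sorted(arms, key=lambda a: means[a], reverse=True)[:k]
    total = sum(reward(best) for _ in range(horizon - t_explore))
    return best, total
```

The commit phase plays one fixed set, which is what makes the approach attractive when the base-arm set is large: no per-round optimization is needed after exploration.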
arXiv Detail & Related papers (2023-12-13T11:08:25Z) - Mixing predictions for online metric algorithms [34.849039387367455]
We design algorithms that combine predictions and are competitive against such dynamic combinations.
Our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time.
An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
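Bandit-like access to predictors, querying only one per round, can be illustrated with a standard EXP3-flavored scheme. This is a generic sketch and not the paper's algorithm; the learning rate and the importance-weighted update are textbook choices, assumed here for illustration.

```python
import math
import random

def mix_predictors(costs, eta=0.3, rng=None):
    """EXP3-flavored sketch of bandit-style predictor mixing: each
    round, sample one predictor from the current weights, query only
    that predictor, pay its cost in [0, 1], and reweight it with an
    importance-weighted loss. `costs[t][i]` is predictor i's cost at
    round t."""
    k = len(costs[0])
    rng = rng or random.Random(0)
    w = [1.0] * k
    total = 0.0
    for round_costs in costs:
        s = sum(w)
        probs = [wi / s for wi in w]
        i = rng.choices(range(k), weights=probs)[0]  # query one predictor
        c = round_costs[i]
        total += c
        w[i] *= math.exp(-eta * c / probs[i])  # importance-weighted update
    return total, w
```

With one consistently good predictor, the sampling distribution concentrates on it quickly, so the cost of occasionally querying bad predictors stays bounded.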
arXiv Detail & Related papers (2023-04-04T13:18:00Z) - Differentially-Private Hierarchical Clustering with Provable Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: any $\epsilon$-DP algorithm must exhibit $\Omega(|V|^2/\epsilon)$-additive error for an input dataset.
We propose a private $(1+o(1))$-approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z) - Learning-Augmented Algorithms for Online TSP on the Line [4.636538620253008]
We study the online Traveling Salesman Problem (TSP) on the line augmented with machine-learned predictions.
In the classical problem, there is a stream of requests released over time along the real line.
We distinguish between the open variant and the closed one, in which we additionally require the algorithm to return to the origin after serving all requests.
arXiv Detail & Related papers (2022-06-01T17:47:26Z) - A gradient estimator via L1-randomization for online zero-order
optimization with two point feedback [93.57603470949266]
We present a novel gradient estimator based on two function evaluations and randomization.
We consider two types of assumptions on the noise of the zero-order oracle: canceling noise and adversarial noise.
We provide an anytime and completely data-driven algorithm, which is adaptive to all parameters of the problem.
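A two-point zero-order gradient estimator with an ℓ1-randomized direction has the following general shape. This is a sketch in the spirit of the title; the scaling constant and estimator details are illustrative, not the paper's exact construction.

```python
import math
import random

def l1_sphere_sample(d, rng):
    """Uniform direction on the ℓ1 unit sphere: normalize i.i.d.
    exponential magnitudes by their ℓ1 norm and attach random signs."""
    z = [rng.expovariate(1.0) * rng.choice((-1.0, 1.0)) for _ in range(d)]
    s = sum(abs(v) for v in z)
    return [v / s for v in z]

def two_point_grad(f, x, h, rng=None):
    """Two-point zero-order gradient estimate with ℓ1 randomization
    (illustrative form): ĝ = (d / 2h) (f(x + hζ) - f(x - hζ)) sign(ζ),
    with ζ uniform on the ℓ1 sphere. Uses two function evaluations
    per estimate, as in the two-point feedback model."""
    rng = rng or random.Random(0)
    d = len(x)
    z = l1_sphere_sample(d, rng)
    xp = [xi + h * zi for xi, zi in zip(x, z)]
    xm = [xi - h * zi for xi, zi in zip(x, z)]
    scale = d * (f(xp) - f(xm)) / (2 * h)
    return [scale * math.copysign(1.0, zi) for zi in z]
```

For a linear function this estimator is unbiased, which is easy to verify by averaging many independent estimates.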
arXiv Detail & Related papers (2022-05-27T11:23:57Z) - Online Optimization with Untrusted Predictions [7.895232155155041]
We study the problem of online optimization, where a decision maker must choose points in a general metric space to minimize the sum of per-round, non-convex hitting costs and the costs of switching between rounds.
We propose a novel algorithm, Adaptive Online Switching (AOS), and prove that, for any desired $\delta > 0$, it is $(1+\delta)$-competitive if predictions are perfect, and remains $2^{\tilde{\mathcal{O}}(1/(\alpha\delta))}$-competitive even when predictions are inaccurate.
arXiv Detail & Related papers (2022-02-07T21:08:02Z) - Deep Probabilistic Graph Matching [72.6690550634166]
We propose a deep learning-based graph matching framework that works for the original QAP without compromising on the matching constraints.
The proposed method is evaluated on three widely used benchmarks (Pascal VOC, Willow Object and SPair-71k) and outperforms all previous state-of-the-art methods on all benchmarks.
arXiv Detail & Related papers (2022-01-05T13:37:27Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves, for any $k$, an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Ranking a set of objects: a graph based least-square approach [70.7866286425868]
We consider the problem of ranking $N$ objects starting from a set of noisy pairwise comparisons provided by a crowd of equally reliable workers.
We propose a class of non-adaptive ranking algorithms that rely on a least-squares intrinsic optimization criterion for the estimation of qualities.
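The least-squares criterion on a comparison graph can be made concrete with a small sketch. Here each comparison `(i, j, y)` is taken to assert quality_i − quality_j ≈ y; minimizing the squared residuals yields the normal equations L q = b with L the comparison-graph Laplacian. Pinning q_0 = 0 (qualities are only defined up to an additive shift) is an assumption for illustration.

```python
def ls_rank(n, comparisons):
    """Least-squares ranking sketch: build the comparison-graph
    Laplacian system and solve it with q_0 pinned to 0, then recenter
    the recovered qualities to mean zero."""
    L = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for i, j, y in comparisons:
        L[i][i] += 1; L[j][j] += 1
        L[i][j] -= 1; L[j][i] -= 1
        b[i] += y; b[j] -= y
    # Reduced (n-1)x(n-1) augmented system with q_0 pinned to 0.
    A = [row[1:] + [b[r]] for r, row in enumerate(L)][1:]
    m = n - 1
    for c in range(m):  # forward elimination with partial pivoting
        p = max(range(c, m), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * v for a, v in zip(A[r], A[c])]
    q = [0.0] * m
    for c in reversed(range(m)):  # back substitution
        q[c] = (A[c][m] - sum(A[c][j] * q[j] for j in range(c + 1, m))) / A[c][c]
    full = [0.0] + q
    mean = sum(full) / n
    return [v - mean for v in full]
```

With consistent comparisons the true (centered) qualities are recovered exactly; with noisy ones, the least-squares solution averages the noise over the graph.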
arXiv Detail & Related papers (2020-02-26T16:19:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.