Optimal Multiple Stopping Rule for Warm-Starting Sequential Selection
- URL: http://arxiv.org/abs/2002.05160v1
- Date: Wed, 12 Feb 2020 14:04:43 GMT
- Title: Optimal Multiple Stopping Rule for Warm-Starting Sequential Selection
- Authors: Mathilde Fekom, Nicolas Vayatis, Argyris Kalogeratos
- Abstract summary: We present the Warm-starting Dynamic Thresholding algorithm, developed using dynamic programming, for a variant of the standard online selection problem.
The problem allows job positions to be either free or already occupied at the beginning of the process.
Throughout the selection process, the decision maker interviews the new candidates one after the other and reveals a quality score for each of them.
- Score: 9.625436987364909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present the Warm-starting Dynamic Thresholding algorithm,
developed using dynamic programming, for a variant of the standard online
selection problem. The problem allows job positions to be either free or
already occupied at the beginning of the process. Throughout the selection
process, the decision maker interviews the new candidates one after the
other and reveals a quality score for each of them. Based on that
information, she can (re)assign each job at most once, taking immediate and
irrevocable decisions. We relax the hard requirement of dynamic programming
algorithms to know perfectly the distribution from which candidate scores
are drawn, by presenting extensions for the partial- and no-information
cases, in which the decision maker learns the underlying score distribution
sequentially while interviewing candidates.
Related papers
- Inferring from Logits: Exploring Best Practices for Decoding-Free Generative Candidate Selection [37.54564513506548]
Generative Language Models rely on autoregressive decoding to produce the output sequence token by token.
We introduce an evaluation of a comprehensive collection of decoding-free candidate selection approaches across a diverse set of tasks.
arXiv Detail & Related papers (2025-01-28T23:21:28Z)
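A minimal sketch of one decoding-free variant for the entry above: each fixed candidate answer is scored by the mean next-token logit its tokens receive, with no autoregressive generation. The toy vocabulary, random logits, and token-id mapping are illustrative stand-ins, not the paper's setup.

```python
import numpy as np

def pick_candidate(next_token_logits, candidate_token_ids):
    """Decoding-free selection: score each candidate by the mean logit
    its tokens receive at the next-token position, then take the argmax."""
    scores = {cand: float(np.mean(next_token_logits[ids]))
              for cand, ids in candidate_token_ids.items()}
    return max(scores, key=scores.get), scores

# Toy usage: 10-token vocabulary, random "model" logits.
rng = np.random.default_rng(0)
logits = rng.normal(size=10)
best, scores = pick_candidate(logits, {"yes": [1, 4], "no": [2], "maybe": [3, 7]})
print(best, scores)
```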
- Vector Quantization Prompting for Continual Learning [23.26682439914273]
Continual learning requires overcoming catastrophic forgetting when training a single model on a sequence of tasks.
Recent top-performing approaches are prompt-based methods that utilize a set of learnable parameters to encode task knowledge.
We propose VQ-Prompt, a prompt-based continual learning method that incorporates Vector Quantization into end-to-end training of a set of discrete prompts.
arXiv Detail & Related papers (2024-10-27T13:43:53Z)
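A minimal numpy sketch of the vector-quantization step named in the VQ-Prompt entry above: continuous prompt vectors are snapped to their nearest codebook entries. The shapes and the straight-through remark are assumptions of this illustration; the paper's full training recipe involves more than this lookup.

```python
import numpy as np

def quantize_prompts(z, codebook):
    """Snap each continuous prompt vector in z (n, d) to its nearest
    codebook entry (K, d) under L2 distance. The lookup itself is
    non-differentiable; end-to-end training typically uses a
    straight-through estimator, z_q = z + stop_gradient(z_q - z),
    so task gradients can flow back to z."""
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(0)
z_q, idx = quantize_prompts(rng.normal(size=(4, 8)), rng.normal(size=(16, 8)))
print(idx)
```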
- Frugal Algorithm Selection [1.079960007119637]
There is no single algorithm that works well for all instances of a problem, and training an algorithm selector is costly because it requires running the candidate algorithms on many training instances.
In this work, we explore reducing this cost by choosing a subset of the training instances on which to train.
We approach this problem in three ways: using active learning to decide based on prediction uncertainty, augmenting the algorithm predictors with a timeout predictor, and collecting training data using a progressively increasing timeout.
arXiv Detail & Related papers (2024-05-17T19:23:30Z)
- Computing Voting Rules with Elicited Incomplete Votes [11.550634521005968]
We study voting rules that are computable by querying voters about $t < m$ candidates.
For scoring rules that are computable with limited-sized queries, we give parameterized upper and lower bounds on the number of such queries.
arXiv Detail & Related papers (2024-02-16T22:17:01Z)
- Improving Screening Processes via Calibrated Subset Selection [35.952153033163576]
We develop a distribution-free screening algorithm called Calibrated Subset Selection (CSS).
CSS finds near-optimal shortlists of candidates that contain a desired number of qualified candidates in expectation.
Experiments on US Census survey data validate our theoretical results and show that the shortlists provided by our algorithm are superior to those provided by several competitive baselines.
arXiv Detail & Related papers (2022-02-02T17:15:44Z)
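A simplified plug-in sketch of the calibration idea behind CSS, under assumptions not stated in the entry above: candidates carry scores from some screening model, a labeled calibration set is available, and we pick the most selective score threshold whose empirically estimated expected number of qualified candidates in a fresh pool meets the target k. The actual CSS procedure comes with distribution-free guarantees that this naive version lacks.

```python
import numpy as np

def calibrate_threshold(cal_scores, cal_labels, pool_size, k):
    """Choose a score threshold on held-out labeled data so that a fresh
    pool of `pool_size` candidates screened at that threshold contains at
    least k qualified candidates in (empirical) expectation."""
    order = np.argsort(-cal_scores)                 # best scores first
    scores, labels = cal_scores[order], cal_labels[order]
    # pool_size * P(score >= scores[i] and qualified), plug-in estimate
    expected_qualified = pool_size * np.cumsum(labels) / len(labels)
    i = int(np.searchsorted(expected_qualified, k))  # first index meeting k
    return scores[min(i, len(scores) - 1)]

rng = np.random.default_rng(0)
s = rng.uniform(size=500)
y = (rng.uniform(size=500) < s).astype(float)   # higher score, likelier qualified
tau = calibrate_threshold(s, y, pool_size=100, k=10)
print(f"shortlist everyone scoring >= {tau:.3f}")
```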
- Optimal Data Selection: An Online Distributed View [61.31708750038692]
We develop algorithms for the online and distributed versions of the data selection problem.
In learning tasks on ImageNet and MNIST, we show that our selection methods outperform random selection by 5-20%.
arXiv Detail & Related papers (2022-01-25T18:56:16Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
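A bare-bones sketch of a bandit with censored runtime feedback, as described in the entry above. The run_with_timeout(algorithm) callable is a hypothetical stand-in, losses are simply normalized to [0, 1], and the index is plain UCB; the paper's adapted bandits treat runtime-oriented losses more carefully.

```python
import math, random

def censored_ucb(algorithms, run_with_timeout, timeout, rounds):
    """UCB over candidate algorithms whose runtime feedback is censored at
    `timeout`: each round observes min(runtime, timeout) / timeout as a
    loss in [0, 1]. State is only per-arm counts and running means, so
    memory stays independent of the time horizon."""
    counts = {a: 0 for a in algorithms}
    mean_loss = {a: 0.0 for a in algorithms}
    for t in range(1, rounds + 1):
        def index(a):  # optimistic index (lower is better); untried arms first
            if counts[a] == 0:
                return float("-inf")
            return mean_loss[a] - math.sqrt(2.0 * math.log(t) / counts[a])
        a = min(algorithms, key=index)
        loss = min(run_with_timeout(a), timeout) / timeout  # censored observation
        counts[a] += 1
        mean_loss[a] += (loss - mean_loss[a]) / counts[a]
    return min(algorithms, key=lambda a: mean_loss[a])

# Toy usage: simulated exponential runtimes per "algorithm".
random.seed(0)
true_mean = {"A": 2.0, "B": 5.0}
best = censored_ucb(["A", "B"],
                    lambda a: min(random.expovariate(1 / true_mean[a]), 10.0),
                    timeout=10.0, rounds=200)
print(best)
```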
- Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta-algorithm selection can indeed prove beneficial in some cases.
arXiv Detail & Related papers (2020-11-17T17:27:33Z)
- Online Active Model Selection for Pre-trained Classifiers [72.84853880948894]
We design an online selective sampling approach that actively selects informative examples to label and outputs the best model with high probability at any round.
Our algorithm can be used for online prediction tasks for both adversarial and stochastic streams.
arXiv Detail & Related papers (2020-10-19T19:53:15Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
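To illustrate the dyadic representation from the last entry: a model is fit on concatenated (instance, algorithm) feature vectors, so even algorithms unseen at training time can be scored for a new instance. The least-squares learner and all shapes here are placeholders of this sketch, not the paper's approaches, which include ranking-based learners.

```python
import numpy as np

def fit_dyadic(inst_feats, algo_feats, performance):
    """Fit a linear model on concatenated (instance, algorithm) feature
    pairs. Describing algorithms by features instead of IDs is what lets
    one model score thousands of candidates, including unseen ones."""
    Z = np.hstack([inst_feats, algo_feats])
    w, *_ = np.linalg.lstsq(Z, performance, rcond=None)
    return w

def rank_algorithms(w, x_inst, candidate_feats):
    """Score every candidate algorithm for one new instance; lower
    predicted runtime ranks first."""
    Z = np.hstack([np.tile(x_inst, (len(candidate_feats), 1)), candidate_feats])
    return np.argsort(Z @ w)

rng = np.random.default_rng(0)
X_i, X_a = rng.normal(size=(200, 5)), rng.normal(size=(200, 3))
y = X_i @ rng.normal(size=5) + X_a @ rng.normal(size=3) + 0.1 * rng.normal(size=200)
w = fit_dyadic(X_i, X_a, y)
print(rank_algorithms(w, rng.normal(size=5), rng.normal(size=(1000, 3)))[:5])
```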
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.