Parallelizing Contextual Linear Bandits
- URL: http://arxiv.org/abs/2105.10590v1
- Date: Fri, 21 May 2021 22:22:02 GMT
- Title: Parallelizing Contextual Linear Bandits
- Authors: Jeffrey Chan, Aldo Pacchiano, Nilesh Tripuraneni, Yun S. Song, Peter
Bartlett, Michael I. Jordan
- Abstract summary: We present a family of (parallel) contextual linear bandit algorithms, whose regret is nearly identical to their perfectly sequential counterparts.
We also present an empirical evaluation of these parallel algorithms in several domains, including materials discovery and biological sequence design problems.
- Score: 82.65675585004448
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard approaches to decision-making under uncertainty focus on sequential
exploration of the space of decisions. However, simultaneously
proposing a batch of decisions, which leverages available resources for
parallel experimentation, has the potential to rapidly accelerate exploration.
We present a family of (parallel) contextual linear bandit algorithms, whose
regret is nearly identical to their perfectly sequential counterparts -- given
access to the same total number of oracle queries -- up to a lower-order
"burn-in" term that is dependent on the context-set geometry. We provide
matching information-theoretic lower bounds on parallel regret performance to
establish our algorithms are asymptotically optimal in the time horizon.
Finally, we also present an empirical evaluation of these parallel algorithms
in several domains, including materials discovery and biological sequence
design problems, to demonstrate the utility of parallelized bandits in
practical settings.
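The batched setting described in the abstract can be illustrated with a minimal sketch of a batched LinUCB-style algorithm: the learner proposes a whole batch of decisions before observing any of their rewards, using an optimistic within-batch Gram-matrix update to diversify the batch. This is a toy illustration under assumed parameters (dimension, arm count, noise level), not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: d-dimensional contexts, K arms, linear rewards.
d, K, T, batch_size = 5, 10, 200, 4
theta_star = rng.normal(size=d)          # unknown reward parameter
lam, alpha = 1.0, 1.0                    # ridge and exploration parameters

A = lam * np.eye(d)                      # regularized Gram matrix
b = np.zeros(d)
total_reward = 0.0

for _ in range(T // batch_size):
    theta_hat = np.linalg.solve(A, b)    # ridge estimate from observed rewards
    A_inv = np.linalg.inv(A)
    chosen = []
    # Propose a whole batch before observing any of its rewards.
    for _ in range(batch_size):
        contexts = rng.normal(size=(K, d))   # fresh context set for this round
        ucb = contexts @ theta_hat + alpha * np.sqrt(
            np.einsum("kd,de,ke->k", contexts, A_inv, contexts)
        )
        x = contexts[np.argmax(ucb)]
        chosen.append(x)
        # Update the Gram matrix with the chosen context only (rewards are
        # still unseen) so later picks in the batch explore new directions.
        A = A + np.outer(x, x)
        A_inv = np.linalg.inv(A)
    # Rewards for the whole batch arrive together, as in parallel experiments.
    for x in chosen:
        r = x @ theta_star + 0.1 * rng.normal()
        b = b + r * x
        total_reward += r

print(total_reward)
```

The key design choice mirrored from the abstract is that the estimate of the reward parameter is frozen within a batch, so the only feedback a batch member gets from its predecessors is the updated exploration geometry.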
Related papers
- Effectively Leveraging Momentum Terms in Stochastic Line Search Frameworks for Fast Optimization of Finite-Sum Problems [0.5156484100374059]
We explore the relationship between recent line search approaches for deep optimization in the overparametrized regime and momentum directions.
We introduce an algorithm that exploits a mix of data persistency and conjugate-gradient-type rules for the definition of the momentum parameter.
The resulting algorithm is empirically shown to outperform other popular methods.
arXiv Detail & Related papers (2024-11-11T16:26:33Z) - Nearest Neighbour with Bandit Feedback [4.9094025705644695]
Our algorithm handles the fully adversarial setting in which no assumptions at all are made about the data-generation process.
We give generic regret bounds for our algorithm and further analyse them when applied to the bandit problem in Euclidean space.
arXiv Detail & Related papers (2023-06-23T20:09:01Z) - An Optimal Algorithm for the Real-Valued Combinatorial Pure Exploration
of Multi-Armed Bandit [65.268245109828]
We study the real-valued pure exploration problem in the multi-armed bandit (R-CPE-MAB)
Existing methods in the R-CPE-MAB can be seen as a special case of the so-called transductive linear bandits.
We propose an algorithm named the gap-based exploration (CombGapE) algorithm, whose sample complexity matches the lower bound.
arXiv Detail & Related papers (2023-06-15T15:37:31Z) - Experimental Design for Regret Minimization in Linear Bandits [19.8309784360219]
We propose a novel experimental design-based algorithm to minimize regret in online linear and combinatorial bandits.
We provide state-of-the-art finite time regret guarantees and show that our algorithm can be applied in both the bandit and semi-bandit feedback regime.
arXiv Detail & Related papers (2020-11-01T17:59:19Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Instance-Dependent Complexity of Contextual Bandits and Reinforcement
Learning: A Disagreement-Based Perspective [104.67295710363679]
In the classical multi-armed bandit problem, instance-dependent algorithms attain improved performance on "easy" problems with a gap between the best and second-best arm.
We introduce a family of complexity measures that are both sufficient and necessary to obtain instance-dependent regret bounds.
We then introduce new oracle-efficient algorithms which adapt to the gap whenever possible, while also attaining the minimax rate in the worst case.
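The gap-dependent behavior mentioned above can be seen in a minimal sketch of the classical UCB1 algorithm on a two-armed Bernoulli bandit: when the gap between the best and second-best arm is large, the suboptimal arm is pulled only on the order of log(T)/gap² times. The arm means below are assumed for illustration.

```python
import math
import random

random.seed(1)

# Two-armed Bernoulli bandit with a large gap; "easy" instances like this
# incur regret on the order of log(T)/gap under instance-dependent analysis.
means = [0.9, 0.1]                 # hypothetical arm means; gap = 0.8
T = 2000
counts = [0, 0]                    # pulls per arm
sums = [0.0, 0.0]                  # cumulative reward per arm

for t in range(1, T + 1):
    if t <= len(means):            # pull each arm once to initialize
        arm = t - 1
    else:
        # Optimistic index: empirical mean plus confidence radius.
        ucb = [sums[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
               for a in range(len(means))]
        arm = ucb.index(max(ucb))
    reward = 1.0 if random.random() < means[arm] else 0.0
    counts[arm] += 1
    sums[arm] += reward

print(counts)   # the suboptimal arm receives only a small share of the pulls
```

This is the textbook gap-dependent guarantee the entry refers to, not the complexity measures introduced in that paper.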
arXiv Detail & Related papers (2020-10-07T01:33:06Z) - Structure Adaptive Algorithms for Stochastic Bandits [22.871155520200773]
We study reward maximisation in a class of structured multi-armed bandit problems.
The mean rewards of arms satisfy some given structural constraints.
We develop algorithms from instance-dependent lower-bounds using iterative saddle-point solvers.
arXiv Detail & Related papers (2020-07-02T08:59:54Z) - Beyond Worst-Case Analysis in Stochastic Approximation: Moment
Estimation Improves Instance Complexity [58.70807593332932]
We study the oracle complexity of gradient-based methods for stochastic approximation problems.
We focus on instance-dependent complexity instead of worst case complexity.
Our proposed algorithm and its analysis provide a theoretical justification for the success of moment estimation.
arXiv Detail & Related papers (2020-06-08T09:25:47Z) - Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that partitions the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z) - Sequential Batch Learning in Finite-Action Linear Contextual Bandits [40.01661188919779]
We study the sequential batch learning problem in linear contextual bandits with finite action sets.
This problem provides a finer-grained formulation of many personalized sequential decision making problems in practical applications.
arXiv Detail & Related papers (2020-04-14T06:47:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.