Bounded Regret for Finitely Parameterized Multi-Armed Bandits
- URL: http://arxiv.org/abs/2003.01328v5
- Date: Sat, 7 Nov 2020 20:28:14 GMT
- Title: Bounded Regret for Finitely Parameterized Multi-Armed Bandits
- Authors: Kishan Panaganti and Dileep Kalathil
- Abstract summary: We propose an algorithm that is simple and easy to implement.
We show that the FP-UCB algorithm achieves a bounded regret under a structural condition on the parameter set, and a logarithmic regret otherwise.
We also validate the superior performance of the FP-UCB algorithm through extensive numerical simulations.
- Score: 3.8073142980733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of finitely parameterized multi-armed bandits where
the model of the underlying stochastic environment can be characterized based
on a common unknown parameter. The true parameter is unknown to the learning
agent. However, the set of possible parameters, which is finite, is known a
priori. We propose a simple and easy-to-implement algorithm, which we call
the Finitely Parameterized Upper Confidence Bound (FP-UCB) algorithm, which
uses the information about the underlying parameter set for faster learning. In
particular, we show that the FP-UCB algorithm achieves a bounded regret under
some structural condition on the underlying parameter set. We also show that,
if the underlying parameter set does not satisfy this structural
condition, the FP-UCB algorithm achieves a logarithmic regret, but with a
smaller leading constant than the standard UCB algorithm. We also
validate the superior performance of the FP-UCB algorithm through extensive
numerical simulations.
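The idea of using a known finite parameter set to restrict exploration can be illustrated with a minimal sketch. Note the assumptions: Bernoulli rewards, each candidate parameter maps every arm to its mean reward, a standard UCB-style confidence radius, and a least-sampled tie-breaking rule. This paraphrases the general approach, not the paper's exact pseudocode.

```python
import math
import random

def fp_ucb_sketch(param_set, true_param, arms, horizon):
    """Sketch of a finitely-parameterized UCB idea: keep only the
    candidate parameters consistent with the empirical arm means,
    and restrict play to arms optimal for some surviving candidate.

    param_set: list of dicts mapping arm -> mean reward (known a priori).
    true_param: index into param_set (used only to simulate rewards).
    """
    counts = {a: 0 for a in arms}
    sums = {a: 0.0 for a in arms}

    def pull(a):  # simulated Bernoulli reward with the true mean
        return 1.0 if random.random() < param_set[true_param][a] else 0.0

    # initialize: pull each arm once
    t = 0
    for a in arms:
        sums[a] += pull(a); counts[a] += 1; t += 1

    while t < horizon:
        # UCB-style confidence radius and empirical mean per arm
        rad = {a: math.sqrt(2.0 * math.log(t + 1) / counts[a]) for a in arms}
        emp = {a: sums[a] / counts[a] for a in arms}
        # keep parameters whose means all lie inside the confidence intervals
        alive = [p for p in param_set
                 if all(abs(p[a] - emp[a]) <= rad[a] for a in arms)]
        if not alive:  # fall back to the full set if none survive
            alive = param_set
        # candidate arms: those optimal under some surviving parameter
        candidates = {max(arms, key=lambda a: p[a]) for p in alive}
        # play the least-sampled candidate arm (simple tie-breaking rule)
        a = min(candidates, key=lambda a: counts[a])
        sums[a] += pull(a); counts[a] += 1; t += 1

    return counts
```

Once every parameter that disagrees with the empirical means is eliminated, the candidate arm set collapses to the arms optimal under the surviving parameters; this is the mechanism that lets regret stop growing when the structural condition holds.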
Related papers
- UCB algorithms for multi-armed bandits: Precise regret and adaptive inference [6.349503549199403]
Upper Confidence Bound (UCB) algorithms are a widely-used class of sequential algorithms for the $K$-armed bandit problem.
We show that the Lai-Robbins regret formula is exact if and only if the sub-optimality gaps exceed the order $\sigma\sqrt{K\log T/T}$.
We also show that its maximal regret deviates from the minimax regret by a logarithmic factor, thereby settling its strict minimax optimality in the negative.
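For context, these results concern the classic UCB1 index policy. A minimal simulation sketch follows; the Bernoulli rewards, the `means` input, and the exploration bonus $\sqrt{2\ln t/n}$ are the standard textbook choices, used here purely for illustration.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Classic UCB1: pull the arm maximizing
    empirical mean + sqrt(2 ln t / n).
    Bernoulli rewards are simulated from `means`."""
    rng = random.Random(seed)
    K = len(means)
    counts = [0] * K
    sums = [0.0] * K
    for a in range(K):  # pull each arm once to initialize
        sums[a] += float(rng.random() < means[a]); counts[a] += 1
    for t in range(K, horizon):
        a = max(range(K),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2.0 * math.log(t) / counts[i]))
        sums[a] += float(rng.random() < means[a]); counts[a] += 1
    return counts
```

Suboptimal arms are pulled on the order of $\log T$ times each, which is where the logarithmic regret of UCB-type policies comes from.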
arXiv Detail & Related papers (2024-12-09T01:14:02Z)
- Sub-linear Regret in Adaptive Model Predictive Control [56.705978425244496]
We present STT-MPC (Self-Tuning Tube-based Model Predictive Control), an online algorithm that combines the certainty-equivalence principle and polytopic tubes.
We analyze the regret of the algorithm, when compared to an algorithm initially aware of the system dynamics.
arXiv Detail & Related papers (2023-10-07T15:07:10Z)
- Surrogate-based Autotuning for Randomized Sketching Algorithms in Regression Problems [34.59249597626943]
We show how a surrogate-based autotuning approach can be used to address fundamental problems of parameter selection in RandNLA algorithms.
Our results show that our surrogate-based autotuning approach can achieve near-optimal performance with much less tuning cost than a random search.
arXiv Detail & Related papers (2023-08-30T02:50:54Z)
- Provably Efficient UCB-type Algorithms For Learning Predictive State Representations [55.00359893021461]
The sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs).
This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models.
In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational tractability, a guaranteed near-optimal last-iterate policy, and guaranteed model accuracy.
arXiv Detail & Related papers (2023-07-01T18:35:21Z)
- Unsupervised Learning of Initialization in Deep Neural Networks via Maximum Mean Discrepancy [74.34895342081407]
We propose an unsupervised algorithm to find good initialization for input data.
We first notice that each parameter configuration in the parameter space corresponds to one particular downstream task of d-way classification.
We then conjecture that the success of learning is directly related to how diverse downstream tasks are in the vicinity of the initial parameters.
arXiv Detail & Related papers (2023-02-08T23:23:28Z)
- Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
arXiv Detail & Related papers (2021-11-09T09:00:02Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- Fast Perturbative Algorithm Configurators [0.0]
We prove a linear lower bound on the expected time to optimise any parameter tuning problem for ParamRLS and ParamILS.
We propose a harmonic mutation operator for perturbative configurators that achieves polylogarithmic optimisation time on unimodal and approximately unimodal parameter landscapes.
An experimental analysis confirms the superiority of the approach in practice for a number of configuration scenarios.
arXiv Detail & Related papers (2020-07-07T10:48:32Z)
- Refined bounds for algorithm configuration: The knife-edge of dual class approximability [94.83809668933021]
We investigate how large a training set should be to ensure that a parameter's average performance over the training set is close to its expected future performance.
We show that if this approximation holds under the L-infinity norm, we can provide strong sample complexity bounds.
We empirically evaluate our bounds in the context of integer programming, one of the most powerful tools in computer science.
arXiv Detail & Related papers (2020-06-21T15:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.