Stochastic Multi-Armed Bandits with Control Variates
- URL: http://arxiv.org/abs/2105.03962v1
- Date: Sun, 9 May 2021 15:40:09 GMT
- Title: Stochastic Multi-Armed Bandits with Control Variates
- Authors: Arun Verma, Manjesh K. Hanawal
- Abstract summary: We study a new variant of the multi-armed bandits problem, where the learner has access to auxiliary information about the arms.
The auxiliary information is correlated with the arm rewards.
We develop an algorithm named UCB-CV that uses these improved mean estimates.
- Score: 6.548580592686076
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies a new variant of the stochastic multi-armed bandits
problem, where the learner has access to auxiliary information about the arms.
The auxiliary information is correlated with the arm rewards, which we treat as
control variates. In many applications, the arm rewards are a function of some
exogenous values, whose mean value is known a priori from historical data and
hence can be used as control variates. We use the control variates to obtain
mean estimates with smaller variance and tighter confidence bounds. We then
develop an algorithm named UCB-CV that uses improved estimates. We characterize
the regret bounds in terms of the correlation between the rewards and control
variates. The experiments on synthetic data validate the performance guarantees
of our proposed algorithm.
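The variance-reduction mechanism described in the abstract can be sketched with a small simulation. The reward model, coefficients, and variable names below are illustrative assumptions, not the paper's construction: given an auxiliary observation W whose mean mu_w is known a priori, the adjusted samples X - beta*(W - mu_w) keep the same mean as X while the variance shrinks by roughly a factor (1 - rho^2), where rho is the correlation between X and W.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model (not from the paper): reward X is correlated with an
# auxiliary variable W whose mean mu_w is known from historical data.
n = 10_000
w = rng.normal(loc=2.0, scale=1.0, size=n)    # auxiliary observations, E[W] = 2.0
x = 0.8 * w + rng.normal(scale=0.5, size=n)   # arm rewards correlated with W
mu_w = 2.0                                    # known control-variate mean

# Optimal coefficient beta = Cov(X, W) / Var(W), estimated from the samples.
beta = np.cov(x, w)[0, 1] / np.var(w)

# Control-variate samples: same mean as X, smaller variance.
x_cv = x - beta * (w - mu_w)

rho = np.corrcoef(x, w)[0, 1]
print(x.mean(), x_cv.mean())             # both estimate E[X] = 1.6
print(x_cv.var() / x.var(), 1 - rho**2)  # variance ratio ~ (1 - rho^2)
```

The tighter spread of the adjusted samples is what yields the smaller confidence bounds the paper exploits inside its UCB index.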
Related papers
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of the arms.
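A gap-based rejection rule of this flavor can be sketched in a few lines. The confidence radius and rejection threshold below are generic illustrative choices, not the sred rule from the paper:

```python
import math

def surviving_arms(emp_means, n_pulls, delta=0.1):
    # Keep arms whose empirical gap to the current leader is within a
    # confidence radius; an illustrative rule, not the paper's exact test.
    best = max(emp_means)
    radius = math.sqrt(2 * math.log(1 / delta) / n_pulls)
    return [i for i, m in enumerate(emp_means) if best - m <= 2 * radius]

# After 100 pulls per arm, arm 2's gap (0.6) exceeds the threshold,
# so it is rejected; arms 0 and 1 remain active.
print(surviving_arms([0.9, 0.85, 0.3], n_pulls=100))  # -> [0, 1]
```

As more pulls accumulate, the radius shrinks and the rule can reject arms with ever smaller gaps, which is what "reject arms in any round" refers to.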
arXiv Detail & Related papers (2023-12-19T13:17:43Z) - Variance-Aware Regret Bounds for Stochastic Contextual Dueling Bandits [56.457634640638254]
This paper studies the problem of contextual dueling bandits, where the binary comparison of dueling arms is generated from a generalized linear model (GLM)
We propose a new SupLinUCB-type algorithm that enjoys computational efficiency and a variance-aware regret bound $\tilde O\big(d\sqrt{\sum_{t=1}^T \sigma_t^2} + d\big)$.
Our regret bound aligns with the intuitive expectation: in scenarios where the comparison is deterministic, the algorithm suffers only $\tilde O(d)$ regret.
arXiv Detail & Related papers (2023-10-02T08:15:52Z) - Fixed-Budget Best-Arm Identification with Heterogeneous Reward Variances [12.00630538470713]
We study the problem of best-arm identification (BAI) in the fixed-budget setting with heterogeneous reward variances.
We propose two variance-adaptive BAI algorithms for this setting: SHVar for known reward variances and SHAdaVar for unknown reward variances.
arXiv Detail & Related papers (2023-06-13T05:41:38Z) - Learning the Pareto Front Using Bootstrapped Observation Samples [17.519167857253404]
We propose an algorithm to identify a set of arms with undominated mean reward vectors.
The sample complexity of our proposed algorithm is optimal up to a logarithmic factor.
A key contribution is a new estimator that, in every round, updates the estimate for the unknown parameter along multiple context directions.
arXiv Detail & Related papers (2023-05-31T18:15:09Z) - Robust Contextual Linear Bandits [19.85979744859435]
This paper studies a common form of misspecification, an inter-arm heterogeneity that is not captured by context.
We develop two efficient bandit algorithms for our setting: a UCB algorithm called RoLinUCB and a posterior-sampling algorithm called RoLinTS.
arXiv Detail & Related papers (2022-10-26T05:18:09Z) - Control Variates for Slate Off-Policy Evaluation [112.35528337130118]
We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions.
We obtain new estimators with risk improvement guarantees over both the PI and self-normalized PI estimators.
arXiv Detail & Related papers (2021-06-15T06:59:53Z) - ARMS: Antithetic-REINFORCE-Multi-Sample Gradient for Binary Variables [60.799183326613395]
ARMS is an Antithetic-REINFORCE-Multi-Sample gradient estimator for binary variables.
ARMS uses a copula to generate any number of mutually antithetic samples.
We evaluate ARMS on several datasets for training generative models, and our experimental results show that it outperforms competing methods.
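The idea of drawing mutually antithetic samples from a shared source of randomness can be illustrated with a simple rotation trick on one uniform draw. This is an assumed simplification for intuition only, not the copula construction ARMS actually uses:

```python
import numpy as np

def antithetic_bernoulli(p, k, rng):
    """Draw k negatively correlated Bernoulli(p) samples.

    A single shared uniform is rotated around the unit interval, so each
    sample is still marginally Bernoulli(p); an illustrative simplification,
    not the exact ARMS copula scheme.
    """
    u = rng.uniform()
    us = (u + np.arange(k) / k) % 1.0   # k dependent Uniform(0, 1) draws
    return (us < p).astype(int)

rng = np.random.default_rng(1)
samples = antithetic_bernoulli(0.5, 4, rng)
print(samples.sum())  # for p = 0.5, k = 4: always exactly 2 ones
```

Because the samples are negatively correlated while keeping the correct marginals, averaging gradient estimates over them cancels noise rather than adding it, which is the source of the variance reduction.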
arXiv Detail & Related papers (2021-05-28T23:19:54Z) - Continuous Mean-Covariance Bandits [39.820490484375156]
We propose a novel Continuous Mean-Covariance Bandit (CMCB) model to take option correlation into account.
In CMCB, there is a learner who sequentially chooses weight vectors on given options and observes random feedback according to the decisions.
We propose novel algorithms with optimal regrets (within logarithmic factors) and provide matching lower bounds to validate their optimalities.
arXiv Detail & Related papers (2021-02-24T06:37:05Z) - Optimal Off-Policy Evaluation from Multiple Logging Policies [77.62012545592233]
We study off-policy evaluation from multiple logging policies, each generating a dataset of fixed size, i.e., stratified sampling.
We find the OPE estimator for multiple loggers with minimum variance for any instance, i.e., the efficient one.
arXiv Detail & Related papers (2020-10-21T13:43:48Z) - A Framework for Sample Efficient Interval Estimation with Control Variates [94.32811054797148]
We consider the problem of estimating confidence intervals for the mean of a random variable.
Under certain conditions, we show improved efficiency compared to existing estimation algorithms.
arXiv Detail & Related papers (2020-06-18T05:42:30Z) - On Under-exploration in Bandits with Mean Bounds from Confounded Data [41.09427248084205]
We study a variant of the multi-armed bandit problem where side information in the form of bounds on the mean of each arm is provided.
We develop the novel non-optimistic Global Under-Explore (GLUE) algorithm which uses the provided mean bounds.
We show that mean bounds can be inferred naturally from such logs and can thus be used to improve the learning process.
arXiv Detail & Related papers (2020-02-19T19:36:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.