Efficient Inference Without Trading-off Regret in Bandits: An Allocation
Probability Test for Thompson Sampling
- URL: http://arxiv.org/abs/2111.00137v1
- Date: Sat, 30 Oct 2021 01:47:14 GMT
- Title: Efficient Inference Without Trading-off Regret in Bandits: An Allocation
Probability Test for Thompson Sampling
- Authors: Nina Deliu, Joseph J. Williams, Sofia S. Villar
- Abstract summary: Using bandit algorithms to conduct adaptive randomised experiments can minimise regret, but it poses major challenges for statistical inference.
Recent attempts to address these challenges typically impose restrictions on the exploitative nature of the bandit (trading off regret) and require large sample sizes to ensure guarantees.
We introduce a novel hypothesis test, uniquely based on the allocation probabilities of the bandit algorithm, and without constraining its exploitative nature or requiring a minimum experimental size.
We demonstrate the regret and inferential advantages of our approach, particularly in small samples, in both extensive simulations and in a real-world experiment on mental health aspects.
- Score: 1.6114012813668934
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Using bandit algorithms to conduct adaptive randomised experiments can
minimise regret, but it poses major challenges for statistical inference (e.g.,
biased estimators, inflated type-I error and reduced power). Recent attempts to
address these challenges typically impose restrictions on the exploitative
nature of the bandit algorithm (trading off regret) and require large sample
sizes to ensure asymptotic guarantees. However, large experiments generally
follow a successful pilot study, which is tightly constrained in its size or
duration. Increasing power in such small pilot experiments, without limiting
the adaptive nature of the algorithm, can allow promising interventions to
reach a larger experimental phase. In this work we introduce a novel hypothesis
test, uniquely based on the allocation probabilities of the bandit algorithm,
and without constraining its exploitative nature or requiring a minimum
experimental size. We characterise our Allocation Probability Test when
applied to Thompson Sampling, presenting its asymptotic theoretical
properties, and illustrating its finite-sample performance compared to
state-of-the-art approaches. We demonstrate the regret and inferential
advantages of our approach, particularly in small samples, in both extensive
simulations and in a real-world experiment on mental health aspects.
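To make the idea concrete, here is a minimal sketch of an allocation-probability-style test for a two-armed Bernoulli bandit run with Thompson Sampling. The Beta(1, 1) priors, the Monte Carlo estimate of the allocation probability, the use of the final allocation probability as the test statistic, and the simulation-calibrated rejection threshold are all illustrative assumptions; the paper derives its actual test statistic and asymptotic properties.

```python
import numpy as np

rng = np.random.default_rng(0)

def ts_experiment(p0, p1, T, n_draws=2000):
    """Two-armed Bernoulli Thompson Sampling with Beta(1, 1) priors.
    Returns the allocation probability of arm 1 at every step, i.e.
    the posterior probability that arm 1 beats arm 0 -- which is
    exactly the probability that TS assigns arm 1."""
    successes, failures = np.ones(2), np.ones(2)
    alloc = np.empty(T)
    for t in range(T):
        theta = rng.beta(successes, failures, size=(n_draws, 2))
        alloc[t] = np.mean(theta[:, 1] > theta[:, 0])
        arm = int(rng.random() < alloc[t])            # TS allocation
        reward = rng.random() < (p1 if arm else p0)   # Bernoulli reward
        successes[arm] += reward
        failures[arm] += 1 - reward
    return alloc

# Calibrate a rejection threshold by simulating the design under the
# null hypothesis of no arm difference (p0 == p1).
null_stats = [ts_experiment(0.5, 0.5, T=50)[-1] for _ in range(200)]
threshold = np.quantile(null_stats, 0.95)   # approx. 5% type-I error

# Test statistic: the final allocation probability of the experiment.
stat = ts_experiment(0.4, 0.6, T=50)[-1]
print(f"threshold={threshold:.3f}  statistic={stat:.3f}  "
      f"reject H0: {stat > threshold}")
```

Because the statistic is read off quantities Thompson Sampling already computes, a test of this form leaves the allocation rule, and hence the regret profile, unchanged.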
Related papers
- Simulation-Based Inference for Adaptive Experiments [38.841210420855276]
Multi-arm bandit experimental designs are increasingly being adopted over standard randomized trials.
We propose a simulation-based approach for conducting hypothesis tests and constructing confidence intervals for arm-specific means (a rough sketch follows this entry).
Our results show that our approach achieves the desired coverage while reducing confidence interval widths by up to 50%, with drastic improvements for arms not targeted by the design.
arXiv Detail & Related papers (2025-06-03T13:46:59Z)
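One way to picture a simulation-based approach of this kind is a parametric bootstrap: re-simulate the adaptive design at the fitted arm means to approximate the sampling distribution of each arm's estimate, then invert it into confidence intervals. The stand-in epsilon-greedy design, Gaussian rewards, and pivot-style intervals below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_design(means, T, eps=0.1):
    """Stand-in adaptive design: epsilon-greedy over Gaussian arms.
    Returns each arm's sample-mean estimate after T pulls."""
    K = len(means)
    counts, sums = np.zeros(K), np.zeros(K)
    for t in range(T):
        if t < K:                                  # pull each arm once
            arm = t
        elif rng.random() < eps:                   # explore
            arm = int(rng.integers(K))
        else:                                      # exploit
            arm = int(np.argmax(sums / counts))
        sums[arm] += rng.normal(means[arm], 1.0)
        counts[arm] += 1
    return sums / counts

# Point estimates from the "observed" adaptive experiment.
estimates = run_design([0.0, 0.3, 0.5], T=300)

# Parametric bootstrap: replay the same adaptive design at the fitted
# means to approximate the estimator's sampling distribution.
boot = np.array([run_design(estimates, T=300) for _ in range(1000)])
shift = boot - estimates                           # estimate minus "truth"
q_lo, q_hi = np.quantile(shift, [0.025, 0.975], axis=0)
# Pivot-style interval: estimate minus the bootstrap shift quantiles.
for k in range(len(estimates)):
    print(f"arm {k}: mean={estimates[k]:.3f}, "
          f"95% CI=({estimates[k] - q_hi[k]:.3f}, {estimates[k] - q_lo[k]:.3f})")
```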
- Asymptotically Optimal Linear Best Feasible Arm Identification with Fixed Budget [55.938644481736446]
We introduce a novel algorithm for best feasible arm identification that guarantees an exponential decay in the error probability.
We validate our algorithm through comprehensive empirical evaluations across various problem instances with different levels of complexity.
arXiv Detail & Related papers (2025-06-03T02:56:26Z)
- Causal Lifting of Neural Representations: Zero-Shot Generalization for Causal Inferences [56.23412698865433]
We focus on causal inferences on a target experiment with unlabeled factual outcomes, retrieved by a predictive model fine-tuned on a labeled similar experiment.
First, we show that factual outcome estimation via Empirical Risk Minimization (ERM) may fail to yield valid causal inferences on the target population.
We propose Deconfounded Empirical Risk Minimization (DERM), a new simple learning procedure minimizing the risk over a fictitious target population.
arXiv Detail & Related papers (2025-02-10T10:52:17Z)
- Regret Minimization and Statistical Inference in Online Decision Making with High-dimensional Covariates [7.21848268647674]
We integrate the $\varepsilon$-greedy bandit algorithm for decision-making with a hard thresholding algorithm for estimating sparse bandit parameters (a minimal sketch follows this entry).
Under a margin condition, our method achieves either $O(T^{1/2})$ regret or classical $O(T^{1/2})$-consistent inference.
arXiv Detail & Related papers (2024-11-10T01:47:11Z)
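A minimal sketch of the combination described above, under illustrative assumptions: a sparse linear reward model, a decaying $\varepsilon$-greedy allocation, and a periodic least-squares fit followed by a hard-thresholding step that keeps the s largest-magnitude coefficients. The decay schedule, refit interval, and known sparsity level s are all assumptions of this sketch; the paper's estimator and tuning are more refined.

```python
import numpy as np

rng = np.random.default_rng(2)

d, s, K, T = 50, 3, 5, 2000
beta = np.zeros(d)
beta[:s] = [1.0, -0.8, 0.6]                        # sparse true parameter

def hard_threshold(b, k):
    """Keep the k largest-magnitude coordinates; zero out the rest."""
    out = np.zeros_like(b)
    keep = np.argsort(np.abs(b))[-k:]
    out[keep] = b[keep]
    return out

X, y = [], []
beta_hat = np.zeros(d)
for t in range(T):
    contexts = rng.normal(size=(K, d))             # one context per arm
    eps = min(1.0, 5.0 / (t + 1))                  # decaying exploration
    if rng.random() < eps:
        arm = int(rng.integers(K))                 # explore
    else:
        arm = int(np.argmax(contexts @ beta_hat))  # exploit greedily
    X.append(contexts[arm])
    y.append(contexts[arm] @ beta + rng.normal(0.0, 0.5))
    if (t + 1) % 100 == 0:                         # periodic re-fit
        ls = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)[0]
        beta_hat = hard_threshold(ls, s)           # hard-thresholding step

print("recovered support:", np.nonzero(beta_hat)[0])  # expect [0 1 2]
```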
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z)
- Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms in any round based on the observed empirical gaps between the rewards of various arms.
arXiv Detail & Related papers (2023-12-19T13:17:43Z)
- Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation [54.72195809248172]
We present a new estimator leveraging our proposed novel concept, which involves retrospective reshuffling of participants across experimental arms at the end of an RCT.
We prove theoretically that such an estimator is more accurate than common estimators based on sample means.
arXiv Detail & Related papers (2023-02-06T05:17:22Z)
- Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits [88.21288104408556]
We study the regret of Thompson sampling (TS) algorithms for exponential family bandits, where the reward distribution is from a one-dimensional exponential family.
We propose a Thompson sampling algorithm, termed Expulli, which uses a novel sampling distribution to avoid under-estimating the optimal arm (a generic exponential-family TS sketch follows this entry).
arXiv Detail & Related papers (2022-06-07T18:08:21Z)
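For context on the baseline the entry above refines, here is a sketch of standard Thompson Sampling for a one-dimensional exponential family: Poisson rewards with their conjugate Gamma prior. The Gamma(1, 1) priors and arm rates are illustrative assumptions; per the snippet, the paper's Expulli sampler replaces these vanilla posterior draws with a novel sampling distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Thompson Sampling for Poisson-reward arms (a one-dimensional
# exponential family) with the conjugate Gamma(shape=a, rate=b) prior:
# posterior update is a += sum of rewards, b += number of pulls.
true_rates = np.array([1.0, 1.5, 2.0])
K, T = len(true_rates), 5000
a, b = np.ones(K), np.ones(K)                 # Gamma(1, 1) priors
pulls = np.zeros(K, dtype=int)

for t in range(T):
    theta = rng.gamma(a, 1.0 / b)             # one posterior draw per arm
    arm = int(np.argmax(theta))               # play the sampled best arm
    reward = rng.poisson(true_rates[arm])
    a[arm] += reward                          # conjugate Gamma update
    b[arm] += 1.0
    pulls[arm] += 1

print("pulls per arm:", pulls)                # should concentrate on arm 2
regret = T * true_rates.max() - (pulls * true_rates).sum()
print(f"expected-regret proxy: {regret:.1f}")
```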
- A Unifying Theory of Thompson Sampling for Continuous Risk-Averse Bandits [91.3755431537592]
This paper unifies the analysis of risk-averse Thompson sampling algorithms for the multi-armed bandit problem.
Using the contraction principle in the theory of large deviations, we prove novel concentration bounds for continuous risk functionals.
We show that a wide class of risk functionals as well as "nice" functions of them satisfy the continuity condition.
arXiv Detail & Related papers (2021-08-25T17:09:01Z)
- Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks [14.178899938667161]
We introduce Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandits.
Because of its simplicity, SAU can be seamlessly applied to deep contextual bandits as a very scalable drop-in replacement for epsilon-greedy exploration.
arXiv Detail & Related papers (2021-05-10T21:45:01Z)
- Weak Signal Asymptotics for Sequentially Randomized Experiments [2.28438857884398]
We study a class of sequentially randomized experiments, including those that arise in solving multi-armed bandit problems.
We show that the sample paths of a class of sequentially randomized experiments converge weakly to a diffusion limit.
We show that all sequential experiments whose randomization probabilities have a Lipschitz-continuous dependence on the observed data suffer from sub-optimal regret when reward gaps are relatively large.
arXiv Detail & Related papers (2021-01-25T02:20:20Z)
- Nearly Dimension-Independent Sparse Linear Bandit over Small Action Spaces via Best Subset Selection [71.9765117768556]
We consider the contextual bandit problem under the high dimensional linear model.
This setting finds essential applications such as personalized recommendation, online advertisement, and personalized medicine.
We propose doubly growing epochs and estimating the parameter using the best subset selection method.
arXiv Detail & Related papers (2020-09-04T04:10:39Z)
- A General Theory of the Stochastic Linear Bandit and Its Applications [8.071506311915398]
We introduce a general analysis framework and a family of algorithms for the linear bandit problem.
Our new notion of optimism in expectation gives rise to a new algorithm, called sieved greedy (SG) that reduces the overexploration problem in OFUL.
In addition to proving that SG is theoretically rate optimal, our empirical simulations show that SG outperforms existing benchmarks such as greedy, OFUL, and TS.
arXiv Detail & Related papers (2020-02-12T18:54:41Z)
- The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime [52.38455827779212]
We propose a novel technique for analyzing adaptive sampling called the Simulator.
We prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log-factors.
Our new analysis inspires a simple and near-optimal algorithm for best-arm and top-k identification, the first practical algorithm of its kind for the latter problem.
arXiv Detail & Related papers (2017-02-16T23:42:02Z)