Adaptive Experimentation at Scale: A Computational Framework for
Flexible Batches
- URL: http://arxiv.org/abs/2303.11582v4
- Date: Mon, 14 Aug 2023 23:33:28 GMT
- Title: Adaptive Experimentation at Scale: A Computational Framework for
Flexible Batches
- Authors: Ethan Che, Hongseok Namkoong
- Abstract summary: Motivated by practical instances involving a handful of reallocation epochs in which outcomes are measured in batches, we develop a computation-driven adaptive experimentation framework.
Our main observation is that normal approximations, which are universal in statistical inference, can also guide the design of adaptive algorithms.
- Score: 7.390918770007728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard bandit algorithms that assume continual reallocation of measurement
effort are challenging to implement due to delayed feedback and
infrastructural/organizational difficulties. Motivated by practical instances
involving a handful of reallocation epochs in which outcomes are measured in
batches, we develop a computation-driven adaptive experimentation framework
that can flexibly handle batching. Our main observation is that normal
approximations, which are universal in statistical inference, can also guide
the design of adaptive algorithms. By deriving a Gaussian sequential
experiment, we formulate a dynamic program that can leverage prior information
on average rewards. Instead of the typical theory-driven paradigm, we leverage
computational tools and empirical benchmarking for algorithm development. In
particular, our empirical analysis highlights a simple yet effective algorithm,
Residual Horizon Optimization, which iteratively solves a planning problem
using stochastic gradient descent. Our approach significantly improves
statistical power over standard methods, even when compared to Bayesian bandit
algorithms (e.g., Thompson sampling) that require full distributional knowledge
of individual rewards. Overall, we expand the scope of adaptive experimentation
to settings that are difficult for standard methods, involving limited
adaptivity, low signal-to-noise ratio, and unknown reward distributions.
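To make the approach concrete, here is a minimal sketch of the kind of batched, normal-approximation allocation loop the abstract describes. It is not the authors' Residual Horizon Optimization implementation: the planning objective (posterior variance weighted by the probability each arm is best), the unit-variance reward assumption, and all names are illustrative stand-ins.

```python
# Hypothetical sketch: Gaussian beliefs over arm means, next-batch proportions
# chosen by SGD on an invented planning objective. Not the authors' code.
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def plan_batch(mu, sigma2, n_remaining, steps=300, lr=0.5, n_mc=512, seed=0):
    """Pick next-batch sampling proportions by SGD over a softmax
    parameterization, under a Gaussian model of the arm means."""
    rng = np.random.default_rng(seed)
    k = len(mu)
    # Monte Carlo estimate of P(arm is best) under the current Gaussian beliefs.
    draws = mu + np.sqrt(sigma2) * rng.standard_normal((n_mc, k))
    w = np.bincount(draws.argmax(axis=1), minlength=k) / n_mc
    logits = np.zeros(k)
    for _ in range(steps):
        p = softmax(logits)
        # Posterior precision after n_remaining * p further unit-variance samples.
        prec = 1.0 / sigma2 + n_remaining * p
        # Loss: sum_k w_k / prec_k, i.e. posterior variance where it matters most.
        dloss_dp = -w * n_remaining / prec**2
        grad_logits = p * (dloss_dp - np.dot(p, dloss_dp))  # softmax chain rule
        logits -= lr * grad_logits
    return softmax(logits)

mu, sigma2 = np.zeros(5), np.ones(5)   # Gaussian beliefs over 5 arm means
p = plan_batch(mu, sigma2, n_remaining=1000)
print("next-batch proportions:", np.round(p, 3))
```

Each reallocation epoch would rerun the planner with updated Gaussian beliefs and a shrunken remaining budget, then allocate the next batch according to the returned proportions.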
Related papers
- Mathematical Programming For Adaptive Experiments [7.948144726705323]
We present a mathematical programming view of adaptive experimentation that can flexibly incorporate a wide range of objectives, constraints, and statistical procedures.
We evaluate our framework on benchmarks modeled after practical challenges such as non-stationarity, personalization, multi-objectives, and constraints.
arXiv Detail & Related papers (2024-08-08T16:29:09Z)
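As a toy illustration of the entry above, an allocation can be posed as a small mathematical program; the reward objective and the per-arm floor constraint here are invented for the example, not taken from the paper.

```python
# Hypothetical sketch: experiment allocation as a linear program.
import numpy as np
from scipy.optimize import linprog

mu_hat = np.array([0.10, 0.12, 0.08, 0.15])  # estimated mean rewards per arm
floor = 0.05                                  # minimum share per arm, so every
                                              # arm remains estimable
k = len(mu_hat)
# linprog minimizes, so negate the reward objective; variables are proportions.
res = linprog(c=-mu_hat,
              A_eq=np.ones((1, k)), b_eq=[1.0],   # proportions sum to one
              bounds=[(floor, 1.0)] * k)
print("allocation:", np.round(res.x, 3))
```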
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A performance-promising fair algorithm with better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
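A hedged sketch of the adaptive-reweighing idea from the entry above: groups with higher training error get exponentially larger sample weights between refits. The update rule and model are illustrative; the paper's priority-reweighing scheme differs in detail.

```python
# Hypothetical sketch: adaptive per-group reweighing between refits.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh_train(X, y, group, rounds=5, eta=1.0):
    """Retrain while exponentially upweighting groups with higher error."""
    w = np.ones(len(y))
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X, y, sample_weight=w)
        err = clf.predict(X) != y
        for g in np.unique(group):
            m = group == g
            w[m] *= np.exp(eta * err[m].mean())  # prioritize high-error groups
        w *= len(y) / w.sum()                    # keep total weight fixed
    return clf, w

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 3))
g = rng.integers(0, 2, 400)
y = ((X[:, 0] + 0.5 * g + 0.3 * rng.standard_normal(400)) > 0).astype(int)
clf, w = reweigh_train(X, y, g)
```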
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
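The momentum-based variance reduction that FAFED-style methods build on can be sketched in a few lines: a STORM-style recursion on a toy quadratic for a single client, with the federated averaging and adaptive step sizes of the actual algorithm omitted.

```python
# Hypothetical single-client sketch of a STORM-style variance-reduced update.
import numpy as np

def sg(x, z):
    # Stochastic gradient of f(x) = 0.5 * ||x||^2, sharing the noise sample z.
    return x + 0.1 * z

rng = np.random.default_rng(0)
x = np.ones(3)
d = sg(x, rng.standard_normal(3))       # initial gradient estimate
lr, beta = 0.1, 0.9
for _ in range(200):
    x_new = x - lr * d
    z = rng.standard_normal(3)          # same sample at both points is the key
    # Momentum plus a correction term that cancels the drift between
    # consecutive iterates, shrinking the estimator's variance.
    d = sg(x_new, z) + (1 - beta) * (d - sg(x, z))
    x = x_new
print("final ||x||:", np.linalg.norm(x))
```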
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows that these algorithms match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
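A minimal sketch of the warm-start idea from the entry above, on a toy quadratic bilevel problem: the inner solution is refined from its previous value at each outer step instead of being recomputed from scratch. The problem and step counts are invented for illustration.

```python
# Hypothetical sketch: warm-started inner loop amortizes the hypergradient.
# Toy problem: inner y*(lam) = argmin_y 0.5*(y - lam)^2,
# outer loss 0.5*(y*(lam) - 1)^2, minimized at lam = 1.
lam, y = 0.0, 0.0
inner_lr, outer_lr = 0.5, 0.2
for _ in range(50):
    # Warm start: refine y from its previous value with a few inner steps
    # rather than solving the inner problem to convergence each time.
    for _ in range(3):
        y -= inner_lr * (y - lam)          # inner gradient step
    # Here dy*/dlam = 1, so the implicit (hyper)gradient is just (y - 1).
    lam -= outer_lr * (y - 1.0)
print(f"lam={lam:.3f}, y={y:.3f}  (both should approach 1)")
```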
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
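A hedged sketch of the entry above: fit a Gaussian process to local evaluations and step along the analytic gradient of the GP posterior mean, rather than using raw random-perturbation estimates. The objective, kernel choice, and step sizes are illustrative assumptions.

```python
# Hypothetical sketch: local search via the gradient of a GP surrogate's mean.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
x = np.array([1.0, -1.0])              # maximize f(x) = -||x||^2 (optimum at 0)
ell = 0.5
for _ in range(20):
    X = x + 0.2 * rng.standard_normal((10, 2))           # local evaluations
    y = -np.sum(X**2, axis=1) + 0.01 * rng.standard_normal(10)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=ell),
                                  alpha=1e-4, optimizer=None).fit(X, y)
    # Analytic gradient of the GP posterior mean at x for the RBF kernel:
    # d mu / dx = sum_i alpha_i * k(x, X_i) * (X_i - x) / ell^2
    k = np.exp(-np.sum((X - x)**2, axis=1) / (2 * ell**2))
    grad_mu = (k * gp.alpha_.ravel()) @ ((X - x) / ell**2)
    x = x + 0.3 * grad_mu              # ascend the modeled objective
print("x after local GP policy search:", np.round(x, 3))
```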
- SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desired to design a universal framework for adaptive algorithms to solve general problems.
In particular, our novel framework provides convergence analysis support for adaptive gradient methods in the nonconvex setting.
arXiv Detail & Related papers (2021-06-15T15:16:28Z)
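One way to read the "universal framework" claim is as a single update template with a pluggable preconditioner; the sketch below is an illustrative guess at that structure, not SUPER-ADAM itself.

```python
# Hypothetical sketch: one adaptive update template, two preconditioners.
import numpy as np

def adaptive_sgd(grad_fn, x0, precond, lr=0.1, beta=0.9, steps=200):
    """Update template: x -= lr * momentum / preconditioner."""
    x, m, state = x0.copy(), np.zeros_like(x0), np.zeros_like(x0)
    for _ in range(steps):
        g = grad_fn(x)
        m = beta * m + (1 - beta) * g       # momentum
        H, state = precond(g, state)        # pluggable adaptive preconditioner
        x -= lr * m / H
    return x

def adam_like(g, v):
    v = 0.999 * v + 0.001 * g**2            # EMA of squared gradients
    return np.sqrt(v) + 1e-8, v

def adagrad_like(g, s):
    s = s + g**2                            # running sum of squared gradients
    return np.sqrt(s) + 1e-8, s

rng = np.random.default_rng(0)
grad_fn = lambda x: 2 * x + 0.05 * rng.standard_normal(x.shape)
print(adaptive_sgd(grad_fn, np.ones(3), adam_like))
print(adaptive_sgd(grad_fn, np.ones(3), adagrad_like))
```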
- Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate in Gradient Descent [20.47598828422897]
We propose Meta-Regularization, a novel approach for the adaptive choice of the learning rate in first-order descent methods.
Our approach modifies the objective function by adding a regularization term on the learning rate, and casts the joint updating of the parameters and the learning rate as a maxmin problem.
arXiv Detail & Related papers (2021-04-12T13:13:34Z)
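A loose sketch of the joint parameter/learning-rate update from the entry above, using a simple hypergradient-style rule with a regularizer on the step size; it only approximates the paper's maxmin formulation.

```python
# Hypothetical sketch: adapt the learning rate alongside the parameters.
import numpy as np

rng = np.random.default_rng(0)
x = np.ones(5)
lr, meta_lr, reg = 0.01, 1e-3, 1e-2
g_prev = np.zeros_like(x)
for _ in range(300):
    g = 2 * x + 0.05 * rng.standard_normal(5)    # noisy gradient of ||x||^2
    # The hypergradient of the loss w.r.t. lr is -(g . g_prev); descending it
    # raises lr while successive gradients stay aligned. The reg term keeps
    # the learning rate from drifting (the "regularization on the step size").
    lr = max(lr + meta_lr * (g @ g_prev) - meta_lr * reg * lr, 1e-5)
    x -= lr * g
    g_prev = g
print(f"lr={lr:.4f}, ||x||={np.linalg.norm(x):.4f}")
```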
- Adaptive Importance Sampling for Finite-Sum Optimization and Sampling with Decreasing Step-Sizes [4.355567556995855]
We propose Avare, a simple and efficient algorithm for adaptive importance sampling for finite-sum optimization and sampling with decreasing step-sizes.
Under standard technical conditions, we show that Avare achieves $\mathcal{O}(T^{2/3})$ and $\mathcal{O}(T^{5/6})$ dynamic regret for SGD and SGLD respectively when run with $\mathcal{O}(t^{-1/2})$ step sizes.
arXiv Detail & Related papers (2021-03-23T00:28:15Z)
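A hedged sketch of adaptive importance sampling for a finite sum, in the spirit of the entry above: sampling probabilities track per-example gradient norms, the estimate is kept unbiased by importance weights, and step sizes decrease over time. Avare's actual estimator and rates differ.

```python
# Hypothetical sketch: adaptive importance sampling for finite-sum SGD.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.standard_normal((n, d)) * np.linspace(0.1, 3.0, n)[:, None]  # mixed scales
x = np.ones(d)                     # minimize (1/n) * sum_i (A_i . x)^2
norms = np.ones(n)                 # running estimates of per-example grad norms
for t in range(1, 2001):
    p = norms / norms.sum()        # adaptive sampling distribution
    i = rng.choice(n, p=p)
    g_i = 2 * A[i] * (A[i] @ x)    # gradient of (A_i . x)^2
    g = g_i / (n * p[i])           # importance weight keeps the estimate unbiased
    x -= (0.005 / np.sqrt(t)) * g  # decreasing step sizes
    norms[i] = 0.9 * norms[i] + 0.1 * np.linalg.norm(g_i)
print("||x|| after adaptive-IS SGD:", np.linalg.norm(x))
```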
- Adaptive Sampling for Minimax Fair Classification [40.936345085421955]
We propose an adaptive sampling algorithm based on the principle of optimism, and derive theoretical bounds on its performance.
By deriving algorithm-independent lower bounds for a specific class of problems, we show that the performance achieved by our adaptive scheme cannot be improved in general.
arXiv Detail & Related papers (2021-03-01T04:58:27Z)
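An illustrative sketch of optimism-driven sampling for a minimax objective, per the entry above: each round draws the next labeled example from the group whose upper confidence bound on risk is largest. The UCB form and constants are assumptions.

```python
# Hypothetical sketch: UCB-style adaptive sampling across groups.
import numpy as np

rng = np.random.default_rng(0)
true_risk = np.array([0.10, 0.25, 0.18])   # unknown per-group risks
n = np.ones(3)                             # samples drawn per group so far
r_hat = np.array([float(rng.random() < r) for r in true_risk])
for t in range(1, 501):
    ucb = r_hat + np.sqrt(np.log(t + 1) / (2 * n))  # optimistic risk estimate
    g = int(np.argmax(ucb))                         # most at-risk group wins
    obs = float(rng.random() < true_risk[g])        # noisy 0/1 risk signal
    r_hat[g] = (n[g] * r_hat[g] + obs) / (n[g] + 1) # running-mean update
    n[g] += 1
print("samples per group:", n.astype(int))          # should favor group 1
```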
- Stochastic batch size for adaptive regularization in deep network optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems in the deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)
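A speculative sketch of one reading of the entry above: draw the batch size at random each step and scale the regularization strength with it. The coupling rule is an assumption made for illustration, not the paper's scheme.

```python
# Hypothetical sketch: stochastic batch size with batch-scaled regularization.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
w_true = rng.standard_normal(10)
y = X @ w_true + 0.1 * rng.standard_normal(1000)
w, lam0 = np.zeros(10), 0.1
for _ in range(500):
    b = int(rng.integers(8, 129))                  # stochastic batch size
    idx = rng.choice(1000, size=b, replace=False)
    g = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / b   # minibatch gradient
    lam = lam0 * b / 128                           # regularization tied to b
    w -= 0.01 * (g + lam * w)
print("parameter error:", np.linalg.norm(w - w_true))
```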