An Upper Confidence Bound Approach to Estimating the Maximum Mean
- URL: http://arxiv.org/abs/2408.04179v1
- Date: Thu, 8 Aug 2024 02:53:09 GMT
- Title: An Upper Confidence Bound Approach to Estimating the Maximum Mean
- Authors: Zhang Kun, Liu Guangwu, Shi Wen
- Abstract summary: We study estimation of the maximum mean using an upper confidence bound (UCB) approach.
We establish statistical guarantees, including strong consistency, asymptotic mean squared errors, and central limit theorems (CLTs), for both estimators.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Estimating the maximum mean finds a variety of applications in practice. In this paper, we study estimation of the maximum mean using an upper confidence bound (UCB) approach where the sampling budget is adaptively allocated to one of the systems. We study in depth the existing grand average (GA) estimator, and propose a new largest-size average (LSA) estimator. Specifically, we establish statistical guarantees, including strong consistency, asymptotic mean squared errors, and central limit theorems (CLTs) for both estimators, which are new to the literature. We show that LSA is preferable over GA, as the bias of the former decays at a rate much faster than that of the latter when sample size increases. By using the CLTs, we further construct asymptotically valid confidence intervals for the maximum mean, and propose a single hypothesis test for a multiple comparison problem with application to clinical trials. Statistical efficiency of the resulting point and interval estimates and the proposed single hypothesis test is demonstrated via numerical examples.
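The abstract pins down the two estimators but not the exact UCB index or warm-up rule, so the following is only a minimal sketch of the procedure described above; `ucb_max_mean`, the exploration constant `c`, and the warm-up size are illustrative assumptions, not the paper's design.
```python
import numpy as np

def ucb_max_mean(sample_fns, budget, warmup=2, c=2.0, seed=None):
    """Estimate the maximum mean over K systems under a UCB allocation.

    Returns both the grand average (GA) over every draw and the
    largest-size average (LSA) over the most-sampled system.
    NOTE: the UCB index and warm-up below are assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    K = len(sample_fns)
    obs = [[f(rng) for _ in range(warmup)] for f in sample_fns]  # warm-up draws
    for t in range(warmup * K, budget):
        # allocate the next sample to the system with the highest UCB index
        ucb = [np.mean(o) + np.sqrt(c * np.log(t) / len(o)) for o in obs]
        k = int(np.argmax(ucb))
        obs[k].append(sample_fns[k](rng))
    ga = np.mean(np.concatenate([np.asarray(o) for o in obs]))  # grand average
    lsa = np.mean(max(obs, key=len))  # average over the largest-size sample
    return ga, lsa

# Example: two normal systems with means 0.0 and 0.5; true maximum mean is 0.5.
systems = [lambda r: r.normal(0.0, 1.0), lambda r: r.normal(0.5, 1.0)]
ga_hat, lsa_hat = ucb_max_mean(systems, budget=5000)
```
The contrast the abstract draws is visible in the last lines: GA averages every draw, including those from inferior systems, while LSA keeps only the system the UCB rule samples most, which is why its bias can decay at a faster rate.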
Related papers
- HAVER: Instance-Dependent Error Bounds for Maximum Mean Estimation and Applications to Q-Learning [11.026588768210601]
We study the problem of estimating the value of the largest mean among $K$ distributions via samples from them.
We propose a novel algorithm called HAVER and analyze its mean squared error.
arXiv Detail & Related papers (2024-11-01T07:05:11Z)
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
Temporal Difference (TD) learning, arguably the most widely used algorithm for policy evaluation, serves as a natural framework for this purpose.
In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results.
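For concreteness, here is a minimal sketch of TD(0) with linear function approximation and Polyak-Ruppert iterate averaging, the object the entry's inference results concern; the function name, step-size schedule, and data layout are assumptions for illustration, not the paper's setup.
```python
import numpy as np

def td0_polyak(phis, rewards, next_phis, gamma=0.95, alpha0=0.5, decay=0.75):
    """TD(0) with a linear value function V(s) = phi(s) @ theta.

    theta follows the usual TD update with a decaying step size;
    theta_bar is the Polyak-Ruppert average of the iterates.
    """
    theta = np.zeros(phis.shape[1])
    theta_bar = np.zeros_like(theta)
    for t, (phi, r, phi_next) in enumerate(zip(phis, rewards, next_phis), start=1):
        step = alpha0 * t ** (-decay)                 # polynomially decaying step
        td_error = r + gamma * phi_next @ theta - phi @ theta
        theta = theta + step * td_error * phi
        theta_bar += (theta - theta_bar) / t          # running average of iterates
    return theta_bar
```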
arXiv Detail & Related papers (2024-10-21T15:34:44Z)
- Statistical Inference in Tensor Completion: Optimal Uncertainty Quantification and Statistical-to-Computational Gaps [7.174572371800217]
This paper presents a simple yet efficient method for statistical inference of tensor linear forms using incomplete and noisy observations.
It is suitable for various statistical inference tasks, including constructing confidence intervals, inference under heteroskedastic and sub-exponential noise, and simultaneous testing.
arXiv Detail & Related papers (2024-10-15T03:09:52Z)
- Semiparametric Efficient Inference in Adaptive Experiments [29.43493007296859]
We consider the problem of efficient inference of the Average Treatment Effect in a sequential experiment where the policy governing the assignment of subjects to treatment or control can change over time.
We first provide a central limit theorem for the Adaptive Augmented Inverse-Probability Weighted estimator, which is semiparametrically efficient, under weaker assumptions than those previously made in the literature.
We then consider the sequential inference setting, deriving both asymptotic and nonasymptotic confidence sequences that are considerably tighter than previous methods.
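The estimator named above has a standard non-adaptive form, sketched below; the paper's adaptive version with time-varying propensities and its sequential machinery are omitted here, and the function name is an illustrative assumption.
```python
import numpy as np

def aipw_ate(y, a, e_hat, mu1_hat, mu0_hat):
    """Textbook augmented inverse-probability-weighted ATE estimate.

    y: outcomes; a: 0/1 treatment indicators; e_hat: estimated propensity
    P(A=1|X); mu1_hat / mu0_hat: fitted outcome regressions under
    treatment / control, all evaluated at each subject's covariates.
    """
    y, a = np.asarray(y, float), np.asarray(a, float)
    psi1 = mu1_hat + a * (y - mu1_hat) / e_hat              # doubly robust arm-1 term
    psi0 = mu0_hat + (1 - a) * (y - mu0_hat) / (1 - e_hat)  # doubly robust arm-0 term
    return np.mean(psi1 - psi0)
```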
arXiv Detail & Related papers (2023-11-30T06:25:06Z)
- Overlapping Batch Confidence Intervals on Statistical Functionals Constructed from Time Series: Application to Quantiles, Optimization, and Estimation [5.068678962285631]
We propose a confidence interval procedure for statistical functionals constructed using data from a stationary time series.
The OBx limits, certain functionals of the Wiener process parameterized by the size of the batches and the extent of their overlap, form the essential machinery for characterizing dependence.
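As a concrete instance of the batching machinery, here is a sketch of the classical overlapping-batch-means interval for a stationary mean; the paper's OB-x limits generalize this construction, and the degrees-of-freedom rule below is a common approximation, not taken from the paper.
```python
import numpy as np
from scipy import stats

def obm_mean_ci(x, b, level=0.95):
    """Overlapping-batch-means confidence interval for E[X] from a
    stationary series x, using all n-b+1 overlapping batches of length b."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), float(np.mean(x))
    csum = np.concatenate(([0.0], np.cumsum(x)))
    batch_means = (csum[b:] - csum[:-b]) / b          # all overlapping batch means
    # OBM estimator of the variance parameter sigma^2 = lim n * Var(xbar)
    sig2 = n * b / ((n - b + 1) * (n - b)) * np.sum((batch_means - xbar) ** 2)
    df = 1.5 * (n / b - 1)                            # common OBM df approximation
    half = stats.t.ppf(0.5 + level / 2, df) * np.sqrt(sig2 / n)
    return xbar - half, xbar + half
```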
arXiv Detail & Related papers (2023-07-17T16:21:48Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias constrained estimator (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
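One plausible reading of a bias constraint, consistent with the averaging motivation above, is a penalty that drives the mean prediction error for each unknown toward zero; the sketch below encodes that reading and is an assumption about the mechanism, not the paper's exact training objective.
```python
import numpy as np

def bias_constrained_loss(preds, targets, groups, lam=1.0):
    """Mean squared error plus an average squared-bias penalty.

    groups[i] identifies which underlying unknown sample i measures; the
    penalty pushes each group's mean prediction error toward zero, so that
    averaging several estimates of the same unknown actually helps.
    """
    preds = np.asarray(preds, dtype=float)
    targets = np.asarray(targets, dtype=float)
    groups = np.asarray(groups)
    mse = np.mean((preds - targets) ** 2)
    bias_sq = np.mean([np.mean(preds[groups == g] - targets[groups == g]) ** 2
                       for g in np.unique(groups)])
    return mse + lam * bias_sq
```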
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Minimax Off-Policy Evaluation for Multi-Armed Bandits [58.7013651350436]
We study the problem of off-policy evaluation in the multi-armed bandit model with bounded rewards.
We develop minimax rate-optimal procedures under three settings.
arXiv Detail & Related papers (2021-01-19T18:55:29Z)
- Maximum Entropy competes with Maximum Likelihood [0.0]
The maximum entropy (MAXENT) method has a large number of applications in theoretical and applied machine learning.
We show that MAXENT applies in sparse data regimes, but needs specific types of prior information.
In particular, MAXENT can outperform the optimally regularized ML provided that there are prior rank correlations between the estimated random quantity and its probabilities.
arXiv Detail & Related papers (2020-12-17T07:44:22Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Minimax Optimal Estimation of KL Divergence for Continuous Distributions [56.29748742084386]
Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
One simple and effective estimator is based on the k nearest neighbor distances between these samples.
arXiv Detail & Related papers (2020-02-26T16:37:37Z)
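The k-nearest-neighbor construction referred to above is classical; here is a minimal sketch under the usual i.i.d., continuous-density assumptions (with k=1 this reduces to the well-known Wang-Kulkarni-Verdu form).
```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(x, y, k=1):
    """k-NN estimate of KL(P || Q) from samples x ~ P (n x d) and y ~ Q (m x d)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    if y.ndim == 1:
        y = y[:, None]
    n, d = x.shape
    m = y.shape[0]
    # k-th neighbor distance within x; k+1 because each point is its own neighbor
    rho = cKDTree(x).query(x, k=k + 1)[0][:, -1]
    # k-th neighbor distance from each x-point into the y sample
    nu = cKDTree(y).query(x, k=k)[0]
    nu = nu[:, -1] if nu.ndim == 2 else nu
    return d * float(np.mean(np.log(nu / rho))) + np.log(m / (n - 1))
```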