Near-optimal inference in adaptive linear regression
- URL: http://arxiv.org/abs/2107.02266v3
- Date: Tue, 21 Mar 2023 18:18:30 GMT
- Title: Near-optimal inference in adaptive linear regression
- Authors: Koulik Khamaru, Yash Deshpande, Tor Lattimore, Lester Mackey, Martin
J. Wainwright
- Abstract summary: Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandits, autoregressive time series estimation, and active learning with exploration.
- Score: 60.08422051718195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When data is collected in an adaptive manner, even simple methods like
ordinary least squares can exhibit non-normal asymptotic behavior. As an
undesirable consequence, hypothesis tests and confidence intervals based on
asymptotic normality can lead to erroneous results. We propose a family of
online debiasing estimators to correct these distributional anomalies in least
squares estimation. Our proposed methods take advantage of the covariance
structure present in the dataset and provide sharper estimates in directions
for which more information has accrued. We establish an asymptotic normality
property for our proposed online debiasing estimators under mild conditions on
the data collection process and provide asymptotically exact confidence
intervals. We additionally prove a minimax lower bound for the adaptive linear
regression problem, thereby providing a baseline by which to compare
estimators. We identify conditions under which our proposed estimators
achieve the minimax lower bound. We demonstrate the usefulness of our theory
via applications to multi-armed bandits, autoregressive time series
estimation, and active learning with exploration.
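For intuition, here is a minimal self-contained simulation of the phenomenon (our illustration, not code from the paper): under epsilon-greedy collection in a two-armed bandit, the least squares estimate of an arm mean reduces to that arm's sample mean, and its sampling distribution is biased and skewed.
```python
import numpy as np

rng = np.random.default_rng(0)

def ols_arm0_mean(T=500, eps=0.1):
    """epsilon-greedy two-armed bandit; both arms ~ N(0, 1).
    Regressing rewards on arm indicators, OLS gives per-arm sample means."""
    rewards = [[], []]
    for t in range(T):
        if t < 2:
            a = t                                  # pull each arm once
        elif rng.random() < eps:
            a = int(rng.integers(2))               # explore
        else:
            a = int(np.mean(rewards[1]) > np.mean(rewards[0]))  # exploit
        rewards[a].append(rng.normal())
    return np.mean(rewards[0])

ests = np.array([ols_arm0_mean() for _ in range(2000)])
print(f"mean OLS estimate of arm 0: {ests.mean():+.4f}  (truth: 0)")
# The estimates are negatively biased and skewed: unlucky early draws make
# the algorithm abandon an arm before its sample mean can recover. These
# are the distributional anomalies that online debiasing corrects.
```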
Related papers
- Statistical Limits of Adaptive Linear Models: Low-Dimensional Estimation
and Inference [5.924780594614676]
We show that the error of estimating a single coordinate can be enlarged by a multiple of $\sqrt{d}$ when data are allowed to be arbitrarily adaptive.
We propose a novel estimator for single-coordinate inference obtained by solving a Two-stage Adaptive Linear Estimating equation (TALE).
arXiv Detail & Related papers (2023-10-01T00:45:09Z)
- Error Reduction from Stacked Regressions [12.657895453939298]
Stacking regressions is an ensemble technique that forms linear combinations of different regression estimators to enhance predictive accuracy.
In this paper, we learn these weights by minimizing a regularized version of the empirical risk subject to a nonnegativity constraint.
Thanks to an adaptive shrinkage effect, the resulting stacked estimator has strictly smaller population risk than the best single estimator among them.
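As a hedged sketch of this recipe (ours; it uses a plain nonnegativity constraint and omits the paper's particular regularizer), stacking weights can be learned by nonnegative least squares on out-of-fold predictions:
```python
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)
base = [Ridge(alpha=1.0), Lasso(alpha=0.5), DecisionTreeRegressor(max_depth=4)]

# Out-of-fold predictions avoid rewarding estimators that overfit.
P = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in base])

# Stacking weights: minimize ||y - P w||^2 subject to w >= 0.
w, _ = nnls(P, y)
print("stacking weights:", np.round(w, 3))

# The stacked prediction is the w-weighted combination of refit base models.
stacked = sum(wi * m.fit(X, y).predict(X) for wi, m in zip(w, base))
```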
arXiv Detail & Related papers (2023-09-18T15:42:12Z)
- Statistical Estimation Under Distribution Shift: Wasserstein Perturbations and Minimax Theory [24.540342159350015]
We focus on Wasserstein distribution shifts, where every data point may undergo a slight perturbation.
We consider perturbations that are either independent or coordinated joint shifts across data points.
We analyze several important statistical problems, including location estimation, linear regression, and non-parametric density estimation.
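A toy illustration of the setting (ours, not the paper's analysis): when every point moves by at most eps, the sample mean can shift by eps under a coordinated perturbation but far less under independent ones.
```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 1000, 0.05

x = rng.normal(0.0, 1.0, size=n)            # clean sample
# Coordinated joint shift: an adversary moves every point by +eps,
# an infinity-Wasserstein perturbation of size eps.
x_shift = x + eps
# Independent perturbations: each point moves by at most eps on its own.
x_indep = x + rng.uniform(-eps, eps, size=n)

print(f"mean shift, coordinated: {abs(x_shift.mean() - x.mean()):.4f} (= eps)")
print(f"mean shift, independent: {abs(x_indep.mean() - x.mean()):.4f} (<< eps)")
# The gap between coordinated and independent shifts is exactly the
# distinction the paper's minimax theory quantifies.
```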
arXiv Detail & Related papers (2023-08-03T16:19:40Z)
- Adaptive Linear Estimating Equations [5.985204759362746]
In this paper, we propose a general method for constructing debiased estimators.
It makes use of the idea of adaptive linear estimating equations, and we establish theoretical guarantees of asymptotic normality.
A salient feature of our estimator is that, in the context of multi-armed bandits, it retains the non-asymptotic performance of the least squares estimator.
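To convey the flavor under our own simplifications: replace the least squares score with a weighted score whose weight w_t depends only on the past, so the estimating function is a martingale at the truth. The inverse-propensity weight below is one simple predictable choice, not necessarily the paper's.
```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_eq_estimate(T=2000, eps=0.2):
    """Two-armed eps-greedy bandit; estimate arm 0's mean (truth 0.0) by
    solving the linear estimating equation  sum_t w_t (y_t - theta) = 0
    over rounds where arm 0 is pulled, with weight w_t = 1/e_t that is
    predictable (a function of the past only)."""
    n = np.zeros(2); s = np.zeros(2)
    num = den = 0.0
    for t in range(T):
        greedy = int(s[1] / max(n[1], 1.0) > s[0] / max(n[0], 1.0))
        e0 = eps / 2 + (1 - eps) * (greedy == 0)   # P(a_t = 0 | past)
        a = 0 if rng.random() < e0 else 1
        y = rng.normal((0.0, 0.1)[a], 1.0)
        if a == 0:
            num += y / e0                           # w_t * y_t
            den += 1.0 / e0                         # w_t
        n[a] += 1; s[a] += y
    # At the true mean the estimating function is a martingale, which is
    # what restores asymptotic normality under enough exploration.
    return num / den, s[0] / max(n[0], 1.0)

ests = np.array([weighted_eq_estimate() for _ in range(1000)])
print("weighted-equation estimate, mean:", ests[:, 0].mean().round(3))
print("plain sample mean (OLS),    mean:", ests[:, 1].mean().round(3))
```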
arXiv Detail & Related papers (2023-07-14T12:55:47Z)
- Semi-parametric inference based on adaptively collected data [34.56133468275712]
We construct suitably weighted estimating equations that account for adaptivity in data collection.
Our results characterize the degree of "explorability" required for normality to hold.
We illustrate our general theory with concrete consequences for various problems, including standard linear bandits and sparse generalized bandits.
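A hedged numerical probe of that explorability requirement (our construction, reusing a simple propensity-weighted estimating equation for a bandit arm mean): normality degrades as the exploration rate shrinks and the weights blow up.
```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)

def ipw_mean(T=500, eps=0.1):
    """Propensity-weighted mean of arm 0 under eps-greedy collection."""
    n = np.zeros(2); s = np.zeros(2); num = den = 0.0
    for t in range(T):
        greedy = int(s[1] / max(n[1], 1.0) > s[0] / max(n[0], 1.0))
        e0 = eps / 2 + (1 - eps) * (greedy == 0)
        a = 0 if rng.random() < e0 else 1
        y = rng.normal(0.0, 1.0)
        if a == 0:
            num += y / e0; den += 1.0 / e0
        n[a] += 1; s[a] += y
    return num / den

for eps in (0.5, 0.02):
    est = np.array([ipw_mean(eps=eps) for _ in range(1000)])
    print(f"eps={eps}: excess kurtosis of estimates = {kurtosis(est):+.1f}")
# With generous exploration the estimates look normal (kurtosis near 0);
# with little exploration the inverse-propensity weights explode and the
# tails grow heavy, mirroring the explorability requirement above.
```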
arXiv Detail & Related papers (2023-03-05T00:45:32Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints, yielding a bias-constrained estimator (BCE).
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
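A minimal sketch of the bias-constrained idea under our own assumptions about the setup (the network, data model, and penalty weight below are illustrative, not the paper's): augment the MSE training loss with an empirical squared-bias penalty, estimated by averaging the network's error over several noise realizations of the same underlying parameter.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
lam, K = 10.0, 16                      # bias penalty weight, noise draws

for step in range(2000):
    theta = torch.rand(64, 1) * 4 - 2            # true parameters in [-2, 2]
    # K independent 8-sample measurement vectors per theta: x = theta + noise
    x = theta.unsqueeze(1) + torch.randn(64, K, 8)
    est = net(x.reshape(-1, 8)).reshape(64, K)
    err = est - theta                            # (batch, K) estimation errors
    mse = (err ** 2).mean()
    bias = err.mean(dim=1)                       # avg error over noise draws
    loss = mse + lam * (bias ** 2).mean()        # bias-constrained objective
    opt.zero_grad(); loss.backward(); opt.step()
```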
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
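For context, the sketch below computes the marginal likelihood exactly in the one model class where it has a closed form, Bayesian linear (here polynomial) regression, and uses it to pick a model; scalable approximations of this integral for deep networks are what the paper develops. The prior and noise scales are illustrative.
```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 40)
y = np.sin(2 * x) + rng.normal(0, 0.1, size=x.size)   # data from a smooth fn

def log_evidence(degree, alpha=1.0, sigma=0.1):
    """Exact log marginal likelihood of Bayesian polynomial regression:
    w ~ N(0, I/alpha), y | w ~ N(Phi w, sigma^2 I)  =>
    y ~ N(0, sigma^2 I + Phi Phi^T / alpha)."""
    Phi = np.vander(x, degree + 1, increasing=True)
    cov = sigma ** 2 * np.eye(x.size) + Phi @ Phi.T / alpha
    return multivariate_normal(mean=np.zeros(x.size), cov=cov).logpdf(y)

for d in (1, 3, 5, 9):
    print(f"degree {d}: log evidence = {log_evidence(d):8.1f}")
# The evidence peaks at a moderate degree: unlike training likelihood it
# automatically penalizes over-flexible models (Occam's razor), which is
# why it is attractive for model selection when validation data is scarce.
```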
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
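To ground the acronym, here is naive (non-amortized) CNML for a tiny binary classifier, our sketch: for each candidate label of a query point, refit on the training set augmented with that labeled point, then normalize the resulting likelihoods. ACNML's contribution is replacing the expensive per-query refits with an approximate posterior.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, random_state=0)

def cnml_probs(x_query):
    """Naive CNML: p(y|x) is proportional to the likelihood the refit model
    assigns to label y at x, after training on (X, y) plus (x, y)."""
    scores = []
    for label in (0, 1):
        Xa = np.vstack([X, x_query]); ya = np.append(y, label)
        model = LogisticRegression(max_iter=1000).fit(Xa, ya)
        scores.append(model.predict_proba([x_query])[0, label])
    scores = np.array(scores)
    return scores / scores.sum()            # normalize over candidate labels

# Far from the training data either refit can claim its own label, so CNML
# backs off toward 0.5: conservative out-of-distribution behavior.
print("near data:", cnml_probs(X[0]).round(3))
print("far away :", cnml_probs(np.array([8.0, 8.0])).round(3))
```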
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator [93.05919133288161]
We show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization.
This provably reduces the mean squared error.
We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
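A self-contained sketch of the idea (ours): condition on the realized discrete sample D and average the straight-through Gumbel-Softmax gradient over Gumbel draws consistent with D. Rejection sampling below is an illustrative stand-in for the paper's exact conditional sampling scheme.
```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def st_gs_grad(theta, g, tau, target):
    """Straight-through Gumbel-Softmax gradient of L(D) = -D[target] wrt
    logits theta: hard sample forward, softmax relaxation backward."""
    y = softmax((theta + g) / tau)              # relaxed sample (backward)
    dL_dD = np.zeros_like(theta); dL_dD[target] = -1.0
    J = (np.diag(y) - np.outer(y, y)) / tau     # Jacobian of the relaxation
    return J @ dL_dD

tau, target = 0.5, 2
theta = rng.normal(size=4)

g0 = rng.gumbel(size=4)
d0 = int(np.argmax(theta + g0))                  # hard categorical sample D
grad_plain = st_gs_grad(theta, g0, tau, target)  # one-draw ST-GS estimate

# Rao-Blackwellize: average the same estimator over Gumbel draws that are
# consistent with the realized D; Var(E[X | D]) <= Var(X).
cond = []
while len(cond) < 500:
    g = rng.gumbel(size=4)
    if np.argmax(theta + g) == d0:               # keep draws with the same D
        cond.append(st_gs_grad(theta, g, tau, target))
grad_rb = np.mean(cond, axis=0)                  # approximates E[grad | D]
print("plain:", grad_plain.round(3))
print("RB   :", grad_rb.round(3))
```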
arXiv Detail & Related papers (2020-10-09T22:54:38Z)
- GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation of stationary values remains possible in important applications even when only offline data is available.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
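The underlying correction in the simplest tabular case, under our own simplifications (the transition matrix of the target policy is assumed known, and the ratio is computed in closed form rather than learned from GenDICE's minimax objective):
```python
import numpy as np

rng = np.random.default_rng(0)
S = 4
P = rng.dirichlet(np.ones(S), size=S)           # target-policy P[s, s']
r = rng.uniform(0, 1, size=S)                   # reward for each state

# Stationary distribution d of the target policy: d = P^T d, sum(d) = 1.
evals, evecs = np.linalg.eig(P.T)
d = np.real(evecs[:, np.argmax(np.real(evals))])
d = d / d.sum()

# Off-policy data: states drawn from some other (behavior) distribution.
p_behavior = rng.dirichlet(np.ones(S))
samples = rng.choice(S, size=20000, p=p_behavior)
p_hat = np.bincount(samples, minlength=S) / samples.size

# Distribution-correction ratio tau(s) = d(s) / p_hat(s); reweighting the
# observed rewards by tau recovers the target stationary value.
tau = d / p_hat
est = np.mean(tau[samples] * r[samples])
print(f"corrected estimate: {est:.4f}   truth: {d @ r:.4f}")
```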
arXiv Detail & Related papers (2020-02-21T00:27:52Z)