Tolerance and Prediction Intervals for Non-normal Models
- URL: http://arxiv.org/abs/2011.11583v5
- Date: Mon, 17 Jan 2022 13:43:38 GMT
- Title: Tolerance and Prediction Intervals for Non-normal Models
- Authors: Geoffrey S Johnson
- Abstract summary: A prediction interval covers a future observation from a random process in repeated sampling.
A tolerance interval covers a population percentile in repeated sampling and is often based on a pivotal quantity.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A prediction interval covers a future observation from a random process in
repeated sampling, and is typically constructed by identifying a pivotal
quantity that is also an ancillary statistic. Analogously, a tolerance interval
covers a population percentile in repeated sampling and is often based on a
pivotal quantity. One approach we consider in non-normal models leverages a
link function resulting in a pivotal quantity that is approximately normally
distributed. In settings where this normal approximation does not hold we
consider a second approach for tolerance and prediction based on a confidence
interval for the mean. These methods are intuitive, simple to implement, have
proper operating characteristics, and are computationally efficient compared to
Bayesian, re-sampling, and machine learning methods. This is demonstrated in
the context of multi-site clinical trial recruitment with staggered site
initiation, real-world time on treatment, and end-of-study success for a
clinical endpoint.
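The pivotal-quantity construction the abstract describes has closed forms in the normal model. A minimal Python sketch under that assumption (data and coverage levels are invented for illustration; the paper's link-function extension to non-normal models is not shown):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=30)  # hypothetical sample
n = len(x)
xbar, s = x.mean(), x.std(ddof=1)

# Prediction interval: the pivot (X_new - Xbar) / (s * sqrt(1 + 1/n))
# follows a t distribution with n-1 degrees of freedom.
alpha = 0.05
tq = stats.t.ppf(1 - alpha / 2, df=n - 1)
half = tq * s * np.sqrt(1 + 1 / n)
pred_lo, pred_hi = xbar - half, xbar + half

# One-sided upper tolerance bound covering the p-th percentile with
# confidence 1 - alpha: k = t'_{n-1, 1-alpha}(nc = z_p * sqrt(n)) / sqrt(n),
# where t' is the noncentral t distribution.
p = 0.90
nc = stats.norm.ppf(p) * np.sqrt(n)
k = stats.nct.ppf(1 - alpha, df=n - 1, nc=nc) / np.sqrt(n)
tol_upper = xbar + k * s

print(f"95% prediction interval: ({pred_lo:.2f}, {pred_hi:.2f})")
print(f"upper tolerance bound for the 90th percentile: {tol_upper:.2f}")
```

Note that the tolerance factor k exceeds the plain normal quantile z_p, reflecting the extra uncertainty from estimating the mean and standard deviation.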
Related papers
- Relaxed Quantile Regression: Prediction Intervals for Asymmetric Noise [51.87307904567702]
Quantile regression is a leading approach for obtaining such intervals via the empirical estimation of quantiles in the distribution of outputs.
We propose Relaxed Quantile Regression (RQR), a direct alternative to quantile regression based interval construction that removes this arbitrary constraint.
We demonstrate that this added flexibility yields intervals with improved properties.
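RQR itself is not spelled out here; as context, the quantile-regression interval construction it builds on can be sketched by fitting one model per quantile with the pinball loss (the data below are invented, and this is standard quantile regression, not the RQR method):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(0, 10, size=(n, 1))
# heteroscedastic, asymmetric noise: spread grows with x
y = np.sin(X[:, 0]) + rng.gamma(shape=2.0, scale=0.1 * X[:, 0])

# One model per target quantile; together they bound a ~90% interval.
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05, random_state=0).fit(X, y)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95, random_state=0).fit(X, y)

inside = (y >= lo.predict(X)) & (y <= hi.predict(X))
print(f"empirical coverage: {inside.mean():.2f}")
```

Because each bound is fit independently as a quantile, the interval endpoints are constrained to be quantile estimates; RQR's contribution is to relax that constraint.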
arXiv Detail & Related papers (2024-06-05T13:36:38Z) - Multi-CATE: Multi-Accurate Conditional Average Treatment Effect Estimation Robust to Unknown Covariate Shifts [12.289361708127876]
We use methodology for learning multi-accurate predictors to post-process CATE T-learners.
We show how this approach can combine (large) confounded observational and (smaller) randomized datasets.
arXiv Detail & Related papers (2024-05-28T14:12:25Z) - Simultaneous Inference for Local Structural Parameters with Random Forests [19.014535120129338]
We construct simultaneous confidence intervals for solutions to conditional moment equations.
We obtain several new order-explicit results on the concentration and normal approximation of high-dimensional U-statistics.
arXiv Detail & Related papers (2024-05-13T15:46:11Z) - A Statistical Model for Predicting Generalization in Few-Shot Classification [6.158812834002346]
We introduce a Gaussian model of the feature distribution to predict the generalization error.
We show that our approach outperforms alternatives such as the leave-one-out cross-validation strategy.
arXiv Detail & Related papers (2022-12-13T10:21:15Z) - Continuous-Time Modeling of Counterfactual Outcomes Using Neural Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z) - Counterfactual inference for sequential experiments [17.817769460838665]
We consider after-study statistical inference for sequentially designed experiments wherein multiple units are assigned treatments for multiple time points.
Our goal is to provide inference guarantees for the counterfactual mean at the smallest possible scale.
We illustrate our theory via several simulations and a case study involving data from a mobile health clinical trial HeartSteps.
arXiv Detail & Related papers (2022-02-14T17:24:27Z) - Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical-computational trade-offs of different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z) - Sequential Deconfounding for Causal Inference with Unobserved Confounders [18.586616164230566]
We develop the Sequential Deconfounder, a method that enables estimating individualized treatment effects over time.
This is the first deconfounding method that can be used in a general sequential setting.
We prove that using our method yields unbiased estimates of individualized treatment responses over time.
arXiv Detail & Related papers (2021-04-16T09:56:39Z) - Performance metrics for intervention-triggering prediction models do not reflect an expected reduction in outcomes from using the model [71.9860741092209]
Clinical researchers often select among and evaluate risk prediction models.
Standard metrics calculated from retrospective data are only related to model utility under certain assumptions.
When predictions are delivered repeatedly throughout time, the relationship between standard metrics and utility is further complicated.
arXiv Detail & Related papers (2020-06-02T16:26:49Z) - Batch Stationary Distribution Estimation [98.18201132095066]
We consider the problem of approximating the stationary distribution of an ergodic Markov chain given a set of sampled transitions.
We propose a consistent estimator that is based on recovering a correction ratio function over the given data.
arXiv Detail & Related papers (2020-03-02T09:10:01Z) - Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
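The decoupled sampling in the last paper combines random-feature priors with Matheron's rule; as a point of reference, the standard exact posterior sampling it is designed to accelerate can be sketched with NumPy (the kernel, training points, and hyperparameters below are invented for illustration):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(1)
X = np.array([-2.0, -1.0, 0.5, 1.5])  # hypothetical training inputs
y = np.sin(X)                          # noiseless observations
Xs = np.linspace(-3, 3, 50)            # test grid
jitter = 1e-6

# Exact GP posterior: O(n^3) Cholesky of the training kernel matrix.
K = rbf(X, X) + jitter * np.eye(len(X))
Ks = rbf(X, Xs)
Kss = rbf(Xs, Xs)

L = np.linalg.cholesky(K)
w = np.linalg.solve(L.T, np.linalg.solve(L, y))
mu = Ks.T @ w                # posterior mean on the grid
v = np.linalg.solve(L, Ks)
cov = Kss - v.T @ v          # posterior covariance on the grid

# Draw joint posterior samples over the whole grid.
samples = rng.multivariate_normal(mu, cov + 1e-8 * np.eye(len(Xs)), size=3)
print(samples.shape)
```

Sampling this way costs cubically in the number of test locations per draw, which is the bottleneck that decoupled sample paths avoid.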
This list is automatically generated from the titles and abstracts of the papers in this site.