Measurement Error in Nutritional Epidemiology: A Survey
- URL: http://arxiv.org/abs/2004.06448v2
- Date: Mon, 13 Jul 2020 09:35:18 GMT
- Title: Measurement Error in Nutritional Epidemiology: A Survey
- Authors: Huimin Peng
- Abstract summary: This article reviews bias-correction models for measurement error of exposure variables in the field of nutritional epidemiology.
Because of measurement error, inference on the parameter estimate is conservative and the confidence interval for the slope parameter is too narrow.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article reviews bias-correction models for measurement error in
exposure variables in the field of nutritional epidemiology. Measurement error
usually attenuates the estimated slope towards zero. Because of measurement
error, inference on the parameter estimate is conservative and the confidence
interval for the slope parameter is too narrow. Bias correction in estimators
and confidence intervals is of primary interest. We review the following
bias-correction models: regression calibration methods, likelihood-based
models, missing-data models, simulation-based methods, nonparametric models,
and sampling-based procedures.
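The attenuation described in the abstract, and the regression-calibration correction, can be sketched in a small simulation. This is a minimal illustration, assuming a Gaussian exposure, classical additive error with known variance, and illustrative variable names; it is not the article's own code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
beta = 2.0
sigma_x, sigma_u = 1.0, 1.0          # true exposure SD and error SD

x = rng.normal(0.0, sigma_x, n)       # true exposure
w = x + rng.normal(0.0, sigma_u, n)   # mismeasured exposure
y = beta * x + rng.normal(0.0, 1.0, n)

# Naive slope from regressing y on w is attenuated by the factor
# lambda = sigma_x^2 / (sigma_x^2 + sigma_u^2), which is 0.5 here.
naive = np.cov(w, y)[0, 1] / np.var(w)

# Regression calibration: replace w with E[x | w] and refit.
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)
x_hat = lam * w                       # E[x | w] under normality, zero means
corrected = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
```

Here `naive` lands near `beta * lam = 1.0` while `corrected` recovers the true slope near 2.0, matching the attenuation-towards-zero behavior the abstract describes.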
Related papers
- Debiased high-dimensional regression calibration for errors-in-variables log-contrast models [0.999726509256195]
Motivated by the challenges in analyzing gut microbiome and metagenomic data, this work aims to tackle the issue of measurement errors in high-dimensional regression models.
This paper marks a pioneering effort in conducting statistical inference on high-dimensional compositional data affected by mismeasured or contaminated data.
arXiv Detail & Related papers (2024-09-11T18:47:28Z) - Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed procedure accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z) - Identifiable causal inference with noisy treatment and no side information [6.432072145009342]
This study proposes a model that assumes a continuous treatment variable that is inaccurately measured.
We prove that our model's causal effect estimates are identifiable, even without side information and knowledge of the measurement error variance.
Our work extends the range of applications in which reliable causal inference can be conducted.
arXiv Detail & Related papers (2023-06-18T18:38:10Z) - Distribution-Free Model-Agnostic Regression Calibration via
Nonparametric Methods [9.662269016653296]
We consider an individual calibration objective for characterizing the quantiles of the prediction model.
Existing methods largely lack statistical guarantees in terms of individual calibration.
We propose simple nonparametric calibration methods that are agnostic of the underlying prediction model.
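One simple distribution-free device in this spirit is to calibrate prediction intervals from held-out residual quantiles. This is a sketch of the general split-calibration idea, not the paper's actual method; the crude `predict` model, seed, and set sizes are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# A given prediction model, treated as a black box; here a crude one.
predict = lambda x: 1.5 * x

# Held-out calibration data from the true relationship y = 2x + noise.
x_cal = rng.normal(size=2000)
y_cal = 2.0 * x_cal + rng.normal(size=2000)

# Model-agnostic step: the empirical (1 - alpha) quantile of the
# absolute residuals on held-out data gives a prediction-interval
# half-width, whatever the underlying model is.
alpha = 0.1
q = np.quantile(np.abs(y_cal - predict(x_cal)), 1 - alpha)

# Intervals [predict(x0) - q, predict(x0) + q] on fresh data cover
# roughly (1 - alpha) of the true outcomes.
x_new = rng.normal(size=5000)
y_new = 2.0 * x_new + rng.normal(size=5000)
covered = np.mean(np.abs(y_new - predict(x_new)) <= q)
```

The calibration step never looks inside `predict`, which is what "agnostic of the underlying prediction model" means in practice: even a badly biased model ends up with intervals of roughly the nominal coverage.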
arXiv Detail & Related papers (2023-05-20T21:31:51Z)
- Calibration of Neural Networks [77.34726150561087]
This paper presents a survey of confidence calibration problems in the context of neural networks.
We analyze problem statement, calibration definitions, and different approaches to evaluation.
Empirical experiments cover various datasets and models, comparing calibration methods according to different criteria.
arXiv Detail & Related papers (2023-03-19T20:27:51Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Prediction Errors for Penalized Regressions based on Generalized Approximate Message Passing [0.0]
We derive the forms of estimators for the prediction errors: $C_p$ criterion, information criteria, and leave-one-out cross validation (LOOCV) error.
In the framework of GAMP, we show that the information criteria can be expressed by using the variance of the estimates.
arXiv Detail & Related papers (2022-06-26T09:42:39Z)
- Why Calibration Error is Wrong Given Model Uncertainty: Using Posterior Predictive Checks with Deep Learning [0.0]
We show how calibration error and its variants are almost always incorrect to use given model uncertainty.
We show how this mistake can lead to trust in bad models and mistrust in good models.
arXiv Detail & Related papers (2021-12-02T18:26:30Z)
- Increasing the efficiency of randomized trial estimates via linear adjustment for a prognostic score [59.75318183140857]
Estimating causal effects from randomized experiments is central to clinical research.
Most methods for historical borrowing achieve reductions in variance by sacrificing strict type-I error rate control.
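The general idea of variance reduction via linear adjustment for a prognostic score can be sketched as follows. This is an illustrative simulation under assumed data-generating values, not this paper's estimator; the prognostic score is simulated rather than fit on historical data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000

# Prognostic score m: a prediction of the control outcome, in practice
# fit on historical data; here simulated as a noisy version of the
# true prognosis.
prognosis = rng.normal(size=n)
m = prognosis + rng.normal(scale=0.3, size=n)

t = rng.integers(0, 2, size=n)           # randomized treatment assignment
tau = 1.0                                # true treatment effect
y = tau * t + prognosis + rng.normal(scale=0.5, size=n)

# Unadjusted estimate: difference in group means.
unadj = y[t == 1].mean() - y[t == 0].mean()

# Linear adjustment: regress y on an intercept, treatment, and the
# prognostic score. Randomization keeps the treatment coefficient
# unbiased while the score soaks up outcome variance.
X = np.column_stack([np.ones(n), t, m])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
adj = coef[1]
```

Because treatment is randomized and therefore independent of `m`, the adjusted coefficient targets the same effect as the difference in means but with a smaller standard error, without the type-I-error cost that direct historical borrowing can incur.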
arXiv Detail & Related papers (2020-12-17T21:10:10Z)
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations [98.3066727301239]
We identify two key properties of the training data that drive this behavior.
We show how the inductive bias of models towards "memorizing" fewer examples can cause overparameterization to hurt.
arXiv Detail & Related papers (2020-05-09T01:59:13Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.