Adaptive debiased machine learning using data-driven model selection techniques
- URL: http://arxiv.org/abs/2307.12544v1
- Date: Mon, 24 Jul 2023 06:16:17 GMT
- Title: Adaptive debiased machine learning using data-driven model selection techniques
- Authors: Lars van der Laan, Marco Carone, Alex Luedtke, Mark van der Laan
- Abstract summary: Adaptive Debiased Machine Learning (ADML) is a nonparametric framework that combines data-driven model selection and debiased machine learning techniques.
ADML avoids the bias introduced by model misspecification and remains free from the restrictions of parametric and semiparametric models.
We provide a broad class of ADML estimators for estimating the average treatment effect in adaptive partially linear regression models.
- Score: 0.5735035463793007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Debiased machine learning estimators for nonparametric inference of smooth
functionals of the data-generating distribution can suffer from excessive
variability and instability. For this reason, practitioners may resort to
simpler models based on parametric or semiparametric assumptions. However, such
simplifying assumptions may fail to hold, and estimates may then be biased due
to model misspecification. To address this problem, we propose Adaptive
Debiased Machine Learning (ADML), a nonparametric framework that combines
data-driven model selection and debiased machine learning techniques to
construct asymptotically linear, adaptive, and superefficient estimators for
pathwise differentiable functionals. By learning model structure directly from
data, ADML avoids the bias introduced by model misspecification and remains
free from the restrictions of parametric and semiparametric models. While they
may exhibit irregular behavior for the target parameter in a nonparametric
statistical model, we demonstrate that ADML estimators provide regular and
locally uniformly valid inference for a projection-based oracle parameter.
Importantly, this oracle parameter agrees with the original target parameter
for distributions within an unknown but correctly specified oracle statistical
submodel that is learned from the data. This finding implies that there is no
penalty, in a local asymptotic sense, for conducting data-driven model
selection compared to having prior knowledge of the oracle submodel and oracle
parameter. To demonstrate the practical applicability of our theory, we provide
a broad class of ADML estimators for estimating the average treatment effect in
adaptive partially linear regression models.
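As a concrete point of reference, the sketch below implements the standard one-step debiased (AIPW) estimator of the average treatment effect that ADML builds on. The data-driven model-selection layer that defines ADML is not reproduced here; the gradient-boosted nuisance fits and the synthetic data-generating process are illustrative assumptions only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

def aipw_ate(X, A, Y):
    """One-step debiased (AIPW) estimate of the average treatment effect.
    Cross-fitting of the nuisance estimators is omitted for brevity but
    should be used in practice."""
    ps = GradientBoostingClassifier().fit(X, A).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)                      # guard against extreme weights
    mu1 = GradientBoostingRegressor().fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = GradientBoostingRegressor().fit(X[A == 0], Y[A == 0]).predict(X)
    # Plug-in estimate plus the efficient influence function correction.
    phi = mu1 - mu0 + A * (Y - mu1) / ps - (1 - A) * (Y - mu0) / (1 - ps)
    return phi.mean(), phi.std() / np.sqrt(len(Y))    # estimate and standard error

rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))       # confounded treatment
Y = X[:, 0] + 2.0 * A + rng.standard_normal(n)        # true ATE = 2
print(aipw_ate(X, A, Y))
```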
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z)
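For orientation, the influence-function recipe is easiest to state in the convex case. The sketch below follows the classic Koh-and-Liang-style formula for L2-regularized logistic regression, score_i = -g_test^T H^{-1} g_i; extending this to diffusion models is the hard part the paper addresses, and the function name and regularization strength lam here are illustrative assumptions.

```python
import numpy as np

def influence_scores(Xtr, ytr, x_test, y_test, theta, lam=1e-3):
    """Attribute a test loss to training points via influence functions for
    L2-regularized logistic regression, with theta assumed to be at the
    optimum: score_i = -grad_test^T H^{-1} grad_i."""
    def grad(x, y):                                    # per-example loss gradient
        return (1 / (1 + np.exp(-x @ theta)) - y) * x
    p = 1 / (1 + np.exp(-Xtr @ theta))
    H = (Xtr * (p * (1 - p))[:, None]).T @ Xtr / len(ytr) + lam * np.eye(len(theta))
    v = np.linalg.solve(H, grad(x_test, y_test))       # H^{-1} grad_test
    # Positive score: upweighting training point i would increase the test loss.
    return np.array([-grad(x, y) @ v for x, y in zip(Xtr, ytr)])
```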
- Overparameterized Multiple Linear Regression as Hyper-Curve Fitting [0.0]
It is proven that a linear model will produce exact predictions even in the presence of nonlinear dependencies that violate the model assumptions.
The hyper-curve approach is especially suited for the regularization of problems with noise in predictor variables and can be used to remove noisy and "improper" predictors from the model.
arXiv Detail & Related papers (2024-04-11T15:43:11Z)
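The headline fact is easy to reproduce: with more linear-in-parameters features than samples, the minimum-norm least-squares fit reproduces the training responses exactly even though the true dependence is nonlinear. The random-feature design below is an illustrative stand-in, not the paper's hyper-curve construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonlinear ground truth that violates a linear model's assumptions.
x = rng.uniform(-1, 1, 20)
y = np.sin(4 * x) + x ** 2

# Overparameterized linear model: 40 random-feature columns for 20 samples.
W = rng.standard_normal(40)
b = rng.uniform(0, 2 * np.pi, 40)
Phi = np.cos(x[:, None] * W + b)                      # (20, 40) design matrix
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # minimum-norm solution

print(np.max(np.abs(Phi @ coef - y)))                 # ~1e-13: exact on training data
```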
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL), a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a Bayesian online learning model that combines a neural representation with a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
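A minimal sketch of the construction described above, assuming a Gaussian belief over the linear predictor weights, a random-walk transition, and a working-response (EKF-style) linearization of the logistic likelihood; the paper's model is considerably richer.

```python
import numpy as np

class KalmanLogisticClassifier:
    """Online binary classification with a Gaussian belief over linear
    predictor weights and a random-walk state transition, so the weights
    can track non-stationary data."""

    def __init__(self, dim, q=1e-3):
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)
        self.q = q                                # transition noise: forgetting rate

    def predict_proba(self, x):
        return 1 / (1 + np.exp(-self.mean @ x))

    def update(self, x, y):                       # y in {0, 1}
        self.cov += self.q * np.eye(len(x))       # predict step: inflate uncertainty
        p = self.predict_proba(x)
        r = max(p * (1 - p), 1e-4)                # linearized observation precision
        s = x @ self.cov @ x + 1 / r              # innovation variance
        k = self.cov @ x / s                      # Kalman gain
        self.mean = self.mean + k * (y - p) / r   # correct step (working residual)
        self.cov = self.cov - np.outer(k, x @ self.cov)
```

The transition noise q acts as a forgetting rate: larger values let the weights track faster drift at the cost of noisier estimates.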
- Active-Learning-Driven Surrogate Modeling for Efficient Simulation of Parametric Nonlinear Systems [0.0]
In the absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
arXiv Detail & Related papers (2023-06-09T18:01:14Z)
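The loop below sketches that uncertainty-driven snapshot selection. A Gaussian process stands in for the paper's kernel-based shallow network, and the one-dimensional toy solver expensive_sim is an assumption for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_sim(p):                              # stand-in for a costly solver
    return np.sin(3 * p) * np.exp(-p ** 2)

candidates = np.linspace(-2, 2, 400)               # admissible parameter values
train_p = list(rng.uniform(-2, 2, 3))              # initial parameter snapshots
train_y = [expensive_sim(p) for p in train_p]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
for _ in range(10):
    gp.fit(np.array(train_p)[:, None], train_y)
    _, std = gp.predict(candidates[:, None], return_std=True)
    p_next = candidates[np.argmax(std)]            # non-intrusive acquisition:
    train_p.append(p_next)                         # query where the surrogate is
    train_y.append(expensive_sim(p_next))          # least certain
```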
- Variable Importance Matching for Causal Inference [73.25504313552516]
We describe a general framework called Model-to-Match for interpretable, matching-based causal inference.
Model-to-Match uses variable importance measurements to construct a distance metric.
We operationalize the Model-to-Match framework with LASSO.
arXiv Detail & Related papers (2023-02-23T00:43:03Z)
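A stripped-down sketch of that recipe, under simplifying assumptions (LASSO importances from an outcome regression, Euclidean matching, a crude ATT contrast at the end); the function name model_to_match and all details are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def model_to_match(X, Y, treated):
    """Match each treated unit to its nearest control under a distance metric
    stretched by LASSO variable importances (absolute coefficients from an
    outcome regression on standardized covariates). `treated` is boolean."""
    Xs = StandardScaler().fit_transform(X)
    importance = np.abs(LassoCV(cv=5).fit(Xs, Y).coef_)
    Xw = Xs * np.sqrt(importance)                    # importance-weighted coordinates
    t, c = np.where(treated)[0], np.where(~treated)[0]
    d2 = ((Xw[t, None, :] - Xw[None, c, :]) ** 2).sum(-1)
    matches = c[np.argmin(d2, axis=1)]               # nearest control per treated unit
    return np.mean(Y[t] - Y[matches])                # crude matched ATT estimate
```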
- Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flow are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the weights of a neural network surrogate model.
arXiv Detail & Related papers (2021-08-28T14:31:45Z)
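In pseudocode terms, the alternating scheme is: fit the flow against a cheap surrogate likelihood, and occasionally spend true-model evaluations near the current posterior to refit the surrogate. The PyTorch sketch below makes drastic simplifying assumptions: a single affine layer as the "flow", a toy forward model, and fixed observation noise.

```python
import math
import torch

torch.manual_seed(0)

def true_model(z):                       # stand-in for an expensive forward model
    return torch.sin(z[:, :1]) + 0.5 * z[:, 1:] ** 2

y_obs = torch.tensor([[0.7]])
noise_sd = 0.1

surrogate = torch.nn.Sequential(         # cheap neural stand-in for true_model
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

# Simplest normalizing flow: one affine layer on N(0, I), i.e. q = N(mu, diag(sd)^2).
mu = torch.zeros(1, 2, requires_grad=True)
log_sd = torch.zeros(1, 2, requires_grad=True)

opt_flow = torch.optim.Adam([mu, log_sd], lr=0.05)
opt_surr = torch.optim.Adam(surrogate.parameters(), lr=0.01)
mem_z, mem_f = [], []                    # cache of true-model evaluations

for step in range(500):
    # (a) Update flow parameters on the ELBO, with the surrogate in the likelihood.
    eps = torch.randn(64, 2)
    z = mu + log_sd.exp() * eps                              # reparameterization
    log_q = (-0.5 * eps ** 2 - log_sd - 0.5 * math.log(2 * math.pi)).sum(1)
    log_prior = (-0.5 * z ** 2).sum(1)                       # standard normal prior
    log_lik = (-0.5 * ((y_obs - surrogate(z)) / noise_sd) ** 2).sum(1)
    loss = -(log_lik + log_prior - log_q).mean()             # negative ELBO
    opt_flow.zero_grad(); loss.backward(); opt_flow.step()

    # (b) Occasionally evaluate the true model near the current posterior
    #     and refit the surrogate on everything cached so far.
    if step % 50 == 0:
        with torch.no_grad():
            z_new = mu + log_sd.exp() * torch.randn(8, 2)
            mem_z.append(z_new); mem_f.append(true_model(z_new))
        Z, F = torch.cat(mem_z), torch.cat(mem_f)
        for _ in range(100):
            surr_loss = ((surrogate(Z) - F) ** 2).mean()
            opt_surr.zero_grad(); surr_loss.backward(); opt_surr.step()
```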
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
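The idea is easiest to see where the marginal likelihood has a closed form. The sketch below scores polynomial models with the Bayesian linear-regression evidence (Bishop, PRML, eq. 3.86), selecting capacity without a validation split; the fixed precisions alpha and beta are simplifying assumptions, and scaling this quantity to deep networks is the paper's contribution.

```python
import numpy as np

def log_evidence(X, y, alpha=1.0, beta=25.0):
    """Closed-form log marginal likelihood for Bayesian linear regression
    with prior w ~ N(0, I/alpha) and noise precision beta."""
    n, d = X.shape
    A = alpha * np.eye(d) + beta * X.T @ X            # posterior precision
    m = beta * np.linalg.solve(A, X.T @ y)            # posterior mean
    fit = 0.5 * beta * np.sum((y - X @ m) ** 2) + 0.5 * alpha * (m @ m)
    return (0.5 * (d * np.log(alpha) + n * np.log(beta)) - fit
            - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = np.sin(2 * x) + 0.2 * rng.standard_normal(100)

for degree in (1, 3, 5, 9):                           # evidence-based model selection
    X = np.vander(x, degree + 1, increasing=True)
    print(degree, round(log_evidence(X, y), 1))
```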
- Assumption-lean inference for generalised linear model parameters [0.0]
We propose nonparametric definitions of main effect estimands and effect modification estimands.
These reduce to standard main effect and effect modification parameters in generalised linear models when these models are correctly specified.
We achieve assumption-lean inference for these estimands.
arXiv Detail & Related papers (2020-06-15T13:49:48Z)
- Nonparametric inverse probability weighted estimators based on the highly adaptive lasso [0.966840768820136]
Nonparametric inverse probability weighted estimators are known to be inefficient and suffer from the curse of dimensionality.
We propose a class of nonparametric inverse probability weighted estimators in which the weighting mechanism is estimated via undersmoothing of the highly adaptive lasso.
Our developments have broad implications for the construction of efficient inverse probability weighted estimators in large statistical models and a variety of problem settings.
arXiv Detail & Related papers (2020-05-22T17:49:46Z)
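For orientation, a plain nonparametric IPW estimator of a counterfactual mean looks like the sketch below. Gradient boosting stands in for the highly adaptive lasso, and the undersmoothing step that the paper relies on for efficiency is not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def ipw_mean(X, A, Y):
    """IPW estimate of E[Y(1)] with a data-adaptive propensity model.
    The paper estimates the weights with an undersmoothed highly adaptive
    lasso; boosting is used here only as an illustrative stand-in."""
    ps = GradientBoostingClassifier().fit(X, A).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)                     # truncate extreme weights
    return np.mean(A * Y / ps)
```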
- SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
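The randomized-truncation trick at the heart of SUMO can be demonstrated on any convergent series: truncate at a random level and reweight each kept term by its survival probability, which keeps the estimator unbiased for the infinite sum. The toy target (log 2) is an assumption for illustration; SUMO applies the same trick to a telescoping sequence of importance-weighted likelihood bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def russian_roulette(term, p=0.3):
    """Single-sample unbiased estimate of sum_{k>=1} term(k): truncate at
    K ~ Geometric(p) and reweight term k by 1 / P(K >= k) = (1 - p)^(1 - k)."""
    K = rng.geometric(p)                             # support 1, 2, 3, ...
    k = np.arange(1, K + 1)
    return np.sum(term(k) / (1 - p) ** (k - 1))

term = lambda k: (-1.0) ** (k + 1) / k               # series sums to log 2
estimates = [russian_roulette(term) for _ in range(200_000)]
print(np.mean(estimates), np.log(2))                 # close, despite finite sums
```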
- Nonparametric Estimation in the Dynamic Bradley-Terry Model [69.70604365861121]
We develop a novel estimator that relies on kernel smoothing to pre-process the pairwise comparisons over time.
We derive time-varying oracle bounds for both the estimation error and the excess risk in the model-agnostic setting.
arXiv Detail & Related papers (2020-02-28T21:52:49Z)
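A compact sketch of that two-stage recipe, assuming a Gaussian kernel in time and simple gradient ascent on the weighted likelihood; argument names are hypothetical.

```python
import numpy as np

def dynamic_bt(times, winners, losers, n_items, t0, bw=0.1, iters=300, lr=0.5):
    """Time-varying Bradley-Terry scores at time t0: kernel-smooth the
    comparisons over time, then fit item scores theta by weighted maximum
    likelihood, where P(i beats j) = sigmoid(theta_i - theta_j)."""
    w = np.exp(-0.5 * ((times - t0) / bw) ** 2)      # Gaussian kernel weights
    theta = np.zeros(n_items)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(theta[winners] - theta[losers])))
        grad = np.zeros(n_items)
        np.add.at(grad, winners, w * (1 - p))        # weighted log-likelihood gradient
        np.add.at(grad, losers, -w * (1 - p))
        theta += lr * grad / w.sum()
        theta -= theta.mean()                        # fix the location (identifiability)
    return theta
```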