Automatic Debiased Machine Learning for Dynamic Treatment Effects and
General Nested Functionals
- URL: http://arxiv.org/abs/2203.13887v5
- Date: Tue, 20 Jun 2023 22:00:45 GMT
- Title: Automatic Debiased Machine Learning for Dynamic Treatment Effects and
General Nested Functionals
- Authors: Victor Chernozhukov, Whitney Newey, Rahul Singh, Vasilis Syrgkanis
- Abstract summary: We extend the idea of automated debiased machine learning to the dynamic treatment regime and more generally to nested functionals.
We show that the multiply robust formula for the dynamic treatment regime with discrete treatments can be re-stated in terms of a Riesz representer characterization of nested mean regressions.
- Score: 23.31865419578237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We extend the idea of automated debiased machine learning to the dynamic
treatment regime and more generally to nested functionals. We show that the
multiply robust formula for the dynamic treatment regime with discrete
treatments can be re-stated in terms of a recursive Riesz representer
characterization of nested mean regressions. We then apply a recursive Riesz
representer estimation learning algorithm that estimates de-biasing corrections
without needing to characterize what the correction terms look like, such as,
for instance, products of inverse probability weighting terms, as is done in
prior work on doubly robust estimation in the dynamic regime. Our approach
defines a sequence of loss minimization problems, whose minimizers are the
multipliers of the de-biasing correction, hence circumventing the need for
solving auxiliary propensity models and directly optimizing for the mean
squared error of the target de-biasing correction. We provide further
applications of our approach to estimation of dynamic discrete choice models
and estimation of long-term effects with surrogates.
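The loss-minimization idea described in the abstract — learning the Riesz representer (the multiplier of the de-biasing correction) by directly minimizing a quadratic loss, instead of fitting and inverting propensity models — can be sketched in the simplest single-period case. The snippet below is a minimal illustration under assumptions, not the authors' implementation: the simulated data, the linear function class `phi`, and all variable names are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Simulated single-period data: confounder X, binary treatment D, outcome Y.
X = rng.uniform(-1, 1, n)
p = 1 / (1 + np.exp(-X))               # true propensity score P(D=1 | X)
D = rng.binomial(1, p)
Y = 2.0 * D + X + rng.normal(0, 1, n)  # true average treatment effect = 2

# Linear function class a(D, X) = theta . phi(D, X) for the representer.
def phi(d, x):
    return np.column_stack(
        [d, d * x, d * x**2, 1 - d, (1 - d) * x, (1 - d) * x**2]
    )

# Automatic Riesz loss for the ATE functional m(W; a) = a(1, X) - a(0, X):
#   L(a) = E[a(D, X)^2] - 2 E[a(1, X) - a(0, X)].
# Over a linear class this is quadratic, so the minimizer solves M theta = v,
# with no propensity model fit or inverted anywhere.
Phi = phi(D, X)
M = Phi.T @ Phi / n
v = (phi(np.ones(n), X) - phi(np.zeros(n), X)).mean(axis=0)
theta = np.linalg.solve(M, v)
a_hat = Phi @ theta                    # estimated Riesz representer values

# Outcome regression g(D, X) by OLS on (1, D, X) (correctly specified here).
Z = np.column_stack([np.ones(n), D, X])
beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
g = lambda d, x: beta[0] + beta[1] * d + beta[2] * x

# Debiased estimate: plug-in term plus Riesz-weighted residual correction.
ate = np.mean(g(1, X) - g(0, X) + a_hat * (Y - g(D, X)))
print(round(ate, 2))
```

In the dynamic regime the paper applies this construction recursively, one such loss minimization per treatment period, with each stage's representer multiplying the residual of the corresponding nested mean regression.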
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Automatic debiasing of neural networks via moment-constrained learning [0.0]
Naively learning the regression function and taking a sample mean of the target functional results in biased estimators.
We propose moment-constrained learning as a new RR learning approach that addresses some shortcomings in automatic debiasing.
arXiv Detail & Related papers (2024-09-29T20:56:54Z)
- Adaptive LASSO estimation for functional hidden dynamic geostatistical model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HDGM).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator penalty function, wherein the weights are obtained from the unpenalised f-HDGM maximum-likelihood estimators.
arXiv Detail & Related papers (2022-08-10T19:17:45Z)
- Extension of Dynamic Mode Decomposition for dynamic systems with incomplete information based on t-model of optimal prediction [69.81996031777717]
Dynamic Mode Decomposition has proved to be a very efficient technique for studying dynamic data.
The application of this approach becomes problematic if the available data is incomplete, because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem, and solve it with a gradient-based optimization method.
arXiv Detail & Related papers (2022-02-23T11:23:59Z)
- End-to-end reconstruction meets data-driven regularization for inverse problems [2.800608984818919]
We propose an unsupervised approach for learning end-to-end reconstruction operators for ill-posed inverse problems.
The proposed method combines the classical variational framework with iterative unrolling.
We demonstrate with the example of X-ray computed tomography (CT) that our approach outperforms state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2021-06-07T12:05:06Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Logistic Q-Learning [87.00813469969167]
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
The main feature of our algorithm is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error.
arXiv Detail & Related papers (2020-10-21T17:14:31Z)
- Sparse recovery by reduced variance stochastic approximation [5.672132510411465]
We discuss the application of iterative quadratic optimization routines to the problem of sparse signal recovery from noisy observations.
We show how one can straightforwardly enhance reliability of the corresponding solution by using Median-of-Means like techniques.
arXiv Detail & Related papers (2020-06-11T12:31:20Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but its role in the latter's success is still unclear.
We show that multiplicative noise commonly arises in the parameters due to variance in the stochastic updates.
A detailed analysis of key factors, including step size and data, is conducted; state-of-the-art neural network models all exhibit similar results.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Multivariate Functional Regression via Nested Reduced-Rank Regularization [2.730097437607271]
We propose a nested reduced-rank regression (NRRR) approach for fitting regression models with multivariate functional responses and predictors.
We show through non-asymptotic analysis that NRRR can achieve at least a comparable error rate to that of the reduced-rank regression.
We apply NRRR in an electricity demand problem, to relate the trajectories of the daily electricity consumption with those of the daily temperatures.
arXiv Detail & Related papers (2020-03-10T14:58:54Z)
- Double/Debiased Machine Learning for Dynamic Treatment Effects via g-Estimation [25.610534178373065]
We consider the estimation of treatment effects in settings where multiple treatments are assigned over time.
We propose an extension of the double/debiased machine learning framework to estimate the dynamic effects of treatments.
arXiv Detail & Related papers (2020-02-17T22:32:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.