Nonparametric estimation of a covariate-adjusted counterfactual
treatment regimen response curve
- URL: http://arxiv.org/abs/2309.16099v1
- Date: Thu, 28 Sep 2023 01:46:24 GMT
- Title: Nonparametric estimation of a covariate-adjusted counterfactual
treatment regimen response curve
- Authors: Ashkan Ertefaie, Luke Duttweiler, Brent A. Johnson and Mark J. van der
Laan
- Abstract summary: Flexible estimation of the mean outcome under a treatment regimen is a key step toward personalized medicine.
We propose an inverse probability weighted nonparametrically efficient estimator of the smoothed regimen-response curve function.
Some finite-sample properties are explored with simulations.
- Score: 2.7446241148152253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Flexible estimation of the mean outcome under a treatment regimen (i.e.,
the value function) is a key step toward personalized medicine. We define our
target parameter as a conditional value function given a set of baseline
covariates, which we refer to as a stratum-based value function. We focus on a
semiparametric class of decision rules and propose a sieve-based nonparametric
covariate-adjusted regimen-response curve estimator within that class. Our work
contributes in several ways. First, we propose an inverse probability weighted,
nonparametrically efficient estimator of the smoothed regimen-response curve
function. We show that asymptotic linearity is achieved when the nuisance
functions are sufficiently undersmoothed, and we propose both asymptotic and
finite-sample criteria for undersmoothing. Second, using Gaussian process theory, we
propose simultaneous confidence intervals for the smoothed regimen-response
curve function. Third, we establish consistency and a convergence rate for the
optimizer of the regimen-response curve estimator; this enables us to estimate
an optimal semiparametric rule. The latter is important because the optimizer
corresponds to the optimal dynamic treatment regimen. Some finite-sample
properties are explored with simulations.
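The inverse probability weighted value estimator at the core of the abstract can be illustrated in a simplified form. The sketch below uses a known propensity score and a plain (non-smoothed, non-efficient) IPW estimator of the value of one fixed binary rule; the simulated data, variable names, and logistic propensity model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ipw_value(y, a, d, prop):
    """Plain IPW estimate of the value E[Y(d)] of a binary rule d:
    mean( 1{A = d(X)} * Y / P(A = observed treatment | X) )."""
    denom = np.where(a == 1, prop, 1 - prop)  # probability of the observed arm
    w = (a == d).astype(float) / denom
    return float(np.mean(w * y))

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                       # baseline covariate
prop = 1 / (1 + np.exp(-0.5 * x))            # known propensity P(A=1|X)
a = rng.binomial(1, prop)                    # observed treatment
y = x * a + rng.normal(scale=0.5, size=n)    # treatment helps when x > 0
d = (x > 0).astype(int)                      # candidate rule: treat iff x > 0

v = ipw_value(y, a, d, prop)
```

Under this data-generating process the true value of the rule is E[X 1{X>0}] = 1/sqrt(2*pi), so the estimate should land near 0.40; the paper's estimator additionally smooths over a regimen index and undersmooths estimated nuisance functions, which this sketch omits.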
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing a quadratic approximation of the objective subject to a trust-region constraint.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
arXiv Detail & Related papers (2024-09-24T04:39:47Z) - Likelihood Ratio Confidence Sets for Sequential Decision Making [51.66638486226482]
We revisit the likelihood-based inference principle and propose to use likelihood ratios to construct valid confidence sequences.
Our method is especially suitable for problems with well-specified likelihoods.
We show how to provably choose the best sequence of estimators and shed light on connections to online convex optimization.
arXiv Detail & Related papers (2023-11-08T00:10:21Z) - Inference on Optimal Dynamic Policies via Softmax Approximation [27.396891119011215]
We show that a simple softmax approximation to the optimal treatment regime can achieve valid inference on the truly optimal regime.
Our work combines techniques from semi-parametric inference and $g$-estimation, together with an appropriate array central limit theorem.
arXiv Detail & Related papers (2023-03-08T07:42:47Z) - Kernel-based off-policy estimation without overlap: Instance optimality
beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z) - Off-policy estimation of linear functionals: Non-asymptotic theory for
semi-parametric efficiency [59.48096489854697]
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures.
We prove non-asymptotic upper bounds on the mean-squared error of such procedures.
We establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds.
arXiv Detail & Related papers (2022-09-26T23:50:55Z) - Generalized Kernel Ridge Regression for Causal Inference with
Missing-at-Random Sample Selection [3.398662563413433]
I propose kernel ridge regression estimators for nonparametric dose response curves and semiparametric treatment effects.
For the discrete treatment case, I prove root-n consistency, Gaussian approximation, and semiparametric efficiency.
arXiv Detail & Related papers (2021-11-09T17:10:49Z) - Optimal prediction for kernel-based semi-functional linear regression [5.827901300943599]
We establish minimax optimal rates of convergence for prediction in a semi-functional linear model.
Our results reveal that the smoother functional component can be learned with the minimax rate as if the nonparametric component were known.
arXiv Detail & Related papers (2021-10-29T04:55:44Z) - Implicit Rate-Constrained Optimization of Non-decomposable Objectives [37.43791617018009]
We consider a family of constrained optimization problems arising in machine learning.
Our key idea is to formulate a rate-constrained optimization that expresses the threshold parameter as a function of the model parameters.
We show how the resulting optimization problem can be solved using standard gradient based methods.
arXiv Detail & Related papers (2021-07-23T00:04:39Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - Support recovery and sup-norm convergence rates for sparse pivotal
estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
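One of the related entries above ("Inference on Optimal Dynamic Policies via Softmax Approximation") smooths the non-differentiable argmax that defines the optimal regime. A minimal numerical sketch of that smoothing idea, with an inverse-temperature parameter `beta` (the simulated outcome estimates and function names here are illustrative, not taken from the paper):

```python
import numpy as np

def softmax_value(q, beta):
    """Smooth approximation of mean_i max_a q[i, a]:
    replace the hard max with a softmax-weighted average,
    sum_a q[i, a] * softmax(beta * q[i, :])_a.
    As beta -> infinity this recovers the value of the hard argmax rule."""
    w = np.exp(beta * (q - q.max(axis=1, keepdims=True)))  # stable softmax
    w /= w.sum(axis=1, keepdims=True)
    return float((w * q).sum(axis=1).mean())

rng = np.random.default_rng(1)
q = rng.normal(size=(1000, 2))                # per-subject outcome estimates, two treatments
hard = float(np.maximum(q[:, 0], q[:, 1]).mean())  # value of the exact optimal rule
soft = [softmax_value(q, b) for b in (1.0, 10.0, 100.0)]
```

The softmax-weighted value is always below the hard maximum and increases toward it as `beta` grows, which is what makes the temperature a tunable bias-smoothness trade-off for inference.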
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.