Estimation of Local Average Treatment Effect by Data Combination
- URL: http://arxiv.org/abs/2109.05175v1
- Date: Sat, 11 Sep 2021 03:51:48 GMT
- Title: Estimation of Local Average Treatment Effect by Data Combination
- Authors: Kazuhiko Shinoda and Takahiro Hoshino
- Abstract summary: It is important to estimate the local average treatment effect (LATE) when compliance with a treatment assignment is incomplete.
Previously proposed methods for LATE estimation required all relevant variables to be jointly observed in a single dataset.
We propose a weighted least squares estimator that enables simpler model selection by avoiding the minimax objective formulation.
- Score: 3.655021726150368
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is important to estimate the local average treatment effect (LATE) when
compliance with a treatment assignment is incomplete. The previously proposed
methods for LATE estimation required all relevant variables to be jointly
observed in a single dataset; however, it is sometimes difficult or even
impossible to collect such data in many real-world problems for technical or
privacy reasons. We consider a novel problem setting in which LATE, as a
function of covariates, is nonparametrically identified from the combination of
separately observed datasets. For estimation, we show that the direct least
squares method, which was originally developed for estimating the average
treatment effect under complete compliance, is applicable to our setting.
However, model selection and hyperparameter tuning for the direct least squares
estimator can be unstable in practice since it is defined as a solution to the
minimax problem. We then propose a weighted least squares estimator that
enables simpler model selection by avoiding the minimax objective formulation.
Unlike the inverse probability weighted (IPW) estimator, the proposed estimator
directly uses the pre-estimated weight without inversion, avoiding the problems
caused by the IPW methods. We demonstrate the effectiveness of our method
through experiments using synthetic and real-world datasets.
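For background, the standard single-dataset identification of LATE as a function of covariates is the conditional Wald formula, shown below under the usual instrument validity and monotonicity assumptions. This is textbook background rather than the paper's result; the paper's contribution is recovering this quantity when the conditional expectations involved are observed only in separately collected datasets.

```latex
\mathrm{LATE}(x)
  = \frac{\mathbb{E}[Y \mid Z=1, X=x] \;-\; \mathbb{E}[Y \mid Z=0, X=x]}
         {\mathbb{E}[D \mid Z=1, X=x] \;-\; \mathbb{E}[D \mid Z=0, X=x]},
```

where $Z$ is the treatment assignment (instrument), $D$ the treatment actually received, $Y$ the outcome, and $X$ the covariates.

The abstract's contrast between inverse probability weighting and direct use of a pre-estimated weight can be illustrated with a generic weighted least squares routine. The sketch below is schematic only: the data, weight function, and model are hypothetical, and it is not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))
# Hypothetical pre-estimated probability (e.g., an assignment or compliance
# probability); purely illustrative.
p_hat = 1.0 / (1.0 + np.exp(-X @ np.array([0.8, -0.5])))
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

def weighted_least_squares(X, y, w):
    """Solve argmin_b sum_i w_i * (y_i - x_i @ b)**2 via the normal equations."""
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

# IPW-style weighting: dividing by p_hat inflates the weights when p_hat is
# close to zero, which is the instability the abstract refers to.
beta_ipw = weighted_least_squares(X, y, 1.0 / p_hat)

# Direct weighting in the spirit of the abstract: the pre-estimated weight is
# used without inversion, so the weights stay bounded.
beta_direct = weighted_least_squares(X, y, p_hat)
```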
Related papers
- Assumption-Lean Post-Integrated Inference with Negative Control Outcomes [0.0]
We introduce a robust post-integrated inference (PII) method that adjusts for latent heterogeneity using negative control outcomes.
Our method extends to projected direct effect estimands, accounting for hidden mediators, confounders, and moderators.
The proposed doubly robust estimators are consistent and efficient under minimal assumptions and potential misspecification.
arXiv Detail & Related papers (2024-10-07T12:52:38Z)
- Geometry-Aware Instrumental Variable Regression [56.16884466478886]
We propose a transport-based IV estimator that takes into account the geometry of the data manifold through data-derivative information.
We provide a simple plug-and-play implementation of our method that performs on par with related estimators in standard settings.
arXiv Detail & Related papers (2024-05-19T17:49:33Z)
- Adaptive-TMLE for the Average Treatment Effect based on Randomized Controlled Trial Augmented with Real-World Data [0.0]
We consider the problem of estimating the average treatment effect (ATE) when both randomized control trial (RCT) data and real-world data (RWD) are available.
We introduce an adaptive targeted minimum loss-based estimation framework to estimate the ATE in this setting.
arXiv Detail & Related papers (2024-05-12T07:10:26Z)
- Robust Estimation of the Tail Index of a Single Parameter Pareto Distribution from Grouped Data [0.0]
This paper introduces a novel robust estimation technique, the Method of Truncated Moments (MTuM).
Inferential justification of MTuM is established by employing the central limit theorem and validated through a comprehensive simulation study.
arXiv Detail & Related papers (2024-01-26T01:42:06Z)
- Statistical Limits of Adaptive Linear Models: Low-Dimensional Estimation and Inference [5.924780594614676]
We show that the error of estimating a single coordinate can be enlarged by a multiple of $\sqrt{d}$ when data are allowed to be arbitrarily adaptive.
We propose a novel estimator for single coordinate inference via solving a Two-stage Adaptive Linear Estimating equation (TALE).
arXiv Detail & Related papers (2023-10-01T00:45:09Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum-variance unbiased estimator (MVUE) in linear models; a textbook statement of this result is sketched after this list.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for bias-constrained estimation (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
- Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids the burdensome nuisance-estimation step that standard DML requires in this setting.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)
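For context on the Gauss-Markov statement referenced in the "Learning to Estimate Without Bias" entry above, the textbook form of the claim (standard background, not taken from that paper) is: in the linear model $y = X\beta + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$ and known noise covariance $\Sigma$, the weighted least squares estimator

```latex
\hat{\beta}_{\mathrm{WLS}} = \bigl(X^{\top} \Sigma^{-1} X\bigr)^{-1} X^{\top} \Sigma^{-1} y
```

has the smallest variance among all linear unbiased estimators of $\beta$.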
This list is automatically generated from the titles and abstracts of the papers on this site.