Forecasting Algorithms for Causal Inference with Panel Data
- URL: http://arxiv.org/abs/2208.03489v3
- Date: Tue, 16 Apr 2024 19:08:01 GMT
- Title: Forecasting Algorithms for Causal Inference with Panel Data
- Authors: Jacob Goldin, Julian Nyarko, Justin Young
- Abstract summary: We adapt a deep neural architecture for time series forecasting (the N-BEATS algorithm) to more accurately impute the counterfactual evolution of a treated unit had treatment not occurred.
Across a range of settings, the resulting estimator (``SyNBEATS'') significantly outperforms commonly employed methods.
An implementation of this estimator is available for public use.
- Score: 1.4628956155379451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conducting causal inference with panel data is a core challenge in social science research. We adapt a deep neural architecture for time series forecasting (the N-BEATS algorithm) to more accurately impute the counterfactual evolution of a treated unit had treatment not occurred. Across a range of settings, the resulting estimator (``SyNBEATS'') significantly outperforms commonly employed methods (synthetic controls, two-way fixed effects), and attains comparable or more accurate performance compared to recently proposed methods (synthetic difference-in-differences, matrix completion). An implementation of this estimator is available for public use. Our results highlight how advances in the forecasting literature can be harnessed to improve causal inference in panel data settings.
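The core idea in the abstract, forecasting a treated unit's counterfactual trajectory from its pre-treatment outcomes and taking the observed-minus-imputed difference as the treatment effect, can be sketched as follows. The paper uses the N-BEATS neural architecture; the linear-trend forecaster, function names, and toy data here are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def impute_counterfactual(y_pre, n_post):
    """Forecast the treated unit's post-treatment outcomes from its
    pre-treatment trajectory. SyNBEATS fits the N-BEATS architecture;
    a linear trend serves as a simple stand-in forecaster here."""
    t = np.arange(len(y_pre))
    slope, intercept = np.polyfit(t, y_pre, 1)  # coefficients: [slope, intercept]
    t_post = np.arange(len(y_pre), len(y_pre) + n_post)
    return intercept + slope * t_post

def estimate_att(y_obs_post, y_cf_post):
    # Average effect on the treated: observed outcomes minus imputed counterfactuals.
    return float(np.mean(np.asarray(y_obs_post) - np.asarray(y_cf_post)))

# Toy example: pre-treatment outcomes follow a linear trend; treatment adds +5.
y_pre = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_cf = impute_counterfactual(y_pre, n_post=3)  # forecast: [6., 7., 8.]
y_obs = y_cf + 5.0                             # observed post-treatment outcomes
att = estimate_att(y_obs, y_cf)                # recovers the effect: 5.0
```

The same logic applies with any forecaster in place of the linear trend; the estimator's accuracy hinges on how well the model extrapolates the untreated trajectory.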
Related papers
- A Meta-learner for Heterogeneous Effects in Difference-in-Differences [17.361857058902494]
We propose a doubly robust meta-learner for the estimation of the Conditional Average Treatment Effect on the Treated (CATT)
Our framework allows for the flexible estimation of the CATT, when conditioning on any subset of variables of interest using generic machine learning.
arXiv Detail & Related papers (2025-02-07T07:04:37Z) - Causal Rule Forest: Toward Interpretable and Precise Treatment Effect Estimation [0.0]
Causal Rule Forest (CRF) is a novel approach to learning hidden patterns from data and transforming the patterns into interpretable multi-level Boolean rules.
By training the other interpretable causal inference models with data representation learned by CRF, we can reduce the predictive errors of these models in estimating Heterogeneous Treatment Effects (HTE) and Conditional Average Treatment Effects (CATE)
Our experiments underscore the potential of CRF to advance personalized interventions and policies.
arXiv Detail & Related papers (2024-08-27T13:32:31Z) - Efficient adjustment for complex covariates: Gaining efficiency with DOPE [56.537164957672715]
We propose a framework that accommodates adjustment for any subset of information expressed by the covariates.
Based on our theoretical results, we propose the Debiased Outcome-adapted Propensity Estimator (DOPE) for efficient estimation of the average treatment effect (ATE)
Our results show that the DOPE provides an efficient and robust methodology for ATE estimation in various observational settings.
arXiv Detail & Related papers (2024-02-20T13:02:51Z) - Targeted Machine Learning for Average Causal Effect Estimation Using the Front-Door Functional [3.0232957374216953]
Evaluating the average causal effect (ACE) of a treatment on an outcome often involves overcoming the challenges posed by confounding factors in observational studies.
Here, we introduce novel estimation strategies for the front-door criterion based on the targeted minimum loss-based estimation theory.
We demonstrate the applicability of these estimators to analyze the effect of early stage academic performance on future yearly income.
arXiv Detail & Related papers (2023-12-15T22:04:53Z) - Counterfactual Data Augmentation with Contrastive Learning [27.28511396131235]
We introduce a model-agnostic data augmentation method that imputes the counterfactual outcomes for a selected subset of individuals.
We use contrastive learning to learn a representation space and a similarity measure such that, in the learned space, individuals identified as close by the similarity measure have similar potential outcomes.
This property ensures reliable imputation of counterfactual outcomes for the individuals with close neighbors from the alternative treatment group.
arXiv Detail & Related papers (2023-11-07T00:36:51Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - Forecast reconciliation for vaccine supply chain optimization [61.13962963550403]
Vaccine supply chain optimization can benefit from hierarchical time series forecasting.
Forecasts at different hierarchy levels become incoherent when a higher-level forecast does not match the sum of the corresponding lower-level forecasts.
We tackle the vaccine sale forecasting problem by modeling sales data from GSK between 2010 and 2021 as a hierarchical time series.
arXiv Detail & Related papers (2023-05-02T14:34:34Z) - Doing Great at Estimating CATE? On the Neglected Assumptions in Benchmark Comparisons of Treatment Effect Estimators [91.3755431537592]
We show that even in arguably the simplest setting, estimation under ignorability assumptions can be misleading.
We consider two popular machine learning benchmark datasets for evaluation of heterogeneous treatment effect estimators.
We highlight that the inherent characteristics of the benchmark datasets favor some algorithms over others.
arXiv Detail & Related papers (2021-07-28T13:21:27Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z) - CASTLE: Regularization via Auxiliary Causal Graph Discovery [89.74800176981842]
We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables.
CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features.
arXiv Detail & Related papers (2020-09-28T09:49:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.