Double Machine Learning for Static Panel Models with Fixed Effects
- URL: http://arxiv.org/abs/2312.08174v5
- Date: Mon, 30 Dec 2024 19:05:38 GMT
- Title: Double Machine Learning for Static Panel Models with Fixed Effects
- Authors: Paul S. Clarke, Annalivia Polselli
- Abstract summary: We develop novel double machine learning (DML) procedures for panel data.
The new procedures are extensions of the well-known correlated random effects, within-group and first-difference estimators.
We use our procedures to re-estimate the impact of minimum wage on voting behaviour in the UK.
- Abstract: Recent advances in causal inference have seen the development of methods which make use of the predictive power of machine learning algorithms. In this paper, we develop novel double machine learning (DML) procedures for panel data in which these algorithms are used to approximate high-dimensional and nonlinear nuisance functions of the covariates. Our new procedures are extensions of the well-known correlated random effects, within-group and first-difference estimators from linear to nonlinear panel models, specifically, Robinson (1988)'s partially linear regression model with fixed effects and unspecified nonlinear confounding. Our simulation study assesses the performance of these procedures using different machine learning algorithms. We use our procedures to re-estimate the impact of minimum wage on voting behaviour in the UK. From our results, we recommend the use of first-differencing because it imposes the fewest constraints on the distribution of the fixed effects, and an ensemble learning strategy to ensure optimum estimator accuracy.
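To make the recommended first-difference procedure concrete, the following is a minimal sketch of first-differencing combined with DML-style cross-fitted partialling out in a partially linear panel model. The simulated data, the random-forest learners and the unit-level fold scheme are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: first-difference DML for a partially linear panel model
# y_it = theta*d_it + g(x_it) + alpha_i + u_it.  Data and learners illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
N, T, theta = 500, 4, 1.0

# toy panel with unit fixed effects and nonlinear confounding through x
alpha = rng.normal(size=N)                     # unit fixed effects
x = rng.normal(size=(N, T))                    # one covariate per unit-period
d = np.sin(x) + alpha[:, None] + rng.normal(scale=0.5, size=(N, T))
y = theta * d + np.cos(x) + alpha[:, None] + rng.normal(scale=0.5, size=(N, T))

# first-differencing removes alpha_i; the nuisance regressions then use the
# covariate pair (x_it, x_{i,t-1})
dy = (y[:, 1:] - y[:, :-1]).ravel()
dd = (d[:, 1:] - d[:, :-1]).ravel()
W = np.column_stack([x[:, 1:].ravel(), x[:, :-1].ravel()])
unit = np.repeat(np.arange(N), T - 1)          # unit id of each differenced row

# cross-fitted partialling out, with folds formed at the unit level
res_y, res_d = np.zeros_like(dy), np.zeros_like(dd)
for train_u, test_u in KFold(5, shuffle=True, random_state=0).split(np.arange(N)):
    tr, te = np.isin(unit, train_u), np.isin(unit, test_u)
    my = RandomForestRegressor(n_estimators=200, random_state=0).fit(W[tr], dy[tr])
    md = RandomForestRegressor(n_estimators=200, random_state=0).fit(W[tr], dd[tr])
    res_y[te] = dy[te] - my.predict(W[te])
    res_d[te] = dd[te] - md.predict(W[te])

# Robinson-style residual-on-residual regression for the causal parameter
theta_hat = (res_d @ res_y) / (res_d @ res_d)
print(f"first-difference DML estimate of theta: {theta_hat:.3f}")   # typically close to 1.0
```

First-differencing removes the unit fixed effects without restricting their distribution, which is the property behind the abstract's recommendation of that transformation.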
Related papers
- Neural Networks with Causal Graph Constraints: A New Approach for Treatment Effects Estimation [0.951494089949975]
We present a new model, NN-CGC, that considers additional information from the causal graph.
We show that our method is robust to imperfect causal graphs and that using partial causal information is preferable to ignoring it.
arXiv Detail & Related papers (2024-04-18T14:57:17Z) - Estimating Causal Effects with Double Machine Learning -- A Method Evaluation [5.904095466127043]
We review one of the most prominent methods - "double/debiased machine learning" (DML).
Our findings indicate that the application of a suitably flexible machine learning algorithm within DML improves the adjustment for various nonlinear confounding relationships.
When estimating the effects of air pollution on housing prices, we find that DML estimates are consistently larger than estimates of less flexible methods.
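As a small, hedged illustration of the point about flexibility, the snippet below runs the same cross-fitted partialling-out estimator twice, once with a linear nuisance model and once with boosted trees; the simulated data and learners are assumptions rather than the paper's setup.

```python
# Hedged sketch: why a flexible learner inside DML helps under nonlinear confounding.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n, theta = 4000, 0.5
x = rng.normal(size=(n, 3))
g = np.sin(x[:, 0]) + x[:, 1] ** 2            # nonlinear confounding function
d = g + rng.normal(size=n)                    # treatment depends on the confounder
y = theta * d + g + rng.normal(size=n)

def dml_theta(learner):
    """Cross-fitted partialling out: residual-on-residual regression."""
    ry = y - cross_val_predict(learner, x, y, cv=5)
    rd = d - cross_val_predict(learner, x, d, cv=5)
    return (rd @ ry) / (rd @ rd)

print("linear nuisance model :", round(dml_theta(LinearRegression()), 3))
print("boosted-tree nuisances:", round(dml_theta(GradientBoostingRegressor()), 3))
# the flexible learner recovers theta ~= 0.5; the linear one remains biased
```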
arXiv Detail & Related papers (2024-03-21T13:21:33Z) - Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning [53.97335841137496]
We propose an oracle-efficient algorithm, dubbed Pessimistic Nonlinear Least-Squares Value Iteration (PNLSVI), for offline RL with non-linear function approximation.
Our algorithm enjoys a regret bound that has a tight dependency on the function class complexity and achieves minimax optimal instance-dependent regret when specialized to linear function approximation.
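The construction below is a heavily simplified, hedged sketch of pessimistic least-squares value iteration with linear features (the paper's contribution concerns general nonlinear function classes); the toy offline data, the bonus form and all constants are illustrative.

```python
# Hedged sketch of pessimistic least-squares value iteration with linear features.
import numpy as np

rng = np.random.default_rng(2)
n, d_feat, n_actions, gamma, beta, lam = 2000, 8, 4, 0.95, 0.1, 1.0

# toy offline dataset: features for logged (s, a), rewards, next-state features per action
phi = rng.normal(size=(n, d_feat))                    # phi(s_i, a_i)
rewards = rng.normal(size=n)
phi_next = rng.normal(size=(n, n_actions, d_feat))    # phi(s'_i, a) for each action a

Lambda = phi.T @ phi + lam * np.eye(d_feat)           # regularized design matrix
Lambda_inv = np.linalg.inv(Lambda)
w = np.zeros(d_feat)

for _ in range(50):                                   # value-iteration sweeps
    # pessimistic next-state value: max over actions of (Q minus an uncertainty bonus)
    q_next = phi_next @ w
    bonus = beta * np.sqrt(np.einsum("nad,de,nae->na", phi_next, Lambda_inv, phi_next))
    v_next = np.max(q_next - bonus, axis=1)
    targets = rewards + gamma * v_next
    w = Lambda_inv @ (phi.T @ targets)                # ridge-regression backup

print("learned weight vector:", np.round(w, 3))
```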
arXiv Detail & Related papers (2023-10-02T17:42:01Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [65.57123249246358]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
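As a rough, hedged sketch of the kind of warm-up-then-expand training schedule described here, the snippet below warms up a classifier on a small group-balanced subset and then progressively adds the remaining data; the learner, subset sizes and schedule are assumptions, not the PDE algorithm's exact specification.

```python
# Hedged sketch of a warm-up-then-expand training schedule; details are assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 20))
groups = rng.integers(0, 4, size=5000)          # (class, spurious-attribute) group labels
y = (X[:, 0] + 0.1 * rng.normal(size=5000) > 0).astype(int)

# warm-up: a small subset with equal counts from every group
warm_idx = np.concatenate([rng.choice(np.flatnonzero(groups == g), 100, replace=False)
                           for g in range(4)])
clf = SGDClassifier(loss="log_loss", random_state=0)
clf.partial_fit(X[warm_idx], y[warm_idx], classes=[0, 1])

# expansion: progressively add the remaining data in batches
rest = np.setdiff1d(np.arange(len(y)), warm_idx)
rng.shuffle(rest)
for batch in np.array_split(rest, 10):
    clf.partial_fit(X[batch], y[batch])

print("train accuracy:", round(clf.score(X, y), 3))
```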
arXiv Detail & Related papers (2023-06-08T05:44:06Z) - Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
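For orientation, here is a hedged toy sketch of epsilon-greedy Q-learning with linear (one-hot) function approximation on a chain MDP; the environment and exploration scheme are illustrative and do not reproduce the paper's exploration variant.

```python
# Hedged sketch: epsilon-greedy Q-learning with linear features on a toy chain MDP.
import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions, gamma, alpha, eps = 6, 2, 0.9, 0.1, 0.2

def features(s, a):
    """One-hot state-action features, the simplest linear approximator."""
    phi = np.zeros(n_states * n_actions)
    phi[s * n_actions + a] = 1.0
    return phi

w = np.zeros(n_states * n_actions)
for episode in range(500):
    s = 0
    for _ in range(20):
        q = [w @ features(s, a) for a in range(n_actions)]
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))
        # action 1 moves right, action 0 moves left; reward only at the far end
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        td_target = r + gamma * max(w @ features(s_next, b) for b in range(n_actions))
        w += alpha * (td_target - w @ features(s, a)) * features(s, a)
        s = s_next

print("greedy action per state:",
      [int(np.argmax([w @ features(s, a) for a in range(n_actions)])) for s in range(n_states)])
```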
arXiv Detail & Related papers (2022-06-01T23:26:51Z) - Robustness Against Weak or Invalid Instruments: Exploring Nonlinear Treatment Models with Machine Learning [1.3022753212679383]
We discuss causal inference for observational studies with possibly invalid instrumental variables.
We propose a novel methodology called two-stage curvature identification (TSCI) by exploring the nonlinear treatment model with machine learning.
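The snippet below is only a generic, hedged sketch of the broad two-stage idea: fit a nonlinear treatment model with machine learning on one sample split and instrument with the fitted values on the other. It is not the TSCI procedure itself, and the data and learner are illustrative.

```python
# Hedged, generic sketch of an ML first-stage treatment model used as an instrument.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n, beta = 3000, 1.0
z = rng.normal(size=n)                                 # instrument
u = rng.normal(size=n)                                 # unobserved confounder
d = np.sin(2 * z) + z ** 2 + u + rng.normal(size=n)    # nonlinear treatment model
y = beta * d + u + rng.normal(size=n)

# first stage on one half, prediction and estimation on the other half
half = n // 2
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(z[:half].reshape(-1, 1), d[:half])
d_hat = rf.predict(z[half:].reshape(-1, 1))

# IV moment with the fitted treatment as the instrument
beta_hat = (d_hat @ y[half:]) / (d_hat @ d[half:])
print(f"IV estimate with ML first stage: {beta_hat:.3f}")   # close to 1.0
```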
arXiv Detail & Related papers (2022-03-24T02:19:24Z) - Nonlinear Least Squares for Large-Scale Machine Learning using Stochastic Jacobian Estimates [0.0]
We exploit the property that the number of model parameters typically exceeds the amount of data in one batch to compute search directions.
We develop two algorithms that estimate Jacobian matrices and perform well when compared to state-of-the-art methods.
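A hedged sketch of this idea follows: when parameters outnumber the samples in a batch, the damped Gauss-Newton step can be computed by solving a batch-sized linear system, using the identity (JᵀJ + λI)⁻¹Jᵀr = Jᵀ(JJᵀ + λI)⁻¹r. The model, batch size and damping are illustrative, not the paper's algorithms.

```python
# Hedged sketch of a mini-batch Gauss-Newton/Levenberg-Marquardt step solved
# in the (small) batch dimension because parameters outnumber batch samples.
import numpy as np

rng = np.random.default_rng(6)
n, p, batch, lam, steps = 2000, 512, 32, 1.0, 200

A = rng.normal(size=(5, p)) / np.sqrt(5)        # fixed random feature map
X = rng.normal(size=(n, 5))
w_true = rng.normal(size=p)
y = np.tanh(X @ A @ w_true) + 0.05 * rng.normal(size=n)

w = np.zeros(p)
for _ in range(steps):
    idx = rng.choice(n, size=batch, replace=False)
    phi = X[idx] @ A                            # (batch, p) features
    pred = np.tanh(phi @ w)
    r = pred - y[idx]                           # residual vector, length = batch
    J = (1.0 - pred ** 2)[:, None] * phi        # Jacobian of residuals, (batch, p)
    # since p >> batch, solve the batch-sized system (J J^T + lam I) s = r,
    # then the damped Gauss-Newton step is  dw = -J^T s
    s = np.linalg.solve(J @ J.T + lam * np.eye(batch), r)
    w -= J.T @ s

print("final mean squared residual:",
      round(float(np.mean((np.tanh(X @ A @ w) - y) ** 2)), 4))
```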
arXiv Detail & Related papers (2021-07-12T17:29:08Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Active Learning for Gaussian Process Considering Uncertainties with Application to Shape Control of Composite Fuselage [7.358477502214471]
We propose two new active learning algorithms for the Gaussian process with uncertainties.
We show that the proposed approach can incorporate the impact from uncertainties, and realize better prediction performance.
This approach has been applied to improving the predictive modeling for automatic shape control of composite fuselage.
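A minimal, hedged sketch of variance-based active learning with a Gaussian process follows; the acquisition rule (query the most uncertain candidate) and the toy response are assumptions and simpler than the uncertainty-aware criteria the paper proposes.

```python
# Hedged sketch: Gaussian-process active learning by maximum predictive variance.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

def f(x):
    """Unknown target response that we can only query point by point."""
    return np.sin(3 * x) + 0.3 * x

pool = np.linspace(0, 5, 200).reshape(-1, 1)       # candidate measurement sites
labelled = list(rng.choice(len(pool), size=3, replace=False))
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

for _ in range(10):                                 # active-learning iterations
    X_train = pool[labelled]
    y_train = f(X_train).ravel() + 0.05 * rng.normal(size=len(labelled))
    gp.fit(X_train, y_train)
    _, std = gp.predict(pool, return_std=True)
    std[labelled] = -np.inf                         # never re-pick a labelled site
    labelled.append(int(np.argmax(std)))            # query the most uncertain site

print("queried sites:", np.round(pool[labelled].ravel(), 2))
```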
arXiv Detail & Related papers (2020-04-23T02:04:53Z) - Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators of the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
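The following is a minimal, hedged sketch of a cross-fit doubly-robust (AIPW) estimator of the average causal effect; the simulated data, the random-forest nuisance learners and the five-fold scheme are illustrative choices, not the study's design.

```python
# Hedged sketch: cross-fit doubly-robust (AIPW) estimation of the average causal effect.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(8)
n, ate = 4000, 2.0
x = rng.normal(size=(n, 4))
p = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))        # true propensity score
a = rng.binomial(1, p)                                   # treatment indicator
y = ate * a + np.sin(x[:, 0]) + x[:, 1] ** 2 + rng.normal(size=n)

psi = np.zeros(n)                                        # AIPW influence values
for train, test in KFold(5, shuffle=True, random_state=0).split(x):
    ps = RandomForestClassifier(n_estimators=200, random_state=0).fit(
        x[train], a[train]).predict_proba(x[test])[:, 1]
    mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
        x[train][a[train] == 1], y[train][a[train] == 1]).predict(x[test])
    mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
        x[train][a[train] == 0], y[train][a[train] == 0]).predict(x[test])
    ps = np.clip(ps, 0.01, 0.99)                         # guard against extreme weights
    psi[test] = (mu1 - mu0
                 + a[test] * (y[test] - mu1) / ps
                 - (1 - a[test]) * (y[test] - mu0) / (1 - ps))

print(f"cross-fit AIPW estimate of the ACE: {psi.mean():.3f} "
      f"(se ~ {psi.std(ddof=1) / np.sqrt(n):.3f})")
```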
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.