Neural Networks with Causal Graph Constraints: A New Approach for Treatment Effects Estimation
- URL: http://arxiv.org/abs/2404.12238v1
- Date: Thu, 18 Apr 2024 14:57:17 GMT
- Title: Neural Networks with Causal Graph Constraints: A New Approach for Treatment Effects Estimation
- Authors: Roger Pros, Jordi Vitrià
- Abstract summary: We present a new model, NN-CGC, that considers additional information from the causal graph.
We show that our method is robust to imperfect causal graphs and that using partial causal information is preferable to ignoring it.
- Score: 0.951494089949975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, there has been a growing interest in using machine learning techniques for the estimation of treatment effects. Most of the best-performing methods rely on representation learning strategies that encourage shared behavior among potential outcomes to increase the precision of treatment effect estimates. In this paper we discuss and classify these models in terms of their algorithmic inductive biases and present a new model, NN-CGC, that considers additional information from the causal graph. NN-CGC tackles bias resulting from spurious variable interactions by implementing novel constraints on models, and it can be integrated with other representation learning methods. We test the effectiveness of our method using three different base models on common benchmarks. Our results indicate that our model constraints lead to significant improvements, achieving new state-of-the-art results in treatment effects estimation. We also show that our method is robust to imperfect causal graphs and that using partial causal information is preferable to ignoring it.
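As a rough illustration of the constraint the abstract describes (limiting which variable interactions the network can learn, based on groups of covariates derived from the causal graph), the sketch below encodes each causal-graph group independently before two TARNet-style potential-outcome heads. This is a minimal sketch, not the authors' exact NN-CGC architecture; the group indices, layer sizes, training loop, and toy data are assumptions.

```python
# Minimal sketch (not the authors' exact NN-CGC model): inputs are split into
# groups derived from a causal graph, each group is encoded independently so
# that spurious cross-group interactions cannot be learned in the early layers,
# and the concatenated group representations feed two potential-outcome heads.
import torch
import torch.nn as nn

class GraphConstrainedTARNet(nn.Module):
    def __init__(self, groups, hidden=32):
        super().__init__()
        self.groups = groups  # list of lists of input-feature indices
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(len(g), hidden), nn.ReLU()) for g in groups
        )
        rep_dim = hidden * len(groups)
        self.head_t0 = nn.Sequential(nn.Linear(rep_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.head_t1 = nn.Sequential(nn.Linear(rep_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        # Each encoder only sees its own causal-graph group of covariates.
        parts = [enc(x[:, g]) for enc, g in zip(self.encoders, self.groups)]
        rep = torch.cat(parts, dim=1)
        return self.head_t0(rep), self.head_t1(rep)

if __name__ == "__main__":
    # Toy usage: 6 covariates split into two groups by a (hypothetical) causal graph.
    groups = [[0, 1, 2], [3, 4, 5]]
    model = GraphConstrainedTARNet(groups)
    x, t, y = torch.randn(128, 6), torch.randint(0, 2, (128, 1)).float(), torch.randn(128, 1)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):
        y0_hat, y1_hat = model(x)
        y_hat = t * y1_hat + (1 - t) * y0_hat  # factual outcome prediction
        loss = ((y_hat - y) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    cate = (model(x)[1] - model(x)[0]).mean()  # estimated average treatment effect
    print(f"estimated ATE: {cate.item():.3f}")
```

Per the abstract, the actual constraints can be combined with existing representation learning base models; the sketch only conveys the general idea of blocking spurious cross-group interactions in the early layers.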
Related papers
- Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration [74.09687562334682]
We introduce a novel training data attribution method called Debias and Denoise Attribution (DDA).
Our method significantly outperforms existing approaches, achieving an average AUC of 91.64%.
DDA exhibits strong generality and scalability across various sources and different-scale models like LLaMA2, QWEN2, and Mistral.
arXiv Detail & Related papers (2024-10-02T07:14:26Z) - Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method, which combines concepts from Optimal Transport and Shapley Values, as Explanatory Performance Estimation.
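For background, the Shapley value that such attribution approaches build on assigns feature $i$ (with value function $v$ over the feature set $N$) the quantity below; this is the classical definition, not the paper's Optimal-Transport-based estimator:

$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N| - |S| - 1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)$$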
arXiv Detail & Related papers (2024-08-24T18:28:19Z) - C-XGBoost: A tree boosting model for causal effect estimation [8.246161706153805]
Causal effect estimation aims at estimating the Average Treatment Effect as well as the Conditional Average Treatment Effect of a treatment on an outcome from the available data.
We propose a new causal inference model, named C-XGBoost, for the prediction of potential outcomes.
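The summary does not detail the C-XGBoost architecture; as a hedged sketch of the underlying task, the snippet below estimates potential outcomes, CATE, and ATE with a plain two-model (T-learner) gradient-boosting setup. The synthetic data-generating process and hyperparameters are assumptions.

```python
# Generic two-model (T-learner) sketch for ATE/CATE estimation with gradient
# boosting; illustrates the potential-outcome prediction task, not the paper's
# actual C-XGBoost model.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
T = rng.binomial(1, 0.5, size=n)
Y = X[:, 0] + 2.0 * T * (X[:, 1] > 0) + rng.normal(scale=0.1, size=n)  # effect depends on X[:, 1]

# Fit one outcome model per treatment arm.
model_t1 = XGBRegressor(n_estimators=200, max_depth=3).fit(X[T == 1], Y[T == 1])
model_t0 = XGBRegressor(n_estimators=200, max_depth=3).fit(X[T == 0], Y[T == 0])

cate = model_t1.predict(X) - model_t0.predict(X)  # Conditional Average Treatment Effect
ate = cate.mean()                                  # Average Treatment Effect
print(f"Estimated ATE: {ate:.3f}")
```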
arXiv Detail & Related papers (2024-03-31T17:43:37Z) - Uplift Modeling based on Graph Neural Network Combined with Causal Knowledge [9.005051998738134]
We propose a framework based on graph neural networks that combines causal knowledge with an estimate of the uplift value.
Our findings demonstrate that this method works effectively for predicting uplift values, with small errors in typical simulated data.
arXiv Detail & Related papers (2023-11-14T07:21:00Z) - A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
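For reference, a standard PAC-Bayes bound of the kind being specialized here (McAllester/Maurer form, not the paper's interpolating-regime result): with probability at least $1-\delta$ over an i.i.d. sample of size $n$, for every posterior $Q$ and a fixed prior $P$ over hypotheses,

$$\mathbb{E}_{h \sim Q}\bigl[R(h)\bigr] \;\le\; \mathbb{E}_{h \sim Q}\bigl[\hat{R}_n(h)\bigr] + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}$$

where $R$ is the true risk and $\hat{R}_n$ the empirical risk.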
arXiv Detail & Related papers (2023-11-13T01:48:08Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
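The $\epsilon$-mass perturbation estimate referenced above is in the spirit of the classical influence-function approximation, shown here as background (GIF's graph-aware correction is not reproduced): perturbing the weight of a training point $z$ by $\epsilon$ shifts the empirical-risk minimizer $\hat{\theta}$ approximately by

$$\hat{\theta}_{\epsilon, z} \;\approx\; \hat{\theta} - \epsilon\, H_{\hat{\theta}}^{-1} \nabla_{\theta}\, \ell(z, \hat{\theta}), \qquad H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2}\, \ell(z_i, \hat{\theta}),$$

with removal of $z$ corresponding to $\epsilon = -\tfrac{1}{n}$.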
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - From Causal Pairs to Causal Graphs [1.5469452301122175]
Causal structure learning from observational data remains a non-trivial task.
Motivated by the 'Cause-Effect Pair' NIPS 2013 Workshop on Causality Challenge, we take a different approach and generate a probability distribution over all possible graphs.
The goal of the paper is to propose new methods based on this probabilistic information and compare their performance with traditional and state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-08T15:28:55Z) - Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
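As background for the moment-restriction language (the paper's exact objective is not reproduced here), the standard unconditional moment restriction for an instrumental variable $Z$, outcome $Y$, and structural function $f$ is

$$\mathbb{E}\bigl[\, Z\,\bigl(Y - f(X)\bigr) \,\bigr] = 0,$$

i.e. the instrument is required to be uncorrelated with the residual.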
arXiv Detail & Related papers (2022-08-18T01:31:25Z) - Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
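A hedged sketch of the kind of post-hoc check described above: fit any CATE estimator, then permute one feature at a time and measure how much the predicted effects change. The two-model estimator, the data-generating process, and the importance measure are illustrative assumptions, not the paper's benchmarking protocol.

```python
# Permutation importance on the *effect* predictions of a simple CATE model:
# how much does the predicted CATE change when one feature is shuffled?
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 4))
T = rng.binomial(1, 0.5, size=n)
Y = X[:, 0] + 1.5 * T * X[:, 2] + rng.normal(scale=0.1, size=n)  # effect modified only by feature 2

m1 = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1])
m0 = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0])
cate = m1.predict(X) - m0.predict(X)

for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    cate_perm = m1.predict(Xp) - m0.predict(Xp)
    print(f"feature {j}: mean |delta CATE| = {np.mean(np.abs(cate - cate_perm)):.3f}")
```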
arXiv Detail & Related papers (2022-06-16T17:59:05Z) - An improved neural network model for treatment effect estimation [3.1372269816123994]
We propose a new neural-network-based model for predicting the potential outcomes and the propensity score.
Numerical experiments illustrate that the proposed model reports better treatment effect estimation performance compared to state-of-the-art models.
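A minimal sketch of the architecture family the summary points to (a shared representation feeding two potential-outcome heads and a propensity head, Dragonnet-style); layer sizes and structure are assumptions, since the summary does not give the paper's exact model.

```python
# Sketch of a network that jointly predicts the two potential outcomes and the
# propensity score from a shared representation (Dragonnet-style).
import torch
import torch.nn as nn

class OutcomeAndPropensityNet(nn.Module):
    def __init__(self, d_in, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(d_in, hidden), nn.ELU(),
                                    nn.Linear(hidden, hidden), nn.ELU())
        self.y0 = nn.Linear(hidden, 1)  # potential outcome under control
        self.y1 = nn.Linear(hidden, 1)  # potential outcome under treatment
        self.e = nn.Linear(hidden, 1)   # propensity score logit

    def forward(self, x):
        z = self.shared(x)
        return self.y0(z), self.y1(z), torch.sigmoid(self.e(z))

# Usage: train with a factual-outcome loss plus a cross-entropy term on the
# propensity head, e.g.
#   y0_hat, y1_hat, e_hat = net(x)
#   loss = mse(t * y1_hat + (1 - t) * y0_hat, y) + bce(e_hat, t)
```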
arXiv Detail & Related papers (2022-05-23T07:56:06Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.