Estimating the Effects of Continuous-valued Interventions using
Generative Adversarial Networks
- URL: http://arxiv.org/abs/2002.12326v2
- Date: Sun, 22 Nov 2020 20:17:54 GMT
- Title: Estimating the Effects of Continuous-valued Interventions using
Generative Adversarial Networks
- Authors: Ioana Bica, James Jordon, Mihaela van der Schaar
- Abstract summary: We build on the generative adversarial networks (GANs) framework to address the problem of estimating the effect of continuous-valued interventions.
Our model, SCIGAN, is flexible and capable of simultaneously estimating counterfactual outcomes for several different continuous interventions.
To address the challenges presented by shifting to continuous interventions, we propose a novel architecture for our discriminator.
- Score: 103.14809802212535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While much attention has been given to the problem of estimating the effect
of discrete interventions from observational data, relatively little work has
been done in the setting of continuous-valued interventions, such as treatments
associated with a dosage parameter. In this paper, we tackle this problem by
building on a modification of the generative adversarial networks (GANs)
framework. Our model, SCIGAN, is flexible and capable of simultaneously
estimating counterfactual outcomes for several different continuous
interventions. The key idea is to use a significantly modified GAN model to
learn to generate counterfactual outcomes, which can then be used to learn an
inference model, using standard supervised methods, capable of estimating these
counterfactuals for a new sample. To address the challenges presented by
shifting to continuous interventions, we propose a novel architecture for our
discriminator - we build a hierarchical discriminator that leverages the
structure of the continuous intervention setting. Moreover, we provide
theoretical results to support our use of the GAN framework and of the
hierarchical discriminator. In the experiments section, we introduce a new
semi-synthetic data simulation for use in the continuous intervention setting
and demonstrate improvements over the existing benchmark models.
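The abstract describes a two-stage procedure: a GAN with a hierarchical discriminator learns to generate counterfactual dose-response outcomes, and a separate inference network is then fit to the generated outcomes with standard supervised learning. The sketch below illustrates the adversarial stage in PyTorch under simplifying assumptions (one treatment with a single continuous dosage in [0, 1], a flat stand-in for the hierarchical discriminator, and illustrative names such as CounterfactualGenerator and train_step); it is not the authors' released implementation.

```python
# Minimal sketch of the adversarial stage described above, under simplifying
# assumptions: one treatment with a single continuous dosage in [0, 1], a flat
# stand-in for the hierarchical discriminator, and illustrative names
# (CounterfactualGenerator, DosageDiscriminator, train_step) that are not
# taken from the SCIGAN paper or its released code.
import torch
import torch.nn as nn


class CounterfactualGenerator(nn.Module):
    """Maps (covariates, noise, dosage) to a counterfactual outcome."""

    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z, dosage):
        return self.net(torch.cat([x, z, dosage], dim=-1))


class DosageDiscriminator(nn.Module):
    """Scores which dosage slot in a sampled set carries the factual outcome."""

    def __init__(self, x_dim, n_dosages, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 2 * n_dosages, hidden), nn.ReLU(),
            nn.Linear(hidden, n_dosages),  # one logit per dosage slot
        )

    def forward(self, x, dosages, outcomes):
        return self.net(torch.cat([x, dosages, outcomes], dim=-1))


def train_step(gen, disc, opt_g, opt_d, x, d_factual, y_factual,
               n_dosages=5, z_dim=8):
    """One adversarial step: the generator fills in outcomes for randomly
    sampled dosages; the discriminator tries to spot the factual slot."""
    batch = x.size(0)
    # Slot 0 holds the observed dosage/outcome; the remaining slots are random dosages.
    dosages = torch.cat([d_factual, torch.rand(batch, n_dosages - 1)], dim=-1)
    z = torch.randn(batch, z_dim)
    counterfactuals = [gen(x, z, dosages[:, j:j + 1]) for j in range(1, n_dosages)]
    outcomes = torch.cat([y_factual] + counterfactuals, dim=-1)
    target = torch.zeros(batch, dtype=torch.long)  # index of the factual slot

    # Discriminator update: identify which slot is factual.
    d_loss = nn.functional.cross_entropy(disc(x, dosages, outcomes.detach()), target)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: make counterfactual slots indistinguishable from the factual one.
    g_loss = -nn.functional.cross_entropy(disc(x, dosages, outcomes), target)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()


# Toy usage with synthetic shapes only (not the paper's benchmark data).
x_dim, z_dim = 10, 8
gen = CounterfactualGenerator(x_dim, z_dim)
disc = DosageDiscriminator(x_dim, n_dosages=5)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
x, d_obs, y_obs = torch.randn(32, x_dim), torch.rand(32, 1), torch.randn(32, 1)
print(train_step(gen, disc, opt_g, opt_d, x, d_obs, y_obs))
```

After adversarial training, the generator can be queried over a grid of dosages to produce a pseudo-labelled dataset, and an ordinary regression network fit to that dataset plays the role of the supervised inference model mentioned in the abstract.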
Related papers
- Exogenous Matching: Learning Good Proposals for Tractable Counterfactual Estimation [1.9662978733004601]
We propose an importance sampling method for tractable and efficient estimation of counterfactual expressions.
By minimizing a common upper bound of counterfactual estimators, we transform the variance minimization problem into a conditional distribution learning problem.
We validate the theoretical results through experiments under various types and settings of Structural Causal Models (SCMs) and demonstrate the outperformance on counterfactual estimation tasks.
arXiv Detail & Related papers (2024-10-17T03:08:28Z)
- Causal Rule Forest: Toward Interpretable and Precise Treatment Effect Estimation [0.0]
Causal Rule Forest (CRF) is a novel approach to learning hidden patterns from data and transforming the patterns into interpretable multi-level Boolean rules.
By training other interpretable causal inference models on data representations learned by CRF, we can reduce their predictive errors in estimating Heterogeneous Treatment Effects (HTE) and Conditional Average Treatment Effects (CATE).
Our experiments underscore the potential of CRF to advance personalized interventions and policies.
arXiv Detail & Related papers (2024-08-27T13:32:31Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Composable Interventions for Language Models [60.32695044723103]
Test-time interventions for language models can enhance factual accuracy, mitigate harmful outputs, and improve model efficiency without costly retraining.
But despite a flood of new methods, different types of interventions are largely developing independently.
We introduce composable interventions, a framework to study the effects of using multiple interventions on the same language models.
arXiv Detail & Related papers (2024-07-09T01:17:44Z)
- Deriving Causal Order from Single-Variable Interventions: Guarantees & Algorithm [14.980926991441345]
We show that the causal order can be effectively extracted from datasets containing interventional data under realistic assumptions about the data distribution.
We introduce interventional faithfulness, which relies on comparisons between the marginal distributions of each variable across observational and interventional settings.
We also introduce Intersort, an algorithm designed to infer the causal order from datasets containing large numbers of single-variable interventions.
arXiv Detail & Related papers (2024-05-28T16:07:17Z)
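The entry above says interventional faithfulness rests on comparing each variable's marginal distribution between observational and interventional settings. The snippet below is a minimal illustration of that comparison on toy data, scoring per-variable shifts with a Wasserstein distance; the function name marginal_shift_scores and the toy data are assumptions for illustration, not the Intersort algorithm or the paper's actual criterion.

```python
# Minimal illustration of the marginal-comparison idea behind interventional
# faithfulness: for each variable, compare its marginal distribution in the
# observational data against each single-variable interventional dataset.
# Illustrative sketch on toy data only, not the Intersort algorithm.
import numpy as np
from scipy.stats import wasserstein_distance


def marginal_shift_scores(obs, interventions):
    """obs: (n, d) observational samples; interventions: dict mapping the
    index of the intervened variable to an (m, d) array of samples."""
    d = obs.shape[1]
    scores = {}
    for target, data in interventions.items():
        # Distance between observational and interventional marginals, per variable.
        scores[target] = np.array(
            [wasserstein_distance(obs[:, j], data[:, j]) for j in range(d)]
        )
    return scores


# Toy example: intervening on variable 0 shifts its descendant (variable 1)
# but leaves an unrelated variable (variable 2) unchanged.
rng = np.random.default_rng(0)
x0 = rng.normal(size=1000)
obs = np.column_stack([x0, 2 * x0 + rng.normal(size=1000), rng.normal(size=1000)])
x0_do = rng.normal(loc=3.0, size=1000)  # do(X0) drawn from a shifted distribution
intv = np.column_stack([x0_do, 2 * x0_do + rng.normal(size=1000), rng.normal(size=1000)])
print(marginal_shift_scores(obs, {0: intv})[0].round(2))
```

Non-target variables that shift strongly under an intervention are plausibly downstream of the target, which is the kind of signal a causal-order score can exploit.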
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- A Closer Look at the Intervention Procedure of Concept Bottleneck Models [18.222350428973343]
Concept bottleneck models (CBMs) are a class of interpretable neural network models that predict the target response of a given input based on its high-level concepts.
CBMs enable domain experts to intervene on the predicted concepts and rectify any mistakes at test time, so that more accurate task predictions can be made at the end.
We develop various ways of selecting intervening concepts to improve the intervention effectiveness and conduct an array of in-depth analyses as to how they evolve under different circumstances.
arXiv Detail & Related papers (2023-02-28T02:37:24Z)
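The entry above describes the core CBM mechanism: predict human-interpretable concepts from the input, let an expert overwrite some of the predicted concepts at test time, and re-run the label predictor on the corrected concepts. Below is a minimal, generic sketch of that intervention step; the class and argument names (ConceptBottleneck, known_mask, etc.) are hypothetical, and it does not implement the concept-selection strategies studied in the paper.

```python
# Minimal, generic sketch of a test-time concept intervention in a concept
# bottleneck model. Names (ConceptBottleneck, known_mask, etc.) are
# illustrative; this does not implement the concept-selection strategies
# studied in the paper above.
import torch
import torch.nn as nn


class ConceptBottleneck(nn.Module):
    def __init__(self, x_dim, n_concepts, n_classes, hidden=64):
        super().__init__()
        self.concept_net = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_concepts)
        )
        self.task_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x, known_concepts=None, known_mask=None):
        c = torch.sigmoid(self.concept_net(x))  # predicted concept probabilities
        if known_concepts is not None:
            # Intervention: overwrite the selected concepts with expert-provided
            # values and let the corrected concepts flow into the label predictor.
            c = torch.where(known_mask.bool(), known_concepts, c)
        return self.task_net(c), c


# Toy usage: an expert corrects the first two concepts of every sample.
model = ConceptBottleneck(x_dim=16, n_concepts=8, n_classes=3)
x = torch.randn(4, 16)
mask = torch.zeros(4, 8)
mask[:, :2] = 1.0                  # which concepts the expert corrects
expert_values = torch.ones(4, 8)   # expert concept values (only masked entries are used)
logits_plain, _ = model(x)
logits_intervened, _ = model(x, known_concepts=expert_values, known_mask=mask)
```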
- Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can also be used to update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Causal Modeling with Stochastic Confounders [11.881081802491183]
This work extends causal inference to settings with stochastic confounders.
We propose a new approach to variational estimation for causal inference based on a representer theorem with a random input space.
arXiv Detail & Related papers (2020-04-24T00:34:44Z)