Flow IV: Counterfactual Inference In Nonseparable Outcome Models Using Instrumental Variables
- URL: http://arxiv.org/abs/2508.01321v1
- Date: Sat, 02 Aug 2025 11:24:03 GMT
- Title: Flow IV: Counterfactual Inference In Nonseparable Outcome Models Using Instrumental Variables
- Authors: Marc Braun, Jose M. Peña, Adel Daoud
- Abstract summary: We show that under standard IV assumptions, along with the assumptions that latent noises in treatment and outcome are strictly monotonic and jointly Gaussian, the treatment-outcome relationship becomes uniquely identifiable from observed data. This enables counterfactual inference even in nonseparable models. We implement our approach by training a normalizing flow to maximize the likelihood of the observed data, demonstrating accurate recovery of the underlying outcome function.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To reach human level intelligence, learning algorithms need to incorporate causal reasoning. But identifying causality, and particularly counterfactual reasoning, remains an elusive task. In this paper, we make progress on this task by utilizing instrumental variables (IVs). IVs are a classic tool for mitigating bias from unobserved confounders when estimating causal effects. While IV methods have been extended to non-separable structural models at the population level, existing approaches to counterfactual prediction typically assume additive noise in the outcome. In this paper, we show that under standard IV assumptions, along with the assumptions that latent noises in treatment and outcome are strictly monotonic and jointly Gaussian, the treatment-outcome relationship becomes uniquely identifiable from observed data. This enables counterfactual inference even in nonseparable models. We implement our approach by training a normalizing flow to maximize the likelihood of the observed data, demonstrating accurate recovery of the underlying outcome function. We call our method Flow IV.
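Once the outcome function is identified, counterfactual inference follows the usual abduction-action-prediction recipe: invert the strictly monotone outcome map to recover a unit's latent noise, then replay that noise under the counterfactual treatment. A minimal sketch of that step, using a hand-picked monotone toy function in place of the flow-learned one (the function `f` below is illustrative, not the paper's):

```python
# Toy nonseparable outcome function y = f(t, u), strictly increasing in the
# latent noise u. In Flow IV, f would be recovered by the trained normalizing
# flow; here it is a hand-picked stand-in to illustrate the counterfactual step.
def f(t, u):
    from math import tanh
    return 1.5 * t + tanh(u) * (1.0 + 0.5 * t) + u

def abduct_u(t_obs, y_obs, lo=-10.0, hi=10.0, iters=80):
    """Invert f(t_obs, .) = y_obs by bisection; well-defined because
    f is strictly increasing in u."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(t_obs, mid) < y_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Abduction-action-prediction: recover the unit's noise from its factual
# observation, then evaluate the outcome under the counterfactual treatment.
u_true = 0.7
t_obs, t_cf = 2.0, 3.0
y_obs = f(t_obs, u_true)       # factual outcome
u_hat = abduct_u(t_obs, y_obs) # abduction
y_cf = f(t_cf, u_hat)          # counterfactual prediction
```

Strict monotonicity in the noise is what makes the abduction step a well-posed inversion; without it, multiple noise values could explain the same factual observation.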
Related papers
- Distributional Instrumental Variable Method [4.34680331569334]
The aim of this work is to estimate the entire interventional distribution. We propose a method called Distributional Instrumental Variable (DIV), which uses generative modelling in a nonlinear IV setting.
arXiv Detail & Related papers (2025-02-11T15:33:06Z)
- Disentangled Representation Learning for Causal Inference with Instruments [31.67220687652054]
Existing IV-based estimators need a known IV or other strong assumptions, such as the existence of two or more IVs in the system. In this paper, we consider a relaxed requirement, which assumes there is an IV proxy in the system without knowing which variable is the proxy. We propose a Variational AutoEncoder (VAE) based disentangled representation learning method to learn an IV representation from a dataset with latent confounders.
arXiv Detail & Related papers (2024-12-05T22:18:48Z)
- Regularized DeepIV with Model Selection [72.17508967124081]
Regularized DeepIV (RDIV) regression can converge to the least-norm IV solution.
Our method matches the current state-of-the-art convergence rate.
arXiv Detail & Related papers (2024-03-07T05:38:56Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Experiments conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Causal Inference with Conditional Instruments using Deep Generative Models [21.771832598942677]
A standard IV is expected to be related to the treatment variable and independent of all other variables in the system.
The conditional IV (CIV) method has been proposed to allow a variable to serve as an instrument conditional on a set of variables.
We propose to learn the representations of a CIV and its conditioning set from data with latent confounders for average causal effect estimation.
arXiv Detail & Related papers (2022-11-29T14:31:54Z)
- Confounder Balancing for Instrumental Variable Regression with Latent Variable [29.288045682505615]
This paper studies the confounding effects from the unmeasured confounders and the imbalance of observed confounders in IV regression.
We propose a Confounder Balanced IV Regression (CB-IV) algorithm to remove the bias from the unmeasured confounders and the imbalance of observed confounders.
arXiv Detail & Related papers (2022-11-18T03:13:53Z)
- Active Bayesian Causal Inference [72.70593653185078]
We propose Active Bayesian Causal Inference (ABCI), a fully-Bayesian active learning framework for integrated causal discovery and reasoning.
ABCI jointly infers a posterior over causal models and queries of interest.
We show that our approach is more data-efficient than several baselines that only focus on learning the full causal graph.
arXiv Detail & Related papers (2022-06-04T22:38:57Z)
- Ancestral Instrument Method for Causal Inference without Complete Knowledge [0.0]
Unobserved confounding is the main obstacle to causal effect estimation from observational data.
Conditional IVs have been proposed to relax the requirement of standard IVs by conditioning on a set of observed variables.
We develop an algorithm for unbiased causal effect estimation with a given ancestral IV and observational data.
arXiv Detail & Related papers (2022-01-11T07:02:16Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)
- Instrumental Variable Value Iteration for Causal Offline Reinforcement Learning [107.70165026669308]
In offline reinforcement learning (RL), an optimal policy is learned solely from previously collected observational data.
We study a confounded Markov decision process where the transition dynamics admit an additive nonlinear functional form.
We propose a provably efficient IV-aided Value Iteration (IVVI) algorithm based on a primal-dual reformulation of the conditional moment restriction.
arXiv Detail & Related papers (2021-02-19T13:01:40Z)
- On Disentangled Representations Learned From Correlated Data [59.41587388303554]
We bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data.
We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations.
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
arXiv Detail & Related papers (2020-06-14T12:47:34Z)
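The quantile-regression route to counterfactuals (as in the nonlinear quantile regression paper above) rests on the same monotonicity idea: when the outcome is strictly increasing in the latent noise, a unit's noise is pinned down by its quantile level among outcomes at the same treatment. A minimal empirical sketch, with simulated data, no confounding, and plain empirical quantiles standing in for a learned quantile model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a strictly monotone, nonseparable outcome y = t * exp(0.3 u).
# Confounding is deliberately left out so the quantile-preservation idea is
# visible in isolation; the toy outcome function is illustrative only.
n = 200_000
t = rng.choice([2.0, 3.0], size=n)  # two treatment levels
u = rng.normal(size=n)              # latent noise
y = t * np.exp(0.3 * u)

# Factual observation for one unit.
u_star = 0.5
t_obs, t_cf = 2.0, 3.0
y_obs = t_obs * np.exp(0.3 * u_star)

# Abduction: the unit's quantile level among outcomes with the same treatment.
tau = np.mean(y[t == t_obs] <= y_obs)

# Prediction: read off the same quantile under the counterfactual treatment.
y_cf_hat = np.quantile(y[t == t_cf], tau)
```

With a large sample, `y_cf_hat` approaches the true counterfactual `t_cf * exp(0.3 * u_star)`; in practice the empirical quantiles would be replaced by a fitted conditional quantile model.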
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.