Generalized Bayes for Causal Inference
- URL: http://arxiv.org/abs/2603.03035v1
- Date: Tue, 03 Mar 2026 14:27:23 GMT
- Title: Generalized Bayes for Causal Inference
- Authors: Emil Javurek, Dennis Frauen, Yuxin Wang, Stefan Feuerriegel,
- Abstract summary: Uncertainty quantification is central to many applications of causal machine learning. Standard Bayesian approaches typically require specifying a probabilistic model for the data-generating process. We propose a generalized Bayesian framework for causal inference.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification is central to many applications of causal machine learning, yet principled Bayesian inference for causal effects remains challenging. Standard Bayesian approaches typically require specifying a probabilistic model for the data-generating process, including high-dimensional nuisance components such as propensity scores and outcome regressions. Standard posteriors are thus vulnerable to strong modeling choices, including complex prior elicitation. In this paper, we propose a generalized Bayesian framework for causal inference. Our framework avoids explicit likelihood modeling; instead, we place priors directly on the causal estimands and update these using an identification-driven loss function, which yields generalized posteriors for causal effects. As a result, our framework turns existing loss-based causal estimators into estimators with full uncertainty quantification. Our framework is flexible and applicable to a broad range of causal estimands (e.g., ATE, CATE). Further, our framework can be applied on top of state-of-the-art causal machine learning pipelines (e.g., Neyman-orthogonal meta-learners). For Neyman-orthogonal losses, we show that the generalized posteriors converge to their oracle counterparts and remain robust to first-stage nuisance estimation error. With calibration, we thus obtain valid frequentist uncertainty even when nuisance estimators converge at slower-than-parametric rates. Empirically, we demonstrate that our proposed framework offers causal effect estimation with calibrated uncertainty across several causal inference settings. To the best of our knowledge, this is the first flexible framework for constructing generalized Bayesian posteriors for causal machine learning.
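The update the abstract describes (a prior placed directly on the causal estimand, combined with an identification-driven loss) is, in its simplest form, a Gibbs-type generalized posterior. The following is a minimal sketch for the ATE with an AIPW-style Neyman-orthogonal loss, not the paper's implementation: the toy data, the use of the true nuisances as stand-ins for first-stage fits, and the learning rate `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: covariate X, binary treatment T, outcome Y.
n = 2000
X = rng.normal(size=n)
e = 1 / (1 + np.exp(-X))               # true propensity score
T = rng.binomial(1, e)
tau_true = 1.5                         # true ATE
Y = X + tau_true * T + rng.normal(size=n)

# True outcome regressions, standing in for first-stage nuisance fits.
mu1 = X + tau_true
mu0 = X

# Neyman-orthogonal (AIPW) pseudo-outcome: its mean identifies the ATE.
psi = mu1 - mu0 + T * (Y - mu1) / e - (1 - T) * (Y - mu0) / (1 - e)

# Generalized (Gibbs) posterior on a grid:
#   posterior(tau) ∝ prior(tau) * exp(-lam * sum_i (psi_i - tau)^2)
grid = np.linspace(0, 3, 601)
lam = 0.5                              # illustrative; would be calibrated in practice
prior = np.exp(-0.5 * grid**2 / 4.0)   # N(0, 2^2) prior on tau, unnormalized
loglik = -lam * ((psi[:, None] - grid[None, :]) ** 2).sum(axis=0)
logpost = np.log(prior) + loglik
post = np.exp(logpost - logpost.max())
post /= post.sum()

tau_hat = (grid * post).sum()
print(round(tau_hat, 2))               # posterior mean; close to the true ATE of 1.5
```

With the orthogonal pseudo-outcome, small errors in the nuisance estimates enter the loss only at second order, which is what makes the resulting generalized posterior robust to first-stage estimation error.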
Related papers
- Uncertainty Quantification for Regression using Proper Scoring Rules [76.24649098854219]
We introduce a unified UQ framework for regression based on proper scoring rules, such as CRPS, logarithmic, squared error, and quadratic scores. We derive closed-form expressions for the uncertainty measures under practical parametric assumptions and show how to estimate them using ensembles of models. Our broad evaluation on synthetic and real-world regression datasets provides guidance for selecting reliable UQ measures.
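One of the closed forms this summary alludes to is the standard CRPS of a Gaussian predictive distribution (Gneiting & Raftery, 2007); the snippet below is a self-contained illustration, not code from the paper:

```python
import math

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS for a Gaussian predictive distribution N(mu, sigma^2).

    CRPS = sigma * [ z*(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ],
    with z = (y - mu) / sigma. Lower is better.
    """
    z = (y - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # standard normal cdf
    return sigma * (z * (2 * Phi - 1) + 2 * phi - 1 / math.sqrt(math.pi))

# A forecast centred on the observation; at y = mu the CRPS equals
# sigma * (sqrt(2) - 1) / sqrt(pi):
print(round(crps_gaussian(0.0, 0.0, 1.0), 4))   # 0.2337
```

Both miscalibration (mean far from the observation) and unnecessary spread (larger sigma at a well-centred forecast) increase the score.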
arXiv Detail & Related papers (2025-09-30T17:52:12Z) - Bayesian Pliable Lasso with Horseshoe Prior for Interaction Effects in GLMs with Missing Responses [0.0]
We propose a pliable lasso that places sparsity-inducing priors, such as the horseshoe, on both main and interaction effects. Our framework yields sparse, interpretable interaction structures, and principled measures of uncertainty. Our method is implemented in the package hspliable, available on GitHub.
arXiv Detail & Related papers (2025-09-09T08:28:21Z) - Principled Input-Output-Conditioned Post-Hoc Uncertainty Estimation for Regression Networks [1.4671424999873808]
Uncertainty is critical in safety-sensitive applications but is often omitted from off-the-shelf neural networks due to adverse effects on predictive performance. We propose a theoretically grounded framework for post-hoc uncertainty estimation in regression tasks by fitting an auxiliary model to both original inputs and frozen model outputs.
arXiv Detail & Related papers (2025-06-01T09:13:27Z) - Generalization Certificates for Adversarially Robust Bayesian Linear Regression [16.3368950151084]
Adversarial robustness of machine learning models is critical to ensuring reliable performance under data perturbations. Recent progress has focused on point estimators; this paper considers distributional predictors. Experiments on real and synthetic datasets demonstrate the superior robustness of the derived adversarially robust posterior over the Bayes posterior.
arXiv Detail & Related papers (2025-02-20T06:25:30Z) - Effective Bayesian Causal Inference via Structural Marginalisation and Autoregressive Orders [16.682775063684907]
We study the use of uncertainty in causal inference over all causal models. We decompose structure marginalisation into the marginalisation over (i) causal orders and (ii) directed acyclic graphs (DAGs) given an order. Our method outperforms the state-of-the-art in structure learning on simulated non-linear additive noise benchmarks.
arXiv Detail & Related papers (2024-02-22T18:39:24Z) - Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
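The confidence-based deferral rule summarised above can be sketched in a few lines; the models and thresholds here are hypothetical stand-ins, not the paper's setup:

```python
def cascade_predict(x, models, thresholds):
    """Confidence-based cascade: try cheaper models first and defer to the
    next one whenever the top predicted probability falls below its threshold.

    models: callables returning a list of class probabilities.
    thresholds: confidence cut-offs, one per model except the last.
    """
    for model, thr in zip(models[:-1], thresholds):
        probs = model(x)
        if max(probs) >= thr:            # confident enough: terminate early
            return probs.index(max(probs))
    probs = models[-1](x)                # the terminal model always predicts
    return probs.index(max(probs))

cheap = lambda x: [0.55, 0.45]           # hypothetical small model
strong = lambda x: [0.10, 0.90]          # hypothetical large model
print(cascade_predict(None, [cheap, strong], [0.8]))   # 0.55 < 0.8, defer -> class 1
```

The rule inspects only the current model's confidence, never the downstream models, which is the "obliviousness" the summary refers to.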
arXiv Detail & Related papers (2023-07-06T04:13:57Z) - Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
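Quantile regression of the kind this summary describes is trained with the pinball (quantile) loss, whose empirical minimiser is the sample quantile. A small self-contained illustration (not the paper's neural implementation):

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Pinball loss: minimised in expectation by the tau-quantile of y."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Minimising the empirical pinball loss over a grid recovers the sample quantile:
rng = np.random.default_rng(1)
y = rng.normal(size=10_000)
grid = np.linspace(-3, 3, 1201)
losses = [pinball_loss(y, q, 0.9) for q in grid]
q90 = grid[int(np.argmin(losses))]
print(q90)   # close to the N(0,1) 0.9-quantile, ~1.28
```

In a neural quantile regression, `q_pred` would be the network's output for a given input, and the same loss is minimised by gradient descent.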
arXiv Detail & Related papers (2023-06-09T08:30:51Z) - Robust Bayesian Inference for Berkson and Classical Measurement Error Models [9.712913056924826]
We propose a nonparametric framework for dealing with measurement error.
It is suitable for both Classical and Berkson error models.
It offers flexibility in the choice of loss function depending on the type of regression model.
arXiv Detail & Related papers (2023-06-02T11:48:15Z) - Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.