NESTER: An Adaptive Neurosymbolic Method for Causal Effect Estimation
- URL: http://arxiv.org/abs/2211.04370v5
- Date: Mon, 8 Jan 2024 07:01:01 GMT
- Title: NESTER: An Adaptive Neurosymbolic Method for Causal Effect Estimation
- Authors: Abbavaram Gowtham Reddy, Vineeth N Balasubramanian
- Abstract summary: Causal effect estimation from observational data is a central problem in causal inference.
We propose an adaptive method called Neurosymbolic Causal Effect Estimator (NESTER).
Our comprehensive empirical results show that NESTER performs better than state-of-the-art methods on benchmark datasets.
- Score: 37.361149306896024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal effect estimation from observational data is a central problem in causal inference. Methods based on the potential outcomes framework solve this problem by exploiting inductive biases and heuristics from causal inference. Each of these methods addresses a specific aspect of causal effect estimation, such as controlling the propensity score or enforcing randomization, by designing neural network (NN) architectures and regularizers. In this paper, we propose an adaptive method called Neurosymbolic Causal Effect Estimator (NESTER), a generalized method for causal effect estimation. NESTER integrates the ideas used in existing multi-head-NN-based methods for causal effect estimation into one framework. We design a Domain Specific Language (DSL) tailored for causal effect estimation based on causal inductive biases used in the literature. We conduct a theoretical analysis to investigate NESTER's efficacy in estimating causal effects. Our comprehensive empirical results show that NESTER performs better than state-of-the-art methods on benchmark datasets.
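The abstract does not spell out the architecture, but the multi-head NN estimators it refers to (e.g., TARNet-style two-head networks) are easy to illustrate. The sketch below is a minimal example of that family, assuming a PyTorch setup; all class and function names are illustrative, and it is not the NESTER implementation, which is additionally described as searching over programs in its DSL (not shown here).

```python
# Minimal sketch of a TARNet-style multi-head estimator (illustrative only,
# not the NESTER implementation): a shared representation feeds one outcome
# head per treatment arm, and the per-unit effect is the head difference.
import torch
import torch.nn as nn

class TwoHeadEstimator(nn.Module):
    def __init__(self, x_dim: int, h_dim: int = 64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(x_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, h_dim), nn.ReLU(),
        )
        self.head_t0 = nn.Linear(h_dim, 1)  # predicts the untreated outcome Y(0)
        self.head_t1 = nn.Linear(h_dim, 1)  # predicts the treated outcome Y(1)

    def forward(self, x):
        phi = self.shared(x)
        return self.head_t0(phi), self.head_t1(phi)

def factual_loss(model, x, t, y):
    # Each head is trained only on units that actually received that treatment.
    y0_hat, y1_hat = model(x)
    y_hat = torch.where(t.bool(), y1_hat, y0_hat)
    return nn.functional.mse_loss(y_hat, y)

def estimate_effects(model, x):
    # Per-unit CATE and the ATE as its mean, from the two head predictions.
    with torch.no_grad():
        y0_hat, y1_hat = model(x)
        cate = (y1_hat - y0_hat).squeeze(-1)
    return cate, cate.mean()
```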
Related papers
- Generative Intervention Models for Causal Perturbation Modeling [80.72074987374141]
In many applications, it is a priori unknown which mechanisms of a system are modified by an external perturbation.
We propose a generative intervention model (GIM) that learns to map perturbation features to distributions over atomic interventions.
arXiv Detail & Related papers (2024-11-21T10:37:57Z)
- C-XGBoost: A tree boosting model for causal effect estimation [8.246161706153805]
Causal effect estimation aims at estimating the Average Treatment Effect as well as the Conditional Average Treatment Effect of a treatment on an outcome from the available data.
We propose a new causal inference model, named C-XGBoost, for the prediction of potential outcomes.
arXiv Detail & Related papers (2024-03-31T17:43:37Z)
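The entry above does not describe how C-XGBoost itself is built; as a rough illustration of the general idea of using tree boosting to predict potential outcomes, here is a plain two-model (T-learner) sketch with the xgboost package. It is not the C-XGBoost architecture, and the hyperparameters are arbitrary.

```python
# Generic two-model (T-learner) sketch with gradient boosting, NOT the
# C-XGBoost model from the paper: fit one regressor per treatment arm on
# observational data, then read off per-unit CATEs and their mean (ATE).
import numpy as np
from xgboost import XGBRegressor

def tlearner_effects(X, t, y):
    m0 = XGBRegressor(n_estimators=200, max_depth=4)
    m1 = XGBRegressor(n_estimators=200, max_depth=4)
    m0.fit(X[t == 0], y[t == 0])           # model for untreated units
    m1.fit(X[t == 1], y[t == 1])           # model for treated units
    cate = m1.predict(X) - m0.predict(X)   # conditional effects per unit
    return cate, cate.mean()               # CATEs and the ATE
```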
- A Neural Framework for Generalized Causal Sensitivity Analysis [78.71545648682705]
We propose NeuralCSA, a neural framework for causal sensitivity analysis.
We provide theoretical guarantees that NeuralCSA is able to infer valid bounds on the causal query of interest.
arXiv Detail & Related papers (2023-11-27T17:40:02Z)
- Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation [137.3520153445413]
A notable gap exists in the evaluation of causal discovery methods, where insufficient emphasis is placed on downstream inference.
We evaluate seven established baseline causal discovery methods, including a newly proposed method based on GFlowNets.
The results of our study demonstrate that some of the algorithms studied are able to effectively capture a wide range of useful and diverse ATE modes.
arXiv Detail & Related papers (2023-07-11T02:58:10Z)
- Integrating Nearest Neighbors with Neural Network Models for Treatment Effect Estimation [3.1372269816123994]
We propose Nearest Neighboring Information for Causal Inference (NNCI), which integrates valuable nearest-neighbor information into neural network-based models for estimating treatment effects.
NNCI is applied to some of the most well-established neural network-based models for treatment effect estimation from observational data.
arXiv Detail & Related papers (2023-05-11T13:24:10Z)
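The NNCI summary above only states that nearest-neighbor information is fed into NN-based estimators. One simple way to realize that idea, assumed here purely for illustration (the actual NNCI construction may differ), is to append the mean outcome of each unit's nearest neighbors in the treated and control groups as extra covariates for a downstream network:

```python
# Hedged nearest-neighbour feature-augmentation sketch (not necessarily the
# NNCI construction): for each unit, append the mean outcome of its k nearest
# neighbours in the control group and in the treated group as two extra
# covariates, then feed the augmented matrix to any NN-based estimator.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def augment_with_neighbor_outcomes(X, t, y, k: int = 5):
    feats = []
    for arm in (0, 1):
        Xa, ya = X[t == arm], y[t == arm]
        index = NearestNeighbors(n_neighbors=k).fit(Xa)
        _, idx = index.kneighbors(X)        # neighbours of every unit in this arm
        feats.append(ya[idx].mean(axis=1))  # mean neighbour outcome per unit
    return np.column_stack([X] + feats)     # augmented covariate matrix
```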
- Towards Learning and Explaining Indirect Causal Effects in Neural Networks [22.658383399117003]
We view an NN as a structural causal model (SCM) and extend our focus to include indirect causal effects by introducing feedforward connections among input neurons.
We propose an ante-hoc method that captures and maintains direct, indirect, and total causal effects during NN model training.
We also propose an algorithm for quantifying learned causal effects in an NN model and efficient approximation strategies for quantifying causal effects in high-dimensional data.
arXiv Detail & Related papers (2023-03-24T08:17:31Z)
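The entry above mentions an algorithm for quantifying learned causal effects in an NN. As a hedged sketch of the basic ingredient, assuming a standard Monte Carlo do-intervention estimate rather than the paper's own algorithm, one can intervene on a single input feature and compare the network's average output against a reference intervention:

```python
# Illustrative sketch (not the paper's algorithm): estimate the average causal
# effect of input feature i on a network's output by setting do(x_i = value)
# across sampled inputs and comparing against do(x_i = reference).
import torch

def average_causal_effect(model, x_samples, i, value, reference=0.0):
    x_do = x_samples.clone(); x_do[:, i] = value        # do(x_i = value)
    x_ref = x_samples.clone(); x_ref[:, i] = reference  # do(x_i = reference)
    with torch.no_grad():
        return (model(x_do) - model(x_ref)).mean().item()
```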
- CausalDialogue: Modeling Utterance-level Causality in Conversations [83.03604651485327]
We have compiled and expanded upon a new dataset called CausalDialogue through crowd-sourcing.
This dataset includes multiple cause-effect pairs within a directed acyclic graph (DAG) structure.
We propose a causality-enhanced method called Exponential Average Treatment Effect (ExMATE) to enhance the impact of causality at the utterance level in training neural conversation models.
arXiv Detail & Related papers (2022-12-20T18:31:50Z)
- An evaluation framework for comparing causal inference models [3.1372269816123994]
We use the proposed evaluation methodology to compare several state-of-the-art causal effect estimation models.
The main motivation behind this approach is to eliminate the influence of a small number of instances or simulations on the benchmarking process.
arXiv Detail & Related papers (2022-08-31T21:04:20Z)
- Causal Effect Estimation using Variational Information Bottleneck [19.6760527269791]
Causal inference estimates the causal effect in a causal relationship when an intervention is applied.
We propose a method to estimate causal effects by using a Variational Information Bottleneck (CEVIB).
arXiv Detail & Related papers (2021-10-26T13:46:12Z)
- Efficient Causal Inference from Combined Observational and Interventional Data through Causal Reductions [68.6505592770171]
Unobserved confounding is one of the main challenges when estimating causal effects.
We propose a novel causal reduction method that replaces an arbitrary number of possibly high-dimensional latent confounders with a single latent confounder.
We propose a learning algorithm to estimate the parameterized reduced model jointly from observational and interventional data.
arXiv Detail & Related papers (2021-03-08T14:29:07Z)