CausalBench: A Large-scale Benchmark for Network Inference from
Single-cell Perturbation Data
- URL: http://arxiv.org/abs/2210.17283v2
- Date: Mon, 3 Jul 2023 09:12:49 GMT
- Title: CausalBench: A Large-scale Benchmark for Network Inference from
Single-cell Perturbation Data
- Authors: Mathieu Chevalley, Yusuf Roohani, Arash Mehrjou, Jure Leskovec,
Patrick Schwab
- Abstract summary: We introduce CausalBench, a benchmark suite for evaluating causal inference methods on real-world interventional data.
CausalBench incorporates biologically-motivated performance metrics, including new distribution-based interventional metrics.
- Score: 61.088705993848606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal inference is a vital aspect of multiple scientific disciplines and is
routinely applied to high-impact applications such as medicine. However,
evaluating the performance of causal inference methods in real-world
environments is challenging due to the need for observations under both
interventional and control conditions. Traditional evaluations conducted on
synthetic datasets do not reflect the performance in real-world systems. To
address this, we introduce CausalBench, a benchmark suite for evaluating
network inference methods on real-world interventional data from large-scale
single-cell perturbation experiments. CausalBench incorporates
biologically-motivated performance metrics, including new distribution-based
interventional metrics. A systematic evaluation of state-of-the-art causal
inference methods using our CausalBench suite highlights how poor scalability
of current methods limits performance. Moreover, methods that use
interventional information do not outperform those that only use observational
data, contrary to what is observed on synthetic benchmarks. Thus, CausalBench
opens new avenues in causal network inference research and provides a
principled and reliable way to track progress in leveraging real-world
interventional data.
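The "distribution-based interventional metrics" mentioned above can be illustrated with a toy sketch. The idea, stated loosely, is that if a predicted edge X → Y is real, intervening on X should shift the marginal distribution of Y relative to the observational condition. The code below is an illustrative sketch only, not CausalBench's actual implementation: the variable names, the simulated data, and the choice of Wasserstein distance as the divergence are all assumptions made for the example.

```python
# Illustrative sketch of a distribution-based interventional metric.
# NOT CausalBench's implementation: the toy data, names, and the use of
# Wasserstein distance are assumptions made for this example.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n = 2000

# Toy system: X causes Y; Z is independent of X.
x_obs = rng.normal(0.0, 1.0, n)
y_obs = 2.0 * x_obs + rng.normal(0.0, 1.0, n)
z_obs = rng.normal(0.0, 1.0, n)

# Interventional condition: clamp X to zero (loosely analogous to a
# CRISPR knockout in a single-cell perturbation experiment).
x_int = np.zeros(n)
y_int = 2.0 * x_int + rng.normal(0.0, 1.0, n)
z_int = rng.normal(0.0, 1.0, n)

def interventional_shift(target_obs, target_int):
    """Distance between a variable's marginal distribution under
    observational vs. interventional conditions."""
    return wasserstein_distance(target_obs, target_int)

# A predicted edge X -> Y is supported when intervening on X
# noticeably shifts Y's marginal; Z's marginal should barely move.
print(interventional_shift(y_obs, y_int))  # large shift: Y depends on X
print(interventional_shift(z_obs, z_int))  # small shift: Z does not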
Related papers
- Testing Generalizability in Causal Inference [3.547529079746247]
There is no formal procedure for statistically evaluating generalizability in machine learning algorithms.
We propose a systematic and quantitative framework for evaluating model generalizability in causal inference settings.
By basing simulations on real data, our method ensures more realistic evaluations, which is often missing in current work.
arXiv Detail & Related papers (2024-11-05T11:44:00Z)
- Deriving Causal Order from Single-Variable Interventions: Guarantees & Algorithm [14.980926991441345]
We show that the causal order can be effectively extracted from datasets containing interventional data, under realistic assumptions about the data distribution.
We introduce interventional faithfulness, which relies on comparisons between the marginal distributions of each variable across observational and interventional settings.
We also introduce Intersort, an algorithm designed to infer the causal order from datasets containing large numbers of single-variable interventions.
arXiv Detail & Related papers (2024-05-28T16:07:17Z)
- Benchmarking Bayesian Causal Discovery Methods for Downstream Treatment Effect Estimation [137.3520153445413]
A notable gap exists in the evaluation of causal discovery methods, where insufficient emphasis is placed on downstream inference.
We evaluate seven established baseline causal discovery methods including a newly proposed method based on GFlowNets.
The results of our study demonstrate that some of the algorithms studied are able to effectively capture a wide range of useful and diverse ATE modes.
arXiv Detail & Related papers (2023-07-11T02:58:10Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Evaluating Causal Inference Methods [0.4588028371034407]
We introduce a deep generative model-based framework, Credence, to validate causal inference methods.
arXiv Detail & Related papers (2022-02-09T00:21:22Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Estimating the Effects of Continuous-valued Interventions using Generative Adversarial Networks [103.14809802212535]
We build on the generative adversarial networks (GANs) framework to address the problem of estimating the effect of continuous-valued interventions.
Our model, SCIGAN, is flexible and capable of simultaneously estimating counterfactual outcomes for several different continuous interventions.
To address the challenges presented by shifting to continuous interventions, we propose a novel architecture for our discriminator.
arXiv Detail & Related papers (2020-02-27T18:46:21Z)
- A Survey on Causal Inference [64.45536158710014]
Causal inference is a critical research topic across many domains, such as statistics, computer science, education, public policy and economics.
Various causal effect estimation methods for observational data have emerged.
arXiv Detail & Related papers (2020-02-05T21:35:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.