Accelerating Recursive Partition-Based Causal Structure Learning
- URL: http://arxiv.org/abs/2102.11545v1
- Date: Tue, 23 Feb 2021 08:28:55 GMT
- Title: Accelerating Recursive Partition-Based Causal Structure Learning
- Authors: Md. Musfiqur Rahman, Ayman Rasheed, Md. Mosaddek Khan, Mohammad Ali
Javidian, Pooyan Jamshidi and Md. Mamun-Or-Rashid
- Abstract summary: Recursive causal discovery algorithms provide good results by using Conditional Independence (CI) tests in smaller sub-problems.
This paper proposes a generic causal structure refinement strategy that can locate the undesired relations with a small number of CI-tests.
We then empirically evaluate its performance against the state-of-the-art algorithms in terms of solution quality and completion time in synthetic and real datasets.
- Score: 4.357523892518871
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Causal structure discovery from observational data is fundamental to the
causal understanding of autonomous systems such as medical decision support
systems, advertising campaigns and self-driving cars. This is essential to
solve well-known causal decision making and prediction problems associated with
those real-world applications. Recently, recursive causal discovery algorithms
have gained particular attention among the research community due to their
ability to provide good results by using Conditional Independence (CI) tests in
smaller sub-problems. However, each such algorithm needs a refinement
function to remove undesired causal relations from the discovered graphs.
Notably, with the increase of the problem size, the computation cost (i.e., the
number of CI-tests) of the refinement function makes an algorithm expensive to
deploy in practice. This paper proposes a generic causal structure refinement
strategy that can locate the undesired relations with a small number of
CI-tests, thus speeding up the algorithm for large and complex problems. We
theoretically prove the correctness of our algorithm. We then empirically
evaluate its performance against the state-of-the-art algorithms in terms of
solution quality and completion time in synthetic and real datasets.
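The refinement step the abstract describes can be sketched generically: for each edge of a discovered skeleton, test whether its endpoints are conditionally independent given small conditioning sets drawn from their neighbours, and delete edges that fail. The sketch below is illustrative only, not the paper's implementation; the dict-based graph and the `ci_test` oracle are assumptions introduced here. Restricting conditioning sets to the neighbours of one endpoint is what keeps the number of CI tests small.

```python
from itertools import combinations

def refine(adjacency, ci_test, max_cond_size=2):
    """Remove edges whose endpoints test as conditionally independent.

    adjacency : dict mapping node -> set of neighbouring nodes (undirected)
    ci_test   : callable (x, y, cond_set) -> True if x is independent
                of y given cond_set (e.g. a Fisher-z or G^2 test)
    """
    removed = set()
    for x in sorted(adjacency):
        for y in sorted(adjacency[x]):
            if (x, y) in removed or (y, x) in removed:
                continue
            # Condition only on neighbours of x (excluding y): this bounds
            # the number of CI tests instead of testing all variable subsets.
            neighbours = adjacency[x] - {y}
            for size in range(min(max_cond_size, len(neighbours)) + 1):
                independent = False
                for cond in combinations(sorted(neighbours), size):
                    if ci_test(x, y, set(cond)):
                        independent = True
                        break
                if independent:
                    removed.add((x, y))
                    break
    for x, y in removed:
        adjacency[x].discard(y)
        adjacency[y].discard(x)
    return adjacency
```

For example, on a chain a-b-c with a spurious a-c edge, an oracle declaring a and c independent given {b} leaves only the chain edges in place.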
Related papers
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- Towards model-free RL algorithms that scale well with unstructured data [1.3799571823220778]
We provide an algorithm that constructs reward-relevant general value function questions to find and exploit predictive structure directly from the experience stream.
The proposed algorithm reliably outperforms a conventional deep RL algorithm on these scaling problems.
arXiv Detail & Related papers (2023-11-03T20:03:54Z)
- Subset verification and search algorithms for causal DAGs [13.108039226223793]
We study the problem of identifying the smallest set of interventions required to learn the causal relationships between a subset of edges (target edges).
We prove several interesting structural properties of interventional causal graphs that we believe have applications beyond the subset verification/search problems studied here.
arXiv Detail & Related papers (2023-01-09T06:25:44Z)
- Learning to Bound Counterfactual Inference in Structural Causal Models from Observational and Randomised Data [64.96984404868411]
We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm.
The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources.
It delivers interval approximations to counterfactual results, which collapse to points in the identifiable case.
arXiv Detail & Related papers (2022-12-06T12:42:11Z)
- A Meta-Reinforcement Learning Algorithm for Causal Discovery [3.4806267677524896]
Causal structures can enable models to go beyond pure correlation-based inference.
Finding causal structures from data poses a significant challenge both in computational effort and accuracy.
We develop a meta-reinforcement learning algorithm that performs causal discovery by learning to perform interventions.
arXiv Detail & Related papers (2022-07-18T09:26:07Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)
- MIRACLE: Causally-Aware Imputation via Learning Missing Data Mechanisms [82.90843777097606]
We propose a causally-aware imputation algorithm (MIRACLE) for missing data.
MIRACLE iteratively refines the imputation of a baseline by simultaneously modeling the missingness generating mechanism.
We conduct extensive experiments on synthetic and a variety of publicly available datasets to show that MIRACLE is able to consistently improve imputation.
arXiv Detail & Related papers (2021-11-04T22:38:18Z)
- Improving Efficiency and Accuracy of Causal Discovery Using a Hierarchical Wrapper [7.570246812206772]
Causal discovery from observational data is an important tool in many branches of science.
In the large sample limit, sound and complete causal discovery algorithms have been previously introduced.
However, only finite training data is available, which limits the power of statistical tests used by these algorithms.
arXiv Detail & Related papers (2021-07-11T09:24:49Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.