Detection of Unobserved Common Causes based on NML Code in Discrete,
Mixed, and Continuous Variables
- URL: http://arxiv.org/abs/2403.06499v1
- Date: Mon, 11 Mar 2024 08:11:52 GMT
- Title: Detection of Unobserved Common Causes based on NML Code in Discrete,
Mixed, and Continuous Variables
- Authors: Masatoshi Kobayashi, Kohei Miyaguchi, Shin Matsushima
- Abstract summary: We categorize all possible causal relationships between two random variables into four categories: direct causation in either direction, independence, and confounding by latent common causes.
Extensive experiments on both synthetic and real-world data show that CLOUD infers causal relationships more effectively than existing methods.
- Score: 1.5039745292757667
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Causal discovery in the presence of unobserved common causes from
observational data only is a crucial but challenging problem. We categorize all
possible causal relationships between two random variables into four
categories and aim to identify one from observed data: two cases in which
direct causality exists in either direction, a case in which the variables are
independent, and a case in which the variables are confounded by latent
confounders. Although existing methods have been proposed to tackle this
problem, they require the unobserved variables to satisfy assumptions on the
form of their equation models. In our previous study (Kobayashi et al., 2022),
we proposed the first causal discovery method free of such assumptions for
discrete data and named it CLOUD. Using the Normalized Maximum Likelihood
(NML) code, CLOUD selects the model that yields the minimum codelength of the
observed data from a set of model candidates. This paper extends CLOUD to
various data types: discrete, mixed, and continuous. We not only perform a
theoretical analysis showing the consistency of CLOUD as a model selection
procedure, but also demonstrate through extensive experiments on both
synthetic and real-world data that CLOUD is more effective than existing
methods at inferring causal relationships.
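The abstract's core idea, selecting among candidate causal structures the one that assigns the observed data the shortest NML codelength, can be sketched for binary variables. This is an illustrative simplification and not CLOUD's actual codelength functions: it compares only an "independent" model against an "X causes Y" model, uses the exact binary multinomial parametric complexity computed by brute-force summation, and all function names are my own.

```python
import math

def multinomial_complexity(n):
    """Exact parametric complexity C(n) of the binary multinomial model,
    computed by brute-force summation (feasible for small n)."""
    if n == 0:
        return 1.0
    total = 0.0
    for h in range(n + 1):
        # Maximum-likelihood probability of observing h ones out of n,
        # weighted by the number of such sequences.  (0.0 ** 0 == 1.0.)
        total += math.comb(n, h) * (h / n) ** h * ((n - h) / n) ** (n - h)
    return total

def nml_codelength(counts):
    """NML codelength in bits of a binary sample with counts [n0, n1]:
    maximized log-likelihood plus the log parametric complexity."""
    n = sum(counts)
    if n == 0:
        return 0.0
    ml = sum(-c * math.log2(c / n) for c in counts if c > 0)
    return ml + math.log2(multinomial_complexity(n))

def codelength_indep(xs, ys):
    """Model 'X and Y independent': encode each variable marginally."""
    return (nml_codelength([xs.count(0), xs.count(1)])
            + nml_codelength([ys.count(0), ys.count(1)]))

def codelength_x_to_y(xs, ys):
    """Model 'X causes Y': encode X marginally, then Y given each X value."""
    total = nml_codelength([xs.count(0), xs.count(1)])
    for v in (0, 1):
        cond = [y for x, y in zip(xs, ys) if x == v]
        total += nml_codelength([cond.count(0), cond.count(1)])
    return total

# When Y is a deterministic copy of X, the causal model compresses
# the data far better than the independence model.
xs = [0] * 10 + [1] * 10
print(codelength_x_to_y(xs, xs) < codelength_indep(xs, xs))  # True
```

The same comparison flips for data with no dependence between the two variables, since the conditional encodings then buy nothing and the extra parametric complexity of the causal model is wasted.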
Related papers
- Score matching through the roof: linear, nonlinear, and latent variables causal discovery [18.46845413928147]
Causal discovery from observational data holds great promise.
Existing methods rely on strong assumptions about the underlying causal structure.
We propose a flexible algorithm for causal discovery across linear, nonlinear, and latent variable models.
arXiv Detail & Related papers (2024-07-26T14:09:06Z)
- Federated Causal Discovery from Heterogeneous Data [70.31070224690399]
We propose a novel federated causal discovery (FCD) method attempting to accommodate arbitrary causal models and heterogeneous data.
These approaches involve constructing summary statistics as a proxy of the raw data to protect data privacy.
We conduct extensive experiments on synthetic and real datasets to show the efficacy of our method.
arXiv Detail & Related papers (2024-02-20T18:53:53Z)
- Identifiable Latent Polynomial Causal Models Through the Lens of Change [82.14087963690561]
Causal representation learning aims to unveil latent high-level causal representations from observed low-level data.
One of its primary tasks is to provide reliable assurance of identifying these latent causal models, known as identifiability.
arXiv Detail & Related papers (2023-10-24T07:46:10Z)
- Discovering Mixtures of Structural Causal Models from Time Series Data [23.18511951330646]
We propose a general variational inference-based framework called MCD to infer the underlying causal models.
Our approach employs an end-to-end training process that maximizes an evidence-lower bound for the data likelihood.
We demonstrate that our method surpasses state-of-the-art benchmarks in causal discovery tasks.
arXiv Detail & Related papers (2023-10-10T05:13:10Z)
- Nonparametric Identifiability of Causal Representations from Unknown Interventions [63.1354734978244]
We study causal representation learning, the task of inferring latent causal variables and their causal relations from mixtures of the variables.
Our goal is to identify both the ground truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
arXiv Detail & Related papers (2023-06-01T10:51:58Z)
- Distinguishing Cause from Effect on Categorical Data: The Uniform Channel Model [0.0]
Distinguishing cause from effect using observations of a pair of random variables is a core problem in causal discovery.
We propose a criterion to address the cause-effect problem with categorical variables.
We select as the most likely causal direction the one in which the conditional probability mass function is closer to a uniform channel (UC).
arXiv Detail & Related papers (2023-03-14T13:54:11Z)
- Causality-Based Multivariate Time Series Anomaly Detection [63.799474860969156]
We formulate the anomaly detection problem from a causal perspective and view anomalies as instances that do not follow the regular causal mechanism to generate the multivariate data.
We then propose a causality-based anomaly detection approach, which first learns the causal structure from data and then infers whether an instance is an anomaly relative to the local causal mechanism.
We evaluate our approach with both simulated and public datasets as well as a case study on real-world AIOps applications.
arXiv Detail & Related papers (2022-06-30T06:00:13Z)
- MissDAG: Causal Discovery in the Presence of Missing Data with Continuous Additive Noise Models [78.72682320019737]
We develop a general method, which we call MissDAG, to perform causal discovery from data with incomplete observations.
MissDAG maximizes the expected likelihood of the visible part of observations under the expectation-maximization framework.
We demonstrate the flexibility of MissDAG for incorporating various causal discovery algorithms and its efficacy through extensive simulations and real data experiments.
arXiv Detail & Related papers (2022-05-27T09:59:46Z)
- Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z)
- Information-Theoretic Approximation to Causal Models [0.0]
We show that it is possible to solve the problem of inferring the causal direction and causal effect between two random variables from a finite sample.
We embed distributions that originate from samples of X and Y into a higher dimensional probability space.
We show that this information-theoretic approximation to causal models (IACM) can be done by solving a linear optimization problem.
arXiv Detail & Related papers (2020-07-29T18:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.