Recovering Causal Structures from Low-Order Conditional Independencies
- URL: http://arxiv.org/abs/2010.02675v1
- Date: Tue, 6 Oct 2020 12:47:20 GMT
- Authors: Marcel Wienöbst and Maciej Liśkiewicz
- Abstract summary: We propose an algorithm that, for a given set of conditional independencies of order less than or equal to $k$, where $k$ is a small fixed number, computes a faithful graphical representation of the set.
Our results complete and generalize the previous work on learning from pairwise marginal independencies.
- Score: 6.891238879512672
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the common obstacles to learning causal models from data is that
high-order conditional independence (CI) relationships between random variables
are difficult to estimate. Since CI tests with conditioning sets of low order
can be performed accurately even for a small number of observations, a
reasonable approach to determining causal structures is to rely solely on the
low-order CIs. Recent research has confirmed that, e.g., in the case of sparse
true causal models, structures learned even from zero- and first-order
conditional independencies yield good approximations of the models. However, a
challenging task here is to provide methods that faithfully explain a given set
of low-order CIs. In this paper, we propose an algorithm which, for a given set
of conditional independencies of order less than or equal to $k$, where $k$ is a
small fixed number, computes a faithful graphical representation of the given
set. Our results complete and generalize the previous work on learning from
pairwise marginal independencies. Moreover, they make it possible to improve
upon the 0-1 graph model, which is heavily used, e.g., in the estimation of
genome networks.
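To illustrate the general idea of structure learning from low-order CIs only (a generic PC-style sketch, not the algorithm proposed in the paper), one can build an undirected skeleton using conditioning sets of size at most $k$; the oracle `ci` below is a hypothetical stand-in for low-order statistical CI tests on data:

```python
from itertools import combinations

def skeleton_from_low_order_cis(nodes, ci, k):
    """Build an undirected skeleton using only CI tests of order <= k.

    `ci(x, y, S)` returns True iff x and y are conditionally
    independent given the set S (an oracle, e.g. wrapping low-order
    statistical tests)."""
    # Start from the complete graph.
    adj = {v: set(nodes) - {v} for v in nodes}
    for order in range(k + 1):  # conditioning sets of size 0..k
        for x in nodes:
            for y in list(adj[x]):
                others = adj[x] - {y}
                if len(others) < order:
                    continue
                for S in combinations(sorted(others), order):
                    if ci(x, y, set(S)):
                        # Found a separating set: delete the edge x-y.
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
    return adj

# Toy example: chain a - b - c, where a is independent of c given {b}.
cis = {frozenset({"a", "c"}): {frozenset({"b"})}}
def ci(x, y, S):
    return frozenset(S) in cis.get(frozenset({x, y}), set())

g = skeleton_from_low_order_cis(["a", "b", "c"], ci, k=1)
```

With an oracle that is correct only up to order $k$, distinct causal models can induce the same skeleton, which is why producing a faithful representation of the given CI set is the nontrivial part addressed by the paper.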
Related papers
- Identification and Estimation of Simultaneous Equation Models Using Higher-Order Cumulant Restrictions [5.882065571122133]
Identifying structural parameters in linear simultaneous-equation models is a longstanding challenge. We show that neither zero covariance proofs nor whitening is necessary to identify structural parameters. Our framework provides a transparent overidentification test.
arXiv Detail & Related papers (2025-01-12T11:27:39Z) - Instability and Local Minima in GAN Training with Kernel Discriminators [20.362912591032636]
Generative Adversarial Networks (GANs) are a widely-used tool for generative modeling of complex data.
Despite their empirical success, the training of GANs is not fully understood due to the min-max optimization of the generator and discriminator.
This paper analyzes these joint dynamics when the true samples, as well as the generated samples, are discrete, finite sets, and the discriminator is kernel-based.
arXiv Detail & Related papers (2022-08-21T18:03:06Z) - A Simple Unified Approach to Testing High-Dimensional Conditional
Independences for Categorical and Ordinal Data [0.26651200086513094]
Conditional independence (CI) tests underlie many approaches to model testing and structure learning in causal inference.
Most existing CI tests for categorical and ordinal data stratify the sample by the conditioning variables, perform simple independence tests in each stratum, and combine the results.
Here we propose a simple unified CI test for ordinal and categorical data that maintains reasonable calibration and power in high dimensions.
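The stratify-and-combine baseline that this abstract contrasts against can be sketched as follows (a generic illustration, not the paper's proposed test): split the sample by the values of the conditioning variables, compute a chi-square independence statistic in each stratum, and pool statistics and degrees of freedom:

```python
from collections import Counter

def stratified_chi2(samples, x, y, cond):
    """Pooled chi-square statistic for testing x independent of y
    given `cond` on categorical data. `samples` is a list of dicts
    mapping variable name -> observed value."""
    strata = {}
    for row in samples:
        strata.setdefault(tuple(row[c] for c in cond), []).append(row)
    total_stat, total_dof = 0.0, 0
    for rows in strata.values():
        xs = sorted({r[x] for r in rows})
        ys = sorted({r[y] for r in rows})
        if len(xs) < 2 or len(ys) < 2:
            continue  # degenerate stratum: no evidence either way
        n = len(rows)
        cx = Counter(r[x] for r in rows)
        cy = Counter(r[y] for r in rows)
        cxy = Counter((r[x], r[y]) for r in rows)
        for a in xs:
            for b in ys:
                expected = cx[a] * cy[b] / n
                total_stat += (cxy[(a, b)] - expected) ** 2 / expected
        total_dof += (len(xs) - 1) * (len(ys) - 1)
    return total_stat, total_dof

# Perfectly associated x and y within a single stratum of z.
data = [{"x": 0, "y": 0, "z": 0}, {"x": 1, "y": 1, "z": 0},
        {"x": 0, "y": 0, "z": 0}, {"x": 1, "y": 1, "z": 0}]
stat, dof = stratified_chi2(data, "x", "y", ["z"])
```

The weakness this abstract points at is visible here: with many conditioning variables the strata become tiny, so most of them are degenerate or badly calibrated, which motivates a unified high-dimensional test.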
arXiv Detail & Related papers (2022-06-09T08:56:12Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - Multi-task Learning of Order-Consistent Causal Graphs [59.9575145128345]
We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs).
In a multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models.
We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order.
arXiv Detail & Related papers (2021-11-03T22:10:18Z) - Outlier-Robust Learning of Ising Models Under Dobrushin's Condition [57.89518300699042]
We study the problem of learning Ising models satisfying Dobrushin's condition in the outlier-robust setting where a constant fraction of the samples are adversarially corrupted.
Our main result is to provide the first computationally efficient robust learning algorithm for this problem with near-optimal error guarantees.
arXiv Detail & Related papers (2021-02-03T18:00:57Z) - Sample-efficient L0-L2 constrained structure learning of sparse Ising
models [3.056751497358646]
We consider the problem of learning the underlying graph of a sparse Ising model with $p$ nodes from $n$ i.i.d. samples.
We leverage the cardinality-constraining L0 norm, which is known to properly induce sparsity, and further combine it with an L2 norm to better model the non-zero coefficients.
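One generic way to realize such an L0-L2 constrained estimator (a sketch under simplifying assumptions, not the paper's method) is iterative hard thresholding on a per-node logistic neighborhood regression: take a gradient step on the L2-penalized loss, then project onto the set of s-sparse weight vectors by keeping the s largest-magnitude entries:

```python
import math

def iht_logistic(X, y, s, lam2=0.01, steps=200, lr=0.1):
    """L0-constrained, L2-penalized logistic regression fitted by
    iterative hard thresholding: ridge-penalized gradient step,
    followed by projection onto the s-sparse set."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(steps):
        # Gradient of the average logistic loss plus the L2 term.
        grad = [lam2 * w[j] for j in range(p)]
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi
            for j in range(p):
                grad[j] += err * xi[j] / n
        w = [w[j] - lr * grad[j] for j in range(p)]
        # L0 projection: zero all but the s largest-magnitude entries.
        keep = set(sorted(range(p), key=lambda j: -abs(w[j]))[:s])
        w = [w[j] if j in keep else 0.0 for j in range(p)]
    return w

# Toy neighborhood regression: only the first feature is relevant.
X = [[1, 0, 0], [1, 1, 0], [-1, 0, 1], [-1, 1, 1]]
y = [1, 1, 0, 0]
w = iht_logistic(X, y, s=1)
```

Repeating such a sparse regression for every node and symmetrizing the recovered supports yields a graph estimate, which is the usual neighborhood-selection route for Ising models.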
arXiv Detail & Related papers (2020-12-03T07:52:20Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.