Greedy equivalence search for nonparametric graphical models
- URL: http://arxiv.org/abs/2406.17228v1
- Date: Tue, 25 Jun 2024 02:31:32 GMT
- Title: Greedy equivalence search for nonparametric graphical models
- Authors: Bryon Aragam
- Abstract summary: GES is known to consistently estimate the structure of directed acyclic graph (DAG) models.
A general theory that covers general nonparametric DAG models, however, is missing.
Here, we establish the consistency of greedy equivalence search for general families of DAG models that satisfy smoothness conditions on the Markov factorization.
- Score: 13.153623397411605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the hallmark achievements of the theory of graphical models and Bayesian model selection is the celebrated greedy equivalence search (GES) algorithm due to Chickering and Meek. GES is known to consistently estimate the structure of directed acyclic graph (DAG) models in various special cases including Gaussian and discrete models, which are in particular curved exponential families. A general theory that covers general nonparametric DAG models, however, is missing. Here, we establish the consistency of greedy equivalence search for general families of DAG models that satisfy smoothness conditions on the Markov factorization, and hence may not be curved exponential families, or even parametric. The proof leverages recent advances in nonparametric Bayes to construct a test for comparing misspecified DAG models that avoids arguments based on the Laplace approximation. Nonetheless, when the Laplace approximation is valid and a consistent scoring function exists, we recover the classical result. As a result, we obtain a general consistency theorem for GES applied to general DAG models.
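To make the search procedure concrete, here is a minimal sketch of score-based greedy structure search in the spirit of GES, using a BIC score for linear-Gaussian data. A faithful GES implementation searches over equivalence classes (CPDAGs) with insert and delete operators; this toy version greedily adds and then deletes single edges in DAG space, and every name here is illustrative rather than taken from any library.

```python
import numpy as np

def node_bic(X, j, parents):
    """BIC contribution of node j regressed on its parents (linear-Gaussian)."""
    n = X.shape[0]
    Z = np.column_stack([np.ones(n)] + [X[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    resid = X[:, j] - Z @ beta
    return n * np.log(max(resid @ resid / n, 1e-12)) + Z.shape[1] * np.log(n)

def bic(X, G):
    """Decomposable score: sum of per-node terms; lower is better."""
    return sum(node_bic(X, j, list(np.flatnonzero(G[:, j])))
               for j in range(G.shape[0]))

def is_acyclic(G):
    """Kahn's algorithm; G[i, j] = 1 encodes the edge i -> j."""
    G, indeg = G.copy(), G.sum(axis=0)
    stack, seen = list(np.flatnonzero(indeg == 0)), 0
    while stack:
        i = stack.pop()
        seen += 1
        for j in np.flatnonzero(G[i]):
            G[i, j] = 0
            indeg[j] -= 1
            if indeg[j] == 0:
                stack.append(j)
    return seen == G.shape[0]

def greedy_search(X):
    """Forward phase adds single edges, backward phase deletes them."""
    d = X.shape[1]
    G = np.zeros((d, d), dtype=int)
    best = bic(X, G)
    for add_phase in (True, False):
        improved = True
        while improved:
            improved = False
            for i in range(d):
                for j in range(d):
                    if i == j or bool(G[i, j]) == add_phase:
                        continue
                    G[i, j] = int(add_phase)          # tentative change
                    if is_acyclic(G):
                        s = bic(X, G)
                        if s < best - 1e-9:
                            best, improved = s, True
                            continue                  # keep the change
                    G[i, j] = int(not add_phase)      # revert
    return G, best

# Toy usage: data from the chain x0 -> x1 -> x2.
rng = np.random.default_rng(0)
n = 500
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = -0.5 * x1 + rng.normal(size=n)
G_hat, score = greedy_search(np.column_stack([x0, x1, x2]))
print(G_hat, score)
```

On data like this, the search typically returns a member of the chain's Markov equivalence class; the consistency results in the paper concern exactly when such score-based greedy search recovers the true equivalence class in the large-sample limit.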
Related papers
- Generation is better than Modification: Combating High Class Homophily Variance in Graph Anomaly Detection [51.11833609431406]
In graph anomaly detection, the differences in homophily distributions across classes are significantly greater than in ordinary homophilic and heterophilic graphs.
We introduce a new metric called Class Homophily Variance, which quantitatively describes this phenomenon.
To mitigate its impact, we propose a novel GNN model named Homophily Edge Generation Graph Neural Network (HedGe).
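As an aside, the metric can be sketched as follows, under the assumption (not verified against the paper) that per-node homophily means the fraction of same-label neighbors:

```python
import numpy as np

def class_homophily_variance(adj, labels):
    """Per-node homophily = fraction of same-label neighbors; average it
    within each class, then take the variance across class means. One
    plausible reading of the metric; see the paper for the exact definition."""
    homs = np.full(len(labels), np.nan)
    for i in range(len(labels)):
        nbrs = np.flatnonzero(adj[i])
        if len(nbrs):
            homs[i] = (labels[nbrs] == labels[i]).mean()
    class_means = [np.nanmean(homs[labels == c]) for c in np.unique(labels)]
    return np.var(class_means)
```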
arXiv Detail & Related papers (2024-03-15T14:26:53Z)
- Inconsistency of cross-validation for structure learning in Gaussian graphical models [20.332261273013913]
Cross-validation to discern the structure of a Gaussian graphical model is a challenging endeavor.
We provide finite-sample bounds on the probability that the Lasso estimator for the neighborhood of a node misidentifies the neighborhood.
We conduct an empirical investigation of this inconsistency by comparing our results with other commonly used information criteria.
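For context, here is a minimal sketch of the Lasso neighborhood-selection estimator discussed above, in the style of Meinshausen and Buhlmann, using scikit-learn's cross-validated penalty choice, which is exactly the tuning step whose consistency the paper questions:

```python
import numpy as np
from sklearn.linear_model import LassoCV

def lasso_neighborhoods(X, tol=1e-8):
    """Estimate each node's neighborhood by regressing it on all others.

    X: (n, d) samples from a multivariate Gaussian.
    Returns a symmetrized boolean adjacency matrix ("OR" rule).
    """
    n, d = X.shape
    A = np.zeros((d, d), dtype=bool)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        # Cross-validated penalty: the step the paper argues can be
        # inconsistent for structure recovery.
        fit = LassoCV(cv=5).fit(X[:, others], X[:, j])
        for k, coef in zip(others, fit.coef_):
            if abs(coef) > tol:
                A[j, k] = True
    return A | A.T  # keep an edge if either regression selects it

# Toy usage on a known chain-structured precision matrix.
rng = np.random.default_rng(0)
prec = np.array([[2.0, 0.6, 0.0], [0.6, 2.0, 0.6], [0.0, 0.6, 2.0]])
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(prec), size=500)
print(lasso_neighborhoods(X).astype(int))
```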
arXiv Detail & Related papers (2023-12-28T14:47:28Z)
- BayesDAG: Gradient-Based Posterior Inference for Causal Discovery [30.027520859604955]
We introduce a scalable causal discovery framework based on a combination of Markov Chain Monte Carlo and Variational Inference.
Our approach directly samples DAGs from the posterior without requiring any DAG regularization.
We derive a novel equivalence to the permutation-based DAG learning, which opens up possibilities of using any relaxed estimator defined over permutations.
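A toy illustration of the permutation-based view (not the BayesDAG algorithm itself): given a topological order, per-node parent selection determines the best DAG consistent with it, so posterior inference can run over orders instead of DAGs. The sketch below uses a crude Metropolis-Hastings sampler over adjacent transpositions with a BIC-based score; every name is illustrative.

```python
import numpy as np

def node_bic(X, j, parents):
    """BIC of node j given a candidate parent set (linear-Gaussian)."""
    n = X.shape[0]
    Z = np.column_stack([np.ones(n)] + [X[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
    r = X[:, j] - Z @ beta
    return n * np.log(max(r @ r / n, 1e-12)) + Z.shape[1] * np.log(n)

def order_score(X, order):
    """Greedy parent selection among each node's predecessors in the order."""
    total = 0.0
    for pos, j in enumerate(order):
        preds, parents = list(order[:pos]), []
        best = node_bic(X, j, parents)
        while preds:
            scores = {p: node_bic(X, j, parents + [p]) for p in preds}
            p_best = min(scores, key=scores.get)
            if scores[p_best] >= best - 1e-9:
                break
            best = scores[p_best]
            parents.append(p_best)
            preds.remove(p_best)
        total += best
    return total

def mh_over_orders(X, n_steps=300, seed=0):
    """Metropolis-Hastings over orders; proposals swap adjacent elements."""
    rng = np.random.default_rng(seed)
    order = list(rng.permutation(X.shape[1]))
    score = order_score(X, order)
    samples = []
    for _ in range(n_steps):
        i = int(rng.integers(len(order) - 1))
        prop = order.copy()
        prop[i], prop[i + 1] = prop[i + 1], prop[i]
        s = order_score(X, prop)
        if rng.random() < np.exp(min(0.0, (score - s) / 2)):  # BIC/2 as energy
            order, score = prop, s
        samples.append(tuple(order))
    return samples
```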
arXiv Detail & Related papers (2023-07-26T02:34:13Z)
- From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent [50.4531316289086]
Stochastic Gradient Descent (SGD) has been the method of choice for learning large-scale non-convex models.
We provide general conditions under which SGD converges, assuming that gradient flow (GF) on the population loss converges.
We provide a unified analysis for GD/SGD not only in classical settings like convex losses, but also for more complex problems such as phase retrieval and matrix square root.
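The reduction can be illustrated on a toy least-squares problem, where gradient flow on the population loss is approximated by tiny full-batch steps while SGD follows per-sample gradients. A minimal sketch, assuming standard-normal inputs so the population gradient has a closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
w_star = np.array([1.0, -2.0])

# Gradient flow on the population loss E[(x @ w - y)^2] / 2, discretized with
# a tiny step: for standard-normal x, the population gradient is w - w_star.
w_gf = np.zeros(2)
for _ in range(20000):
    w_gf -= 1e-3 * (w_gf - w_star)

# SGD on per-sample losses drawn from the same model.
w_sgd = np.zeros(2)
X = rng.normal(size=(20000, 2))
y = X @ w_star + 0.1 * rng.normal(size=20000)
for x_i, y_i in zip(X, y):
    w_sgd -= 1e-3 * (x_i @ w_sgd - y_i) * x_i

print("gradient flow:", w_gf)   # converges to w_star
print("SGD:", w_sgd)            # close to w_star, up to gradient noise
```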
arXiv Detail & Related papers (2022-10-13T03:55:04Z)
- Riemannian Score-Based Generative Modeling [56.20669989459281]
Score-based generative models (SGMs) have demonstrated remarkable empirical performance; this work extends them beyond the Euclidean setting.
Current SGMs make the underlying assumption that the data is supported on a Euclidean manifold with flat geometry.
This prevents the use of these models for applications in robotics, geoscience or protein modeling.
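For background, SGMs sample by following a learned score with Langevin dynamics; the update below is the flat Euclidean version, which is exactly what a Riemannian variant must generalize (for instance, by taking steps intrinsically on the manifold). A minimal sketch with a known Gaussian score standing in for a learned one:

```python
import numpy as np

def gaussian_score(x, mu, sigma2):
    """Score of N(mu, sigma2 * I): grad_x log p(x) = -(x - mu) / sigma2."""
    return -(x - mu) / sigma2

def langevin_sample(score, x0, step=1e-2, n_steps=1000, seed=0):
    """Unadjusted Langevin: x <- x + step*score(x) + sqrt(2*step)*noise."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(n_steps):
        x += step * score(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

mu = np.array([1.0, -1.0])
xs = np.stack([langevin_sample(lambda x: gaussian_score(x, mu, 0.5),
                               np.zeros(2), seed=s) for s in range(200)])
print(xs.mean(axis=0))  # approximately mu
```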
arXiv Detail & Related papers (2022-02-06T11:57:39Z)
- Sequential Learning of the Topological Ordering for the Linear Non-Gaussian Acyclic Model with Parametric Noise [6.866717993664787]
We develop a novel sequential approach to estimate the causal ordering of a DAG.
We provide extensive numerical evidence to demonstrate that our procedure is scalable to cases with possibly thousands of nodes.
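The paper's algorithm is specific to the linear non-Gaussian setting, but the sequential idea can be illustrated in the related equal-variance linear-Gaussian SEM, where the next node in the causal order is the one with smallest residual variance given the nodes ordered so far. A hedged sketch (not the authors' procedure):

```python
import numpy as np

def sequential_order(X):
    """Sequentially pick the node whose residual variance, after regressing
    on already-ordered nodes, is smallest (equal-variance linear SEM)."""
    n, d = X.shape
    order, remaining = [], list(range(d))
    while remaining:
        best_j, best_var = None, np.inf
        for j in remaining:
            Z = np.column_stack([np.ones(n)] + [X[:, k] for k in order])
            beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
            var = np.var(X[:, j] - Z @ beta)
            if var < best_var:
                best_j, best_var = j, var
        order.append(best_j)
        remaining.remove(best_j)
    return order

# Toy usage: chain x0 -> x1 -> x2 with unit noise variances.
rng = np.random.default_rng(0)
n = 2000
x0 = rng.normal(size=n)
x1 = 1.5 * x0 + rng.normal(size=n)
x2 = -1.0 * x1 + rng.normal(size=n)
print(sequential_order(np.column_stack([x0, x1, x2])))  # expect [0, 1, 2]
```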
arXiv Detail & Related papers (2022-02-03T18:15:48Z)
- BCD Nets: Scalable Variational Approaches for Bayesian Causal Discovery [97.79015388276483]
A structural equation model (SEM) is an effective framework to reason over causal relationships represented via a directed acyclic graph (DAG).
Recent advances enabled effective maximum-likelihood point estimation of DAGs from observational data.
We propose BCD Nets, a variational framework for estimating a distribution over DAGs characterizing a linear-Gaussian SEM.
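One standard way to put a distribution over DAGs, used in this line of work, is to decompose the weighted adjacency matrix into a permutation and a strictly lower-triangular matrix, which is acyclic by construction. A minimal sketch of that decomposition (illustrative, not the BCD Nets model itself):

```python
import numpy as np

def dag_from_permutation(perm, L):
    """Any strictly lower-triangular weight matrix, conjugated by a
    permutation, yields the weighted adjacency matrix of a DAG."""
    d = len(perm)
    P = np.eye(d)[perm]            # row-permutation matrix
    assert np.allclose(np.triu(L), 0), "L must be strictly lower triangular"
    return P.T @ L @ P             # W[i, j] != 0 means edge i -> j

rng = np.random.default_rng(0)
d = 4
L = np.tril(rng.normal(size=(d, d)), k=-1)   # strictly lower triangular
W = dag_from_permutation(rng.permutation(d), L)
print(W)
```

Because the result is a permutation-conjugate of a nilpotent matrix, acyclicity never has to be enforced with a regularizer, which matches the summary above.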
arXiv Detail & Related papers (2021-12-06T03:35:21Z)
- On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD setting.
We observe the double descent phenomenon both theoretically and empirically.
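The peak at the interpolation threshold can be reproduced in a few lines. The sketch below uses the minimum-norm least-squares solution on random ReLU features rather than running SGD explicitly (for linear regression, gradient methods started from zero converge to that solution in the overparameterized regime); the test error typically spikes when the number of features matches the number of training points:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20
w = rng.normal(size=d)
Xtr = rng.normal(size=(n_train, d))
Xte = rng.normal(size=(n_test, d))
ytr = Xtr @ w + 0.5 * rng.normal(size=n_train)
yte = Xte @ w + 0.5 * rng.normal(size=n_test)

def rf_test_error(n_features):
    """Min-norm least squares on random ReLU features of the given width."""
    V = rng.normal(size=(d, n_features)) / np.sqrt(d)
    F_tr = np.maximum(Xtr @ V, 0)
    F_te = np.maximum(Xte @ V, 0)
    theta = np.linalg.pinv(F_tr) @ ytr   # minimum-norm interpolator
    return np.mean((F_te @ theta - yte) ** 2)

for p in [10, 50, 90, 100, 110, 200, 800]:
    print(p, rf_test_error(p))  # error typically peaks near p == n_train
```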
arXiv Detail & Related papers (2021-10-13T17:47:39Z)
- T-LoHo: A Bayesian Regularization Model for Structured Sparsity and Smoothness on Graphs [0.0]
In graph-structured data, nonzero effects tend to cluster, exhibiting both structured sparsity and smoothness.
We propose a new prior for high dimensional parameters with graphical relations.
We use it to detect structured sparsity and smoothness simultaneously.
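A crude frequentist analogue of combining structured sparsity with smoothness over a graph (a graph fused-lasso-style penalty, not the T-LoHo prior) can be sketched as follows:

```python
import numpy as np
from scipy.optimize import minimize

# Chain graph on 5 nodes; the true signal is piecewise constant and sparse.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
beta_true = np.array([0.0, 0.0, 2.0, 2.0, 2.0])
rng = np.random.default_rng(0)
y = beta_true + 0.3 * rng.normal(size=5)

def objective(beta, lam=0.5, gam=0.5):
    fit = 0.5 * np.sum((y - beta) ** 2)
    sparsity = lam * np.sum(np.abs(beta))                         # zeros
    smooth = gam * sum(abs(beta[i] - beta[j]) for i, j in edges)  # clusters
    return fit + sparsity + smooth

beta_hat = minimize(objective, np.zeros(5), method="Nelder-Mead").x
print(np.round(beta_hat, 2))  # roughly piecewise constant with zeros
```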
arXiv Detail & Related papers (2021-07-06T10:10:03Z) - Understanding Overparameterization in Generative Adversarial Networks [56.57403335510056]
Training Generative Adversarial Networks (GANs) requires solving nonconcave min-max optimization problems.
Recent theory has highlighted the importance of understanding when gradient descent ascent (GDA) reaches globally optimal solutions.
We show that in an overparameterized GAN with a one-hidden-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying nonconcave min-max problem.
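Gradient descent ascent simultaneously descends in the minimizing variable and ascends in the maximizing one; a minimal sketch on a toy convex-concave saddle problem (illustrating the optimization dynamic only, not a GAN):

```python
import numpy as np

# Gradient descent ascent on f(x, y) = x*y + 0.1*x**2 - 0.1*y**2, a toy
# strongly convex-concave min-max problem with global saddle point (0, 0).
x, y, eta = 1.0, 1.0, 0.05
for _ in range(2000):
    gx = y + 0.2 * x                     # df/dx
    gy = x - 0.2 * y                     # df/dy
    x, y = x - eta * gx, y + eta * gy    # descend in x, ascend in y
print(x, y)  # both approach 0, the global saddle point
```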
arXiv Detail & Related papers (2021-04-12T16:23:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.