The Dual PC Algorithm and the Role of Gaussianity for Structure Learning of Bayesian Networks
- URL: http://arxiv.org/abs/2112.09036v6
- Date: Tue, 27 Jun 2023 07:54:07 GMT
- Title: The Dual PC Algorithm and the Role of Gaussianity for Structure Learning of Bayesian Networks
- Authors: Enrico Giudice, Jack Kuipers, Giusi Moffa
- Abstract summary: We show that the dual PC algorithm outperforms the classic PC algorithm in terms of run time and in recovering the underlying network structure.
We also show that the dual PC algorithm applies to Gaussian copula models, and demonstrate its performance in that setting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the graphical structure of Bayesian networks is key to describing
data-generating mechanisms in many complex applications but poses considerable
computational challenges. Observational data can only identify the equivalence
class of the directed acyclic graph underlying a Bayesian network model, and a
variety of methods exist to tackle the problem. Under certain assumptions, the
popular PC algorithm can consistently recover the correct equivalence class by
reverse-engineering the conditional independence (CI) relationships holding in
the variable distribution. The dual PC algorithm is a novel scheme to carry out
the CI tests within the PC algorithm by leveraging the inverse relationship
between covariance and precision matrices. By exploiting block matrix
inversions we can also perform tests on partial correlations of complementary
(or dual) conditioning sets. The multiple CI tests of the dual PC algorithm
proceed by first considering marginal and full-order CI relationships and
progressively moving to central-order ones. Simulation studies show that the
dual PC algorithm outperforms the classic PC algorithm both in terms of run
time and in recovering the underlying network structure, even in the presence
of deviations from Gaussianity. Additionally, we show that the dual PC
algorithm applies to Gaussian copula models, and demonstrate its performance
in that setting.
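The key identity the dual PC algorithm leverages can be sketched in a few lines: for jointly Gaussian variables, the partial correlation of a pair given all remaining variables is read directly off the precision matrix (the inverse covariance), so full-order CI tests need no per-test regression. The snippet below is a minimal numpy illustration of that identity, not the authors' implementation; the function name and the toy collider example are our own.

```python
import numpy as np

def partial_corr_full_order(cov):
    """Partial correlation of each pair given ALL remaining variables,
    read off the precision matrix P = cov^{-1}:
        rho_{ij | rest} = -P_ij / sqrt(P_ii * P_jj)."""
    P = np.linalg.inv(cov)
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Toy collider: X0 and X1 are independent, X2 = X0 + X1 + noise.
rng = np.random.default_rng(0)
n = 100_000
x0, x1 = rng.standard_normal(n), rng.standard_normal(n)
x2 = x0 + x1 + 0.5 * rng.standard_normal(n)
cov = np.cov(np.column_stack([x0, x1, x2]), rowvar=False)

pc = partial_corr_full_order(cov)
marginal = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print(marginal)   # near zero: X0 and X1 are marginally independent
print(pc[0, 1])   # strongly negative: conditioning on the collider X2
                  # induces dependence between X0 and X1
```

A zeroth-order (marginal) CI test thus reads off the covariance matrix, while a full-order test reads off its inverse; the dual PC algorithm's block matrix inversions interpolate between these two extremes to obtain partial correlations for complementary (dual) conditioning sets.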
Related papers
- Divide-and-Conquer Predictive Coding: a structured Bayesian inference algorithm [11.722226132995978]
We introduce a novel predictive coding algorithm for structured generative models, which we call divide-and-conquer predictive coding (DCPC).
DCPC performs maximum-likelihood updates of model parameters without sacrificing biological plausibility.
Empirically, DCPC achieves better numerical performance than competing algorithms and provides accurate inference in a number of problems not previously addressed with predictive coding.
arXiv Detail & Related papers (2024-08-11T17:29:03Z) - Discrete Neural Algorithmic Reasoning [18.497863598167257]
We propose to force neural reasoners to maintain the execution trajectory as a combination of finite predefined states.
Trained with supervision on the algorithm's state transitions, such models are able to perfectly align with the original algorithm.
arXiv Detail & Related papers (2024-02-18T16:03:04Z) - Energy-based learning algorithms for analog computing: a comparative study [2.0937431058291933]
Energy-based learning algorithms have recently gained a surge of interest due to their compatibility with analog hardware.
We compare seven learning algorithms, spanning contrastive learning (CL), equilibrium propagation (EP), and coupled learning (CpL) together with their variants.
We find that negative perturbations are better than positive ones, and highlight the centered variant of EP as the best-performing algorithm.
arXiv Detail & Related papers (2023-12-22T22:49:58Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the explicit computation of the covariance matrices that dominate its cost.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - Detection of Anomalies in Multivariate Time Series Using Ensemble Techniques [3.2422067155309806]
We propose an ensemble technique that combines multiple base models toward the final decision.
A semi-supervised approach using a Logistic Regressor to combine the base models' outputs is also proposed.
The improvement in anomaly detection accuracy reaches 2% for the unsupervised models and at least 10% for the semi-supervised ones.
arXiv Detail & Related papers (2023-08-06T17:51:22Z) - Curvature-Sensitive Predictive Coding with Approximate Laplace Monte Carlo [1.1470070927586016]
Predictive coding (PC) accounts of perception now form one of the dominant computational theories of the brain.
Despite this, they have enjoyed little export to the broader field of machine learning.
This has been due to the poor performance of models trained with PC when evaluated by both sample quality and marginal likelihood.
arXiv Detail & Related papers (2023-03-09T01:29:58Z) - A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z) - A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show that simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z) - Stabilizing Q-learning with Linear Architectures for Provably Efficient
Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z) - A Constraint-Based Algorithm for the Structural Learning of
Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z) - Predictive Coding Approximates Backprop along Arbitrary Computation
Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.