Reconstructing Bayesian Networks on a Quantum Annealer
- URL: http://arxiv.org/abs/2204.03526v1
- Date: Thu, 7 Apr 2022 15:53:05 GMT
- Title: Reconstructing Bayesian Networks on a Quantum Annealer
- Authors: Enrico Zardini, Massimo Rizzoli, Sebastiano Dissegna, Enrico
Blanzieri, Davide Pastorello
- Abstract summary: O'Gorman et al. have proposed an algorithm to encode this task, but they have not provided an experimental evaluation of it.
We present (i) an implementation in Python of O'Gorman's algorithm, (ii) a divide et impera approach that allows addressing BNSL problems of larger sizes.
Results have shown the effectiveness of O'Gorman's formulation for BNSL instances of small sizes, and the superiority of the divide et impera approach over the direct execution of O'Gorman's algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Bayesian networks are widely used probabilistic graphical models, whose
structure is hard to learn starting from the generated data. O'Gorman et al.
have proposed an algorithm to encode this task, i.e., the Bayesian network
structure learning (BNSL), into a form that can be solved through quantum
annealing, but they have not provided an experimental evaluation of it. In this
paper, we present (i) an implementation in Python of O'Gorman's algorithm, (ii)
a divide et impera approach that allows addressing BNSL problems of larger
sizes in order to overcome the limitations imposed by the current
architectures, and (iii) their empirical evaluation. Specifically, several
problems with an increasing number of variables have been used in the
experiments. The results have shown the effectiveness of O'Gorman's formulation
for BNSL instances of small sizes, and the superiority of the divide et impera approach over the direct execution of O'Gorman's algorithm.
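As a rough illustration of the encoding, here is a minimal Python sketch of the general QUBO idea (not O'Gorman's exact Hamiltonian): binary variables select edges, linear biases carry hypothetical local-score gains, and quadratic penalties forbid directed 2-cycles. O'Gorman's full formulation also adds auxiliary order variables to rule out longer cycles, omitted here; all numeric values are made up, and dimod's exact reference solver stands in for the annealer.

```python
# Simplified QUBO sketch of BNSL (illustrative, not O'Gorman's exact
# Hamiltonian). Variable d_i_j = 1 means edge i -> j is in the network.
import itertools
import dimod

n = 3      # toy network over variables 0, 1, 2
P = 10.0   # penalty weight, assumed large enough to enforce constraints

bqm = dimod.BinaryQuadraticModel('BINARY')

# Hypothetical per-edge score gains; in practice these would come from
# local scores (e.g., BDeu) computed on the data.
gain = {(0, 1): -2.0, (1, 2): -1.5, (0, 2): -0.5}

for i, j in itertools.permutations(range(n), 2):
    # Negative linear bias = adding this edge improves the score.
    bqm.add_variable(f'd_{i}_{j}', gain.get((i, j), 0.0))

for i, j in itertools.combinations(range(n), 2):
    # Quadratic penalty: forbid directed 2-cycles (d_i_j and d_j_i both 1).
    # Longer cycles would need O'Gorman's auxiliary order variables.
    bqm.add_interaction(f'd_{i}_{j}', f'd_{j}_{i}', P)

best = dimod.ExactSolver().sample(bqm).first
print([v for v, val in best.sample.items() if val == 1], best.energy)
```

On D-Wave hardware, dimod.ExactSolver() would be replaced by a sampler such as EmbeddingComposite(DWaveSampler()) from dwave-system; the divide et impera approach would instead split the variable set, solve such subproblems independently, and merge the resulting subnetworks.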
Related papers
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) for optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require the generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
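A toy sketch of the block-local training idea described above: fixed random feature maps with closed-form ridge heads stand in for the paper's per-block predictors (the architecture and objectives here are illustrative assumptions, not the CaFo implementation).

```python
# Each block has its own local head trained independently; no gradients
# flow between blocks, and prediction averages per-block label scores.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                            # one-hot label distributions

h, blocks = X, []
for _ in range(3):
    W = rng.normal(size=(h.shape[1], 32)) / np.sqrt(h.shape[1])
    h = np.tanh(h @ W)                      # fixed random block
    # Local head: ridge regression onto the label distributions.
    head = np.linalg.solve(h.T @ h + 1e-2 * np.eye(32), h.T @ Y)
    blocks.append((W, head))

p, hh = 0, X
for W, head in blocks:
    hh = np.tanh(hh @ W)
    p = p + hh @ head                       # average block-wise outputs
print("accuracy:", (np.argmax(p, axis=1) == y).mean())
```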
BigBraveBN: algorithm of structural learning for bayesian networks with a large number of nodes [0.0]
The article presents the BigBraveBN algorithm for learning large Bayesian networks with a high number of nodes (over 100).
The algorithm utilizes the Brave coefficient that measures the mutual occurrence of instances in several groups.
In the experimental part of the article, we compare the performance of BigBraveBN to other existing solutions on multiple data sets, both discrete and continuous.
arXiv Detail & Related papers (2022-08-22T13:43:57Z)
Quantum Approximate Optimization Algorithm for Bayesian network structure learning [1.332091725929965]
In this work, a specific type of variational quantum algorithm, the quantum approximate optimization algorithm, was used to solve the Bayesian network structure learning problem.
Results showed that the quantum approximate optimization algorithm approach offers competitive results with state-of-the-art methods and quantitative resilience to quantum noise.
arXiv Detail & Related papers (2022-03-04T16:11:34Z)
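For intuition, a self-contained depth-1 QAOA state-vector simulation on a toy two-qubit Ising cost; the toy cost function and the grid search over angles are assumptions for illustration, whereas the paper encodes the BNSL objective into the cost Hamiltonian.

```python
# Depth-1 QAOA: |gamma, beta> = e^{-i beta B} e^{-i gamma C} |+>^n,
# with diagonal cost C and transverse-field mixer B = sum_i X_i.
import numpy as np

n, dim = 2, 4
# Spin values z_q in {+1, -1} for each basis state k; toy cost C(z) = z0*z1.
z = np.array([[1 - 2 * ((k >> q) & 1) for q in range(n)] for k in range(dim)])
cost = (z[:, 0] * z[:, 1]).astype(float)

X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def mixer(beta):
    # e^{-i beta X} on every qubit.
    u1 = np.cos(beta) * I2 - 1j * np.sin(beta) * X
    u = u1
    for _ in range(n - 1):
        u = np.kron(u, u1)
    return u

plus = np.full(dim, 1 / np.sqrt(dim), dtype=complex)
best = None
for gamma in np.linspace(0, np.pi, 40):
    for beta in np.linspace(0, np.pi, 40):
        state = mixer(beta) @ (np.exp(-1j * gamma * cost) * plus)
        e = float(np.real(np.vdot(state, cost * state)))
        if best is None or e < best[0]:
            best = (e, gamma, beta)
print("best <C>, gamma, beta:", best)
```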
A Sparse Structure Learning Algorithm for Bayesian Network Identification from Discrete High-Dimensional Data [0.40611352512781856]
This paper addresses the problem of learning a sparse structure Bayesian network from high-dimensional discrete data.
We propose a score function that satisfies the sparsity and the DAG property simultaneously.
Specifically, we use a variance-reduction method in our optimization algorithm so that it works efficiently on high-dimensional data.
arXiv Detail & Related papers (2021-08-21T12:21:01Z)
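As a concrete stand-in for such a score, the sketch below combines a least-squares fit, an L1 sparsity term, and the continuous acyclicity surrogate h(W) = tr(e^{W∘W}) - d popularized by NOTEARS; whether this paper uses this exact surrogate is an assumption, and the weights lam and rho are illustrative.

```python
# Score = data fit + sparsity penalty + acyclicity penalty; h(W) = 0
# iff the weighted adjacency matrix W describes a DAG.
import numpy as np
from scipy.linalg import expm

def score(W, X, lam=0.1, rho=10.0):
    d = W.shape[0]
    h = np.trace(expm(W * W)) - d           # acyclicity violation
    fit = 0.5 * np.mean((X - X @ W) ** 2)   # linear-SEM least squares
    return fit + lam * np.abs(W).sum() + rho * h

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
W = np.array([[0.0, 0.9, 0.0],              # a DAG: 0 -> 1 -> 2
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
print(score(W, X))
```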
Improved Acyclicity Reasoning for Bayesian Network Structure Learning with Constraint Programming [0.0]
Learning the structure of a Bayesian network (BNSL) from discrete data is known to be an NP-hard task.
In this work, we propose a new polynomial-time algorithm for discovering a subset of all possible cluster cuts.
We show that, despite being suboptimal, these cuts improve performance by orders of magnitude.
The resulting solver also compares favourably with GOBNILP, a state-of-the-art solver for the BNSL problem.
arXiv Detail & Related papers (2021-06-23T09:46:11Z)
Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
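In the same spirit, a simplified Fisher-style selection sketch: greedily pick the points whose gradient embeddings most increase log det of the accumulated Fisher matrix. This is a D-optimal-style stand-in, not BAIT's exact forward-backward procedure; greedy_fisher_select and the toy embeddings are hypothetical.

```python
# Greedy selection on Fisher-like embeddings. Adding point g changes
# log det(M) by log(1 + g^T M^{-1} g) via the matrix determinant lemma.
import numpy as np

def greedy_fisher_select(G, k, ridge=1e-3):
    n, d = G.shape
    M = ridge * np.eye(d)                  # regularized Fisher accumulator
    chosen = []
    for _ in range(k):
        Minv = np.linalg.inv(M)
        gains = np.einsum('ij,jk,ik->i', G, Minv, G)  # g^T Minv g per point
        gains[chosen] = -np.inf            # never re-pick a point
        i = int(np.argmax(gains))
        chosen.append(i)
        M = M + np.outer(G[i], G[i])
    return chosen

G = np.random.default_rng(1).normal(size=(50, 8))  # toy embeddings
print(greedy_fisher_select(G, 5))
```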
An Empirical Study of Derivative-Free-Optimization Algorithms for Targeted Black-Box Attacks in Deep Neural Networks [8.368543987898732]
This paper considers four pre-existing state-of-the-art DFO-based algorithms along with the introduction of a new algorithm built on BOBYQA.
We compare these algorithms in a variety of settings according to the fraction of images that they successfully misclassify.
Experiments disclose how the likelihood of finding an adversarial example depends on both the algorithm used and the setting of the attack.
arXiv Detail & Related papers (2020-12-03T13:32:20Z)
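A naive derivative-free attack loop, just to make the setting concrete: random search over an L-infinity ball that maximizes the target-class probability. This is illustrative only; the paper studies BOBYQA-based and other, more sophisticated DFO methods.

```python
# Query-only targeted attack: perturb within ||x' - x||_inf <= eps and
# keep any candidate that raises the target-class probability.
import numpy as np

def random_search_attack(f, x, target, eps=0.1, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    best, best_val = x.copy(), f(x)[target]
    for _ in range(iters):
        cand = np.clip(best + rng.uniform(-0.02, 0.02, size=x.shape),
                       x - eps, x + eps)
        val = f(cand)[target]
        if val > best_val:
            best, best_val = cand, val
    return best, best_val

# Toy "model": softmax over random linear logits (stand-in for a DNN).
W = np.random.default_rng(1).normal(size=(10, 3))
f = lambda x: np.exp(x @ W) / np.exp(x @ W).sum()
adv, p = random_search_attack(f, np.zeros(10), target=2)
print("target-class probability:", p)
```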
Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra [53.46106569419296]
We create classical (non-quantum) dynamic data structures supporting queries for recommender systems and least-squares regression.
We argue that the previous quantum-inspired algorithms for these problems are doing leverage or ridge-leverage score sampling in disguise.
arXiv Detail & Related papers (2020-11-09T01:13:07Z)
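The sampling primitive in question can be sketched in a few lines: compute row leverage scores from a thin QR factorization, sample rows proportionally, rescale, and solve the reduced least-squares problem. This is a standard RandNLA construction, not the paper's dynamic data structure.

```python
# Leverage-score row sampling for overdetermined least squares.
import numpy as np

def leverage_scores(A):
    Q, _ = np.linalg.qr(A)            # thin QR; l_i = ||Q[i]||^2
    return np.sum(Q ** 2, axis=1)

def sampled_least_squares(A, b, m, seed=0):
    rng = np.random.default_rng(seed)
    p = leverage_scores(A)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=m, replace=True, p=p)
    w = 1.0 / np.sqrt(m * p[idx])     # importance-sampling rescaling
    SA, Sb = A[idx] * w[:, None], b[idx] * w
    return np.linalg.lstsq(SA, Sb, rcond=None)[0]

rng = np.random.default_rng(2)
A = rng.normal(size=(1000, 5))
b = A @ np.ones(5) + 0.01 * rng.normal(size=1000)
print(sampled_least_squares(A, b, 200))   # close to the all-ones solution
```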
A Constraint-Based Algorithm for the Structural Learning of Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z)
Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
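To make the building block concrete, here is a sketch of one truncated max-product sweep on a chain MRF, the operation such a BP-Layer unrolls differentiably. This is simplified: log-space potentials, forward messages only, and no learned parameters.

```python
# One truncated forward sweep of max-product BP on a chain, then a
# per-node decision from the accumulated messages.
import numpy as np

def max_product_chain(unary, pairwise, sweeps=1):
    # unary: (T, K) log-potentials; pairwise: (K, K) transition log-potentials.
    T, K = unary.shape
    msg = np.zeros((T, K))                 # forward messages
    for _ in range(sweeps):
        for t in range(1, T):
            # Max over the previous label j of score(j) + pairwise[j, i].
            msg[t] = np.max(unary[t - 1] + msg[t - 1] + pairwise.T, axis=1)
    return np.argmax(unary + msg, axis=1)  # per-node labels

rng = np.random.default_rng(4)
labels = max_product_chain(rng.normal(size=(6, 3)), np.eye(3) * 2.0)
print(labels)                              # smoothness-favoring labeling
```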