BigBraveBN: algorithm of structural learning for Bayesian Networks with a large number of nodes
- URL: http://arxiv.org/abs/2208.10312v1
- Date: Mon, 22 Aug 2022 13:43:57 GMT
- Title: BigBraveBN: algorithm of structural learning for Bayesian Networks with a large number of nodes
- Authors: Yury Kaminsky, Irina Deeva
- Abstract summary: The article presents the BigBraveBN algorithm for learning large Bayesian Networks with a high number of nodes (over 100).
The algorithm utilizes the Brave coefficient, which measures the mutual occurrence of instances in several groups.
In the experimental part of the article, we compare the performance of BigBraveBN to other existing solutions on multiple data sets, both discrete and continuous.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning a Bayesian network is an NP-hard problem, and as the number of nodes grows, classical algorithms for learning the structure of Bayesian networks become inefficient. In recent years, several methods and algorithms for learning Bayesian networks with a high number of nodes (more than 50) have been developed. However, these solutions have their disadvantages: for instance, they operate on only one type of data (discrete or continuous), or they were designed for data of a specific nature (medical, social, etc.). The article presents the BigBraveBN algorithm for learning large Bayesian Networks with a high number of nodes (over 100). The algorithm utilizes the Brave coefficient, which measures the mutual occurrence of instances in several groups. To form these groups, we use the method of nearest neighbours based on the mutual information (MI) measure. In the experimental part of the article, we compare the performance of BigBraveBN to other existing solutions on multiple data sets, both discrete and continuous. The experimental part also includes tests on real data. The aforementioned experimental results demonstrate the efficiency of the BigBraveBN algorithm in the structure learning of Bayesian Networks.
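The grouping step described above can be made concrete. The sketch below is not the authors' implementation: the function names, the neighbour count k, and the use of scikit-learn's mutual_info_score are all assumptions; only the MI-based nearest-neighbour grouping follows the abstract, and the Brave coefficient itself is omitted because its formula is given in the paper, not here.

```python
# Minimal sketch of MI-based nearest-neighbour grouping (assumed details).
import numpy as np
from sklearn.metrics import mutual_info_score

def mi_matrix(data: np.ndarray) -> np.ndarray:
    """Pairwise mutual information between the columns (nodes) of discrete data."""
    n = data.shape[1]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_info_score(data[:, i], data[:, j])
    return mi

def nearest_neighbour_groups(data: np.ndarray, k: int = 5) -> list:
    """For each node, the k other nodes with the highest MI form its group."""
    mi = mi_matrix(data)
    np.fill_diagonal(mi, -np.inf)  # a node is not its own neighbour
    return [list(np.argsort(mi[i])[::-1][:k]) for i in range(mi.shape[0])]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(500, 10))  # toy discrete data, 10 nodes
    print(nearest_neighbour_groups(X, k=3))
```

In a pipeline like BigBraveBN's, such groups would presumably restrict the candidate edges considered during structure search; the abstract itself only states that the Brave coefficient is computed over them.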
Related papers
- Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z) - The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP) for optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require the generation of additional negative samples.
In our framework, each block can be trained independently, so the method can be easily deployed on parallel acceleration systems.
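Taken at face value, the summary implies a training loop like the sketch below. This is a hedged reconstruction, not the CaFo authors' code: the block architecture, the optimizers, and the use of detach() to isolate blocks are assumptions; what follows the summary is that each block has its own label-distribution head and is trained on a local loss, with no backpropagation through the cascade.

```python
# Sketch: cascaded blocks, each with a local classifier head and local loss.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim_in: int, dim_out: int, n_classes: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim_in, dim_out), nn.ReLU())
        self.head = nn.Linear(dim_out, n_classes)  # per-block label distribution

    def forward(self, x):
        h = self.body(x)
        return h, self.head(h)

def train_step(blocks, optimizers, x, y, loss_fn=nn.CrossEntropyLoss()):
    for block, opt in zip(blocks, optimizers):
        x = x.detach()                 # cut the graph: blocks train independently
        h, logits = block(x)
        opt.zero_grad()
        loss_fn(logits, y).backward()  # local loss only; no end-to-end BP
        opt.step()
        x = h                          # features feed the next block
```

Because each block's update depends only on its own detached input, the blocks could in principle be trained on separate devices, which matches the summary's parallel-acceleration claim.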
arXiv Detail & Related papers (2023-03-17T02:01:11Z) - Scalable Batch Acquisition for Deep Bayesian Active Learning [70.68403899432198]
In deep active learning, it is important to choose multiple examples to mark up at each step.
Existing solutions to this problem, such as BatchBALD, have significant limitations in selecting a large number of examples.
We present the Large BatchBALD algorithm, which aims to achieve comparable quality while being more computationally efficient.
arXiv Detail & Related papers (2023-01-13T11:45:17Z) - A Comprehensively Improved Hybrid Algorithm for Learning Bayesian
Networks: Multiple Compound Memory Erasing [0.0]
This paper presents a new hybrid algorithm, MCME (multiple compound memory erasing).
MCME retains the advantages of constraint-based and score-based methods, addresses the shortcomings of existing CI (conditional independence) tests, and introduces innovations in the scoring function used in the direction-discrimination stage.
A large number of experiments show that MCME performs better than or comparably to several existing algorithms.
arXiv Detail & Related papers (2022-12-05T12:52:07Z) - Reconstructing Bayesian Networks on a Quantum Annealer [0.0]
O'Gorman et al. have proposed an algorithm to encode this task, but they have not provided an experimental evaluation of it.
We present (i) an implementation in Python of O'Gorman's algorithm, and (ii) a divide et impera approach that allows addressing BNSL problems of larger sizes.
Results have shown the effectiveness of O'Gorman's formulation for BNSL instances of small sizes, and the superiority of the divide et impera approach over direct execution of O'Gorman's algorithm.
arXiv Detail & Related papers (2022-04-07T15:53:05Z) - Hybrid Bayesian network discovery with latent variables by scoring
multiple interventions [5.994412766684843]
We present the hybrid mFGS-BS (majority rule and Fast Greedy equivalence Search with Bayesian Scoring) algorithm for structure learning from discrete data.
The algorithm assumes causal insufficiency in the presence of latent variables and produces a Partial Ancestral Graph (PAG).
Experimental results show that mFGS-BS improves structure learning accuracy relative to the state-of-the-art and it is computationally efficient.
arXiv Detail & Related papers (2021-12-20T14:54:41Z) - Learning to Detect Critical Nodes in Sparse Graphs via Feature Importance Awareness [53.351863569314794]
The critical node problem (CNP) aims to find a set of critical nodes in a network whose deletion maximally degrades the pairwise connectivity of the residual network.
This work proposes a feature importance-aware graph attention network for node representation and combines it with a dueling double deep Q-network to create an end-to-end algorithm that solves the CNP for the first time.
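For intuition, the objective stated above can be written down directly. The brute-force sketch below is illustrative only: the cited paper replaces the exhaustive search with its learned attention/DQN policy, and the two-node budget here is an arbitrary choice for the demo.

```python
# Pairwise connectivity: number of node pairs still joined by a path.
import itertools
import networkx as nx

def pairwise_connectivity(g: nx.Graph) -> int:
    return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(g))

def residual_connectivity(g: nx.Graph, critical: set) -> int:
    """Connectivity left after deleting a candidate critical-node set."""
    h = g.copy()
    h.remove_nodes_from(critical)
    return pairwise_connectivity(h)

g = nx.karate_club_graph()
best = min(itertools.combinations(g.nodes, 2),
           key=lambda s: residual_connectivity(g, set(s)))
print(best, residual_connectivity(g, set(best)))  # feasible only on tiny graphs
```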
arXiv Detail & Related papers (2021-12-03T14:23:05Z) - A Sparse Structure Learning Algorithm for Bayesian Network
Identification from Discrete High-Dimensional Data [0.40611352512781856]
This paper addresses the problem of learning a sparse-structure Bayesian network from high-dimensional discrete data.
We propose a score function that satisfies the sparsity and the DAG property simultaneously.
Specifically, we use a variance-reduction method in our optimization algorithm so that it works efficiently on high-dimensional data.
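The requirement that one score enforce sparsity and acyclicity simultaneously has a well-known continuous encoding, shown below for concreteness. To be clear, this is the NOTEARS-style trace-exponential penalty with an L1 term, not necessarily this paper's score function, which the summary does not spell out; lam and rho are illustrative weights.

```python
# A score combining least-squares fit, L1 sparsity, and a smooth DAG penalty.
import numpy as np
from scipy.linalg import expm

def dag_penalty(W: np.ndarray) -> float:
    """tr(exp(W * W)) - d, which is zero exactly when W encodes a DAG."""
    return np.trace(expm(W * W)) - W.shape[0]

def score(W: np.ndarray, X: np.ndarray, lam: float = 0.1, rho: float = 10.0) -> float:
    residual = X - X @ W  # linear-SEM reconstruction error
    return (0.5 / X.shape[0]) * (residual ** 2).sum() \
        + lam * np.abs(W).sum() + rho * dag_penalty(W)
```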
arXiv Detail & Related papers (2021-08-21T12:21:01Z) - Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z) - Hard and Soft EM in Bayesian Network Learning from Incomplete Data [1.5484595752241122]
We investigate the impact of using imputation instead of belief propagation on the quality of the resulting BNs.
We find that it is possible to recommend one approach over the other in several scenarios based on the characteristics of the data.
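The distinction the summary draws can be seen on a toy case. The sketch below is a hedged illustration, not the paper's experimental setup: one Bernoulli parameter, some missing observations, one EM step; hard EM (imputation) commits each missing value to its most likely completion, while soft EM (belief propagation / expected counts) weights both completions.

```python
# One EM step for P(X = 1) with 4 observed and 4 missing values.
import numpy as np

obs = np.array([1, 1, 0, 1])   # observed values
n_missing = 4
theta = 0.5                    # current parameter estimate

# Hard EM: impute the single most probable completion, then re-estimate.
hard_fill = 1 if theta >= 0.5 else 0
theta_hard = (obs.sum() + n_missing * hard_fill) / (len(obs) + n_missing)

# Soft EM: each missing entry contributes its expected value theta.
theta_soft = (obs.sum() + n_missing * theta) / (len(obs) + n_missing)

print(theta_hard, theta_soft)  # 0.875 vs. 0.625 after one step
```

The gap between the two updates after a single step is the kind of effect the paper measures at the scale of full Bayesian networks.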
arXiv Detail & Related papers (2020-12-09T19:13:32Z) - A Constraint-Based Algorithm for the Structural Learning of
Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z)