Divide-and-Conquer Strategy for Large-Scale Dynamic Bayesian Network Structure Learning
- URL: http://arxiv.org/abs/2312.01739v1
- Date: Mon, 4 Dec 2023 09:03:06 GMT
- Title: Divide-and-Conquer Strategy for Large-Scale Dynamic Bayesian Network Structure Learning
- Authors: Hui Ouyang, Cheng Chen, Ke Tang
- Abstract summary: Dynamic Bayesian Networks (DBNs) are renowned for their interpretability.
Structure learning of DBNs from data is challenging, particularly for datasets with thousands of variables.
This paper adapts a divide-and-conquer strategy, originally developed for static BNs, to large-scale DBN structure learning.
- Score: 13.231953456197946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Bayesian Networks (DBNs), renowned for their interpretability, have
become increasingly vital in representing complex stochastic processes in
various domains such as gene expression analysis, healthcare, and traffic
prediction. Structure learning of DBNs from data is challenging, particularly
for datasets with thousands of variables. Most current algorithms for DBN
structure learning are adaptations from those used in static Bayesian Networks
(BNs), and are typically focused on small-scale problems. To solve
large-scale problems while taking full advantage of existing algorithms, this
paper adapts a divide-and-conquer strategy, originally developed for static
BNs, to large-scale DBN structure learning. In this work,
we specifically concentrate on 2 Time-sliced Bayesian Networks (2-TBNs), a
special class of DBNs. Furthermore, we leverage the prior knowledge of 2-TBNs
to enhance the performance of the strategy we introduce. Our approach
significantly improves the scalability and accuracy of 2-TBN structure
learning. Experimental results demonstrate the effectiveness of our method,
showing substantial improvements over existing algorithms in both computational
efficiency and structure learning accuracy. On problem instances with more than
1,000 variables, our approach improves two accuracy metrics by 74.45% and
110.94% on average, respectively, while reducing runtime by 93.65% on average.
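
The abstract describes the strategy only at a high level. As a rough illustration, a divide-and-conquer 2-TBN learner partitions the variables, runs an off-the-shelf static learner on each part, and merges the results. The sketch below is a hypothetical outline under that reading; the partitioning, the `local_learner` callback, and the merge handling are assumptions, not the paper's implementation:

```python
import numpy as np

def learn_2tbn_divide_and_conquer(data_t, data_t1, partition, local_learner):
    """Hypothetical divide-and-conquer outline for 2-TBN structure learning.

    data_t, data_t1 : (n_samples, n_vars) arrays holding slices t and t+1.
    partition       : list of variable-index clusters from a divide step
                      (e.g. clustering a rough correlation graph).
    local_learner   : any off-the-shelf static BN learner; it receives a
                      data matrix and returns a boolean adjacency matrix.
    """
    n_vars = data_t.shape[1]
    intra = np.zeros((n_vars, n_vars), dtype=bool)  # edges inside slice t+1
    inter = np.zeros((n_vars, n_vars), dtype=bool)  # edges from t to t+1

    for cluster in partition:
        k = len(cluster)
        # Conquer: one static call on the stacked slices recovers both edge
        # types; 2-TBN prior knowledge forbids edges pointing back in time,
        # so columns 0..k-1 (slice t) are only ever read out as parents.
        local = local_learner(np.hstack([data_t[:, cluster],
                                         data_t1[:, cluster]]))
        for i in range(k):
            for j in range(k):
                if local[i, j + k]:          # slice t -> slice t+1
                    inter[cluster[i], cluster[j]] = True
                if local[i + k, j + k]:      # within slice t+1
                    intra[cluster[i], cluster[j]] = True
    # Merge: the paper's strategy also reconciles edges that cross cluster
    # boundaries; that refinement is omitted from this sketch.
    return intra, inter
```

Note how the 2-TBN prior knowledge enters through the stacked-slice encoding: edges pointing from slice t+1 back to slice t are simply never read out.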
Related papers
- Applying Incremental Learning in Binary-Addition-Tree Algorithm for Dynamic Binary-State Network Reliability [0.08158530638728499]
The Binary-Addition-Tree algorithm (BAT) is a powerful implicit enumeration method for solving network reliability and optimization problems.
By introducing incremental learning, we enable the BAT to adapt and improve its performance iteratively as it encounters new data or network changes.
arXiv Detail & Related papers (2024-09-24T04:13:03Z)
- Learning the Finer Things: Bayesian Structure Learning at the Instantiation Level [0.0]
Successful machine learning methods require a trade-off between memorization and generalization.
We present a novel probabilistic graphical model structure learning approach that can learn, generalize and explain in elusive domains.
arXiv Detail & Related papers (2023-03-08T02:31:49Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms in the non-parametric streaming setting.
We introduce two regret metrics, defined through the population loss, that are better suited to active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Binary Neural Networks (BNNs) neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from this bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
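For context on the bilinear coupling being referred to: in a typical BNN, each real-valued weight tensor w is approximated as alpha * sign(w), and the reconstruction error is bilinear in the scale alpha and the binary code. A minimal sketch of that standard coupling follows; it shows the classic closed-form scale, not RBONN's recurrent optimization:

```python
import numpy as np

def binarize(w: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Per-output-channel binarization w ~ alpha * sign(w).

    The reconstruction loss ||w - alpha * b||^2 is bilinear in (alpha, b):
    fixing either factor makes the other's optimum closed-form. For
    b = sign(w), the best scale is the mean absolute weight per channel.
    """
    b = np.sign(w)
    axes = tuple(range(1, w.ndim))                 # all but the channel axis
    alpha = np.abs(w).mean(axis=axes, keepdims=True)
    return alpha, b

w = np.random.randn(16, 3, 3, 3)   # e.g. one conv layer's weights
alpha, b = binarize(w)
w_hat = alpha * b                  # binary approximation used at inference
```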
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- BigBraveBN: algorithm of structural learning for bayesian networks with a large number of nodes [0.0]
The article presents the BigBraveBN algorithm for learning large Bayesian Networks with a high number of nodes (over 100).
The algorithm utilizes the Brave coefficient, which measures the mutual occurrence of instances in several groups.
In the experimental part of the article, we compare the performance of BigBraveBN to other existing solutions on multiple data sets, both discrete and continuous.
arXiv Detail & Related papers (2022-08-22T13:43:57Z)
- ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm that adopts an Evolution Strategies (ES)-based method to approximate the response Jacobian matrix in the hypergradient of BO.
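The appeal of an ES-based estimate is that it needs only forward evaluations of the lower-level response, with no second-order derivatives. Below is a generic antithetic ES sketch of a Jacobian-vector product under isotropic Gaussian perturbations; it illustrates the idea, not the paper's exact estimator:

```python
import numpy as np

def es_jvp(f, theta, v, sigma=1e-2, n_pairs=32, seed=0):
    """Antithetic Evolution-Strategies estimate of J(theta) @ v, where J is
    the Jacobian of f. Only forward evaluations of f are needed, which is
    the appeal over explicit Jacobian-/Hessian-vector computations."""
    rng = np.random.default_rng(seed)
    est = np.zeros_like(f(theta))
    for _ in range(n_pairs):
        u = rng.standard_normal(theta.shape)
        # The finite difference along u approximates J @ u; weighting it by
        # the projection u . v makes the average converge to J @ v, because
        # E[u u^T] = I for isotropic Gaussian perturbations.
        d = (f(theta + sigma * u) - f(theta - sigma * u)) / (2.0 * sigma)
        est += d * (u @ v)
    return est / n_pairs
```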
arXiv Detail & Related papers (2021-10-13T19:36:50Z)
- A Sparse Structure Learning Algorithm for Bayesian Network Identification from Discrete High-Dimensional Data [0.40611352512781856]
This paper addresses the problem of learning a sparse-structure Bayesian network from high-dimensional discrete data.
We propose a score function that satisfies the sparsity and the DAG property simultaneously.
Specifically, we use a variance-reduction method in our optimization algorithm so that it works efficiently on high-dimensional data.
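The summary does not spell out the score function. For intuition, the widely used continuous formulation of "sparsity plus the DAG property in one objective" (the NOTEARS-style penalty) looks as follows; the paper's actual score targets discrete data and is optimized with variance reduction, which this sketch omits:

```python
import numpy as np
from scipy.linalg import expm

def acyclicity(W: np.ndarray) -> float:
    """NOTEARS-style DAG measure: h(W) = tr(exp(W * W)) - d, which is zero
    exactly when the weighted graph W contains no directed cycle."""
    return np.trace(expm(W * W)) - W.shape[0]

def score(W: np.ndarray, X: np.ndarray, lam: float = 0.1, rho: float = 10.0) -> float:
    """Least-squares fit + L1 sparsity + DAG penalty in a single objective."""
    n = X.shape[0]
    fit = np.linalg.norm(X - X @ W) ** 2 / (2 * n)
    return fit + lam * np.abs(W).sum() + rho * acyclicity(W) ** 2
```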
arXiv Detail & Related papers (2021-08-21T12:21:01Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
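As a reference point for what such a method modifies, one synchronous round of decentralized SGD with gossip averaging and a per-node momentum buffer can be sketched as below; the paper's quasi-global momentum changes how the buffer is formed so that it tracks a global direction, a rule not reproduced here:

```python
import numpy as np

def decentralized_momentum_step(params, grads, mix, momenta, lr=0.1, beta=0.9):
    """One synchronous round of decentralized SGD with local momentum.

    params  : list of per-node parameter vectors
    grads   : list of per-node stochastic gradients (heterogeneous data)
    mix     : doubly stochastic gossip matrix over the communication graph
    momenta : list of per-node momentum buffers, updated in place
    """
    n = len(params)
    # Gossip averaging: each node mixes parameters with its neighbors.
    mixed = [sum(mix[i][j] * params[j] for j in range(n)) for i in range(n)]
    for i in range(n):
        momenta[i] = beta * momenta[i] + grads[i]   # plain local momentum
        params[i] = mixed[i] - lr * momenta[i]      # gossip, then local step
    return params
```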
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
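A rough sketch of the "Neural Diving" half of the idea: a learned model proposes values for the integer variables it is most confident about, those variables are fixed, and the base solver finishes the smaller residual MIP. All interfaces below (`mip`, `predictor`, `fix`, `solve`) are hypothetical stand-ins, not SCIP's actual API:

```python
def neural_diving(mip, predictor, fix_fraction=0.5):
    """Fix the integer variables the learned model is most confident about,
    then let the base solver finish the smaller residual problem."""
    probs = predictor(mip)  # dict: binary variable -> P(value = 1)
    ranked = sorted(probs, key=lambda var: abs(probs[var] - 0.5), reverse=True)
    for var in ranked[: int(fix_fraction * len(ranked))]:
        mip.fix(var, round(probs[var]))   # most confident predictions first
    return mip.solve()  # high-quality assignment from the reduced MIP
```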
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Efficient and Scalable Structure Learning for Bayesian Networks: Algorithms and Applications [33.8980356362084]
We propose a new structure learning algorithm, LEAST, which comprehensively fulfills our business requirements.
LEAST is deployed and serves more than 20 applications with thousands of executions per day.
We show that LEAST unlocks the possibility of applying BN structure learning in new areas, such as large-scale gene expression data analysis and explainable recommendation systems.
arXiv Detail & Related papers (2020-12-07T09:11:08Z)
- A Constraint-Based Algorithm for the Structural Learning of Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
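Constraint-based structure learning generically works by deleting an edge as soon as some conditioning set renders its endpoints conditionally independent. The sketch below shows that generic loop with the CTBN-specific statistical tests abstracted behind a `ci_test` callback, which is where the paper's contribution lives:

```python
from itertools import combinations

def constraint_based_parents(node, candidates, ci_test, max_cond=2):
    """Generic constraint-based parent selection: drop a candidate parent as
    soon as some conditioning set makes it independent of `node`.
    `ci_test(x, y, cond) -> bool` returns True when x is independent of y
    given cond; the CTBN-specific tests and their underlying hypotheses are
    abstracted away here."""
    parents = set(candidates)
    for x in list(parents):
        others = parents - {x}
        # Try conditioning sets of growing size, starting from the empty set
        # (which tests plain marginal independence).
        for size in range(max_cond + 1):
            if any(ci_test(x, node, set(cond))
                   for cond in combinations(others, size)):
                parents.discard(x)
                break
    return parents
```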
arXiv Detail & Related papers (2020-07-07T07:34:09Z)