Fast & Efficient Learning of Bayesian Networks from Data: Knowledge
Discovery and Causality
- URL: http://arxiv.org/abs/2310.09222v1
- Date: Fri, 13 Oct 2023 16:20:20 GMT
- Title: Fast & Efficient Learning of Bayesian Networks from Data: Knowledge
Discovery and Causality
- Authors: Minn Sein, Fu Shunkai
- Abstract summary: Two novel algorithms, FSBN and SSBN, employ a local search strategy and conditional independence tests to learn the causal network structure from data.
FSBN achieves up to 52% cost reduction, while SSBN surpasses it with a remarkable 72% reduction for a 200-node network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Structure learning is essential for Bayesian networks (BNs) as it
uncovers causal relationships and enables knowledge discovery, prediction,
inference, and decision-making under uncertainty. Two novel algorithms, FSBN
and SSBN, based on the PC algorithm, employ a local search strategy and
conditional independence tests to learn the causal network structure from
data. They incorporate d-separation to infer additional topology information,
prioritize conditioning sets, and terminate the search as early as possible.
FSBN achieves up to a 52% reduction in computation cost, while SSBN surpasses
it with a remarkable 72% reduction on a 200-node network, the further gain
coming from its more intelligent search strategy. Experimental studies show
that both algorithms match the induction quality of the PC algorithm while
significantly reducing computation cost, which makes them interpretable,
adaptable, and valuable for a wide range of applications in big data
analytics.
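The abstract gives no pseudocode, but its core loop is the PC-style skeleton phase. Below is a minimal sketch of that phase under stated assumptions: ci_test is any conditional-independence test supplied by the caller, and restricting conditioning sets to a node's current neighbours stands in for the local search strategy; the d-separation inference and conditioning-set prioritization that distinguish FSBN and SSBN are not reproduced here.

```python
# Minimal sketch of a PC-style skeleton phase (an assumed reading of the
# abstract, not the authors' code). `ci_test(x, y, z)` is any conditional-
# independence test returning True when x is independent of y given z.
from itertools import combinations

def learn_skeleton(variables, ci_test, max_cond_size=3):
    # Start from a fully connected undirected graph.
    adj = {v: set(variables) - {v} for v in variables}
    sepsets = {}
    for k in range(max_cond_size + 1):  # grow conditioning-set size
        for x in variables:
            for y in list(adj[x]):
                # Conditioning sets are drawn from x's remaining neighbours,
                # a stand-in for the abstract's local search strategy.
                candidates = adj[x] - {y}
                if len(candidates) < k:
                    continue
                for z in combinations(sorted(candidates), k):
                    if ci_test(x, y, set(z)):
                        # Independence found: remove the edge and stop testing
                        # this pair immediately (the early-termination idea).
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepsets[frozenset((x, y))] = set(z)
                        break
    return adj, sepsets
```

A concrete ci_test could wrap, for example, a chi-squared test on a contingency table for discrete data; the separating sets recorded in sepsets are what PC-family algorithms later use to orient edges.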
Related papers
- FOBNN: Fast Oblivious Binarized Neural Network Inference [12.587981899648419]
We develop a fast oblivious binarized neural network inference framework, FOBNN.
Specifically, we customize binarized convolutional neural networks to enhance oblivious inference, design two fast algorithms for binarized convolutions, and optimize network structures experimentally under constrained costs.
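The two fast convolution algorithms are not detailed in this summary; what follows is the generic XNOR-popcount identity that fast binarized convolutions commonly build on, shown for a single dot product. It should be read as background, not as FOBNN's actual method.

```python
# Generic XNOR/popcount identity used to speed up binarized dot products
# (a common background technique, not necessarily FOBNN's exact algorithm).
def binarized_dot(a_bits: int, w_bits: int, n: int) -> int:
    # With each bit encoding +1 (bit 1) or -1 (bit 0), an n-element dot
    # product satisfies: dot(a, w) = n - 2 * popcount(a XOR w).
    return n - 2 * bin((a_bits ^ w_bits) & ((1 << n) - 1)).count("1")

# +1,-1,+1,+1 against +1,-1,-1,+1 gives 1 + 1 - 1 + 1 = 2:
assert binarized_dot(0b1011, 0b1001, 4) == 2
```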
arXiv Detail & Related papers (2024-05-06T03:12:36Z)
- Divide-and-Conquer Strategy for Large-Scale Dynamic Bayesian Network Structure Learning [13.231953456197946]
Dynamic Bayesian Networks (DBNs) are renowned for their interpretability.
Structure learning of DBNs from data is challenging, particularly for datasets with thousands of variables.
This paper introduces a novel divide-and-conquer strategy, originally developed for static BNs, and adapts it for large-scale DBN structure learning.
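The summary only names the strategy, so the sketch below captures just its generic shape; the partitioning and merging rules are placeholders, not the paper's procedure.

```python
# Generic divide-and-conquer shape for large-scale structure learning:
# partition the variables, learn each sub-network, then merge. All three
# callables are placeholders for the paper's actual components.
def divide_and_conquer(variables, partition, learn_subgraph, merge):
    clusters = partition(variables)    # e.g. blocks of related variables
    subgraphs = [learn_subgraph(c) for c in clusters]
    return merge(subgraphs)            # reconcile cross-cluster edges
```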
arXiv Detail & Related papers (2023-12-04T09:03:06Z)
- IR2Net: Information Restriction and Information Recovery for Accurate Binary Neural Networks [24.42067007684169]
Weight and activation binarization can efficiently compress deep neural networks and accelerate model inference, but cause severe accuracy degradation.
We propose IR2Net to stimulate the potential of BNNs and improve network accuracy by restricting the input information and recovering the feature information.
Experimental results demonstrate that our approach achieves comparable accuracy even with a ~10x reduction in floating-point operations (FLOPs) for ResNet-18.
arXiv Detail & Related papers (2022-10-06T02:03:26Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms in the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Hard and Soft EM in Bayesian Network Learning from Incomplete Data [1.5484595752241122]
We investigate the impact of using imputation instead of belief propagation on the quality of the resulting BNs.
We find that it is possible to recommend one approach over the other in several scenarios based on the characteristics of the data.
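For reference, the difference the paper studies comes down to how the E-step handles a missing value: belief propagation (soft EM) spreads fractional counts over all completions, while imputation (hard EM) commits to a single one. A minimal sketch, with illustrative names:

```python
# Sketch of the two E-step variants on one missing discrete value (names are
# illustrative, not the paper's). `posterior` maps each candidate value to
# P(value | observed data, current parameters).
def e_step_counts(posterior, counts, hard=False):
    if hard:
        best = max(posterior, key=posterior.get)  # hard EM: single imputation
        counts[best] = counts.get(best, 0.0) + 1.0
    else:
        for v, p in posterior.items():            # soft EM: expected counts
            counts[v] = counts.get(v, 0.0) + p
    return counts
```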
arXiv Detail & Related papers (2020-12-09T19:13:32Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing the Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
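The summary does not specify the optimizer's internals; the sketch below shows a generic Bayesian-optimization loop of the kind such frameworks use to tune a detector hyperparameter, assuming a scalar search range and a hypothetical validation_error objective.

```python
# Generic Bayesian-optimization loop (illustrative, not the paper's code):
# a Gaussian-process surrogate plus expected improvement, minimizing a
# placeholder objective over one scalar hyperparameter.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(gp, X_cand, y_best):
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma          # minimization form of EI
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(objective, low, high, n_init=3, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(low, high, size=(n_init, 1))
    y = np.array([objective(x[0]) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.uniform(low, high, size=(256, 1))
        x_next = cand[np.argmax(expected_improvement(gp, cand, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))
    return X[np.argmin(y)][0], y.min()

# e.g. tune a (hypothetical) detector threshold to minimize validation error:
# best_t, best_err = bayes_opt(lambda t: validation_error(t), 0.0, 1.0)
```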
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- A Machine Learning Approach for Task and Resource Allocation in Mobile Edge Computing Based Networks [108.57859531628264]
A joint task, spectrum, and transmit power allocation problem is investigated for a wireless network.
The proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1% compared to the standard Q-learning algorithm.
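As a point of reference for the baseline it improves on, this is the standard tabular Q-learning update (generic placeholders, not the paper's network model):

```python
# Standard tabular Q-learning update (the baseline referenced above).
# `Q` maps (state, action) pairs to values; missing entries default to 0.
def q_update(Q, actions, s, a, r, s_next, alpha=0.1, gamma=0.9):
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
    return Q
```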
arXiv Detail & Related papers (2020-07-20T13:46:42Z)
- A Constraint-Based Algorithm for the Structural Learning of Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z)
- Rethinking Performance Estimation in Neural Architecture Search [191.08960589460173]
We provide a novel yet systematic rethinking of performance estimation (PE) in a resource constrained regime.
By combining BPE with various search algorithms, including reinforcement learning, evolutionary algorithms, random search, and differentiable architecture search, we achieve a 1,000x NAS speed-up with a negligible performance drop.
arXiv Detail & Related papers (2020-05-20T09:01:44Z)
- Efficient Computation Reduction in Bayesian Neural Networks Through Feature Decomposition and Memorization [10.182119276564643]
In this paper, an efficient BNN inference flow is proposed to reduce the computation cost.
About half of the computations could be eliminated compared to the traditional approach.
We implement our approach in Verilog and synthesise it with 45 nm FreePDK technology.
arXiv Detail & Related papers (2020-05-08T05:03:04Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)