Fast Parallel Bayesian Network Structure Learning
- URL: http://arxiv.org/abs/2212.04259v1
- Date: Thu, 8 Dec 2022 13:17:02 GMT
- Title: Fast Parallel Bayesian Network Structure Learning
- Authors: Jiantong Jiang, Zeyi Wen, Ajmal Mian
- Abstract summary: We propose a fast solution named Fast-BNS on multi-core CPUs to enhance the efficiency of BN structure learning.
Fast-BNS is powered by a series of efficiency optimizations including grouping the CI tests of the edges with the same endpoints to reduce the number of unnecessary CI tests.
A comprehensive experimental study shows that the sequential version of Fast-BNS is up to 50 times faster than its counterpart.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian networks (BNs) are a widely used graphical model in machine learning
for representing knowledge with uncertainty. The mainstream BN structure
learning methods require performing a large number of conditional independence
(CI) tests. The learning process is very time-consuming, especially for
high-dimensional problems, which hinders the adoption of BNs in more
applications. Existing works attempt to accelerate the learning process with
parallelism, but face issues including load imbalance, costly atomic
operations, and dominant parallel overhead. In this paper, we propose a fast
solution named Fast-BNS on multi-core CPUs to enhance the efficiency of BN
structure learning. Fast-BNS is powered by a series of efficiency optimizations
including (i) designing a dynamic work pool to monitor the processing of edges
and to better schedule the workloads among threads, (ii) grouping the CI tests
of the edges with the same endpoints to reduce the number of unnecessary CI
tests, (iii) using a cache-friendly data storage to improve the memory
efficiency, and (iv) generating the conditioning sets on-the-fly to avoid extra
memory consumption. A comprehensive experimental study shows that the
sequential version of Fast-BNS is up to 50 times faster than its counterpart,
and the parallel version of Fast-BNS achieves 4.8 to 24.5 times speedup over
the state-of-the-art multi-threaded solution. Moreover, Fast-BNS scales well
with both the network size and the sample size. The Fast-BNS source code is
freely available at https://github.com/jjiantong/FastBN.
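
Two of these optimizations, the dynamic work pool (i) and on-the-fly conditioning-set generation (iv), are easy to picture in code. Below is a minimal C++ sketch of the general ideas, not the authors' implementation: the `Edge` layout, the `ci_test` stub, and all names are illustrative assumptions. Threads pull edge indices from a shared atomic counter, so a thread that finishes a cheap edge immediately claims the next one instead of idling behind a static partition, and size-k conditioning sets are enumerated with an index vector rather than materialized up front.

```cpp
// Minimal sketch (not the authors' code) of two ideas from the abstract:
// (i)  a dynamic work pool: threads pull edge indices from an atomic
//      counter, so fast edges don't leave a thread idle, and
// (iv) conditioning sets generated on the fly via a "next combination"
//      step over an index vector, instead of storing all subsets.
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

// Hypothetical edge record: endpoints plus candidate conditioning
// variables (e.g., the adjacency of x minus y in PC-style learning).
struct Edge { int x, y; std::vector<int> candidates; };

// Stub standing in for a real CI test (e.g., chi-squared or G^2 on data).
bool ci_test(int /*x*/, int /*y*/, const std::vector<int>& /*cond*/) {
    return false;  // pretend "dependent", so the edge survives
}

// Advance idx to the next size-k combination of {0..n-1}; false when done.
bool next_combination(std::vector<std::size_t>& idx, std::size_t n) {
    const std::size_t k = idx.size();
    for (std::size_t i = k; i-- > 0;) {
        if (idx[i] + (k - i) < n) {  // position i can still move right
            ++idx[i];
            for (std::size_t j = i + 1; j < k; ++j) idx[j] = idx[j - 1] + 1;
            return true;
        }
    }
    return false;
}

// Run every size-k CI test for one edge, building each set on the fly.
bool edge_survives(const Edge& e, std::size_t k) {
    if (e.candidates.size() < k) return true;  // no set of this size
    std::vector<std::size_t> idx(k);
    for (std::size_t i = 0; i < k; ++i) idx[i] = i;  // first combination
    do {
        std::vector<int> cond;
        for (std::size_t i : idx) cond.push_back(e.candidates[i]);
        if (ci_test(e.x, e.y, cond)) return false;  // independent: remove
    } while (next_combination(idx, e.candidates.size()));
    return true;
}

int main() {
    std::vector<Edge> edges = {
        {0, 1, {2, 3, 4}}, {1, 2, {0, 3}}, {2, 3, {0, 1, 4}}, {0, 4, {1}},
    };
    const std::size_t k = 1;  // conditioning-set size in this round
    std::vector<char> survives(edges.size(), 1);

    // Dynamic work pool: each thread repeatedly claims the next edge.
    std::atomic<std::size_t> next{0};
    auto worker = [&] {
        for (std::size_t i; (i = next.fetch_add(1)) < edges.size();)
            survives[i] = edge_survives(edges[i], k);
    };
    const unsigned n_threads =
        std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n_threads; ++t) pool.emplace_back(worker);
    for (auto& th : pool) th.join();

    for (std::size_t i = 0; i < edges.size(); ++i)
        if (survives[i])
            std::cout << "edge " << edges[i].x << "-" << edges[i].y
                      << " kept\n";
}
```

Compiled with `g++ -std=c++17 -pthread`, the sketch prints the edges whose CI tests all report dependence; in a real learner the surviving edges would feed the next round with k+1, and the stub would be replaced by an actual statistical test.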
Related papers
- Benchmarking Edge AI Platforms for High-Performance ML Inference
Edge computing's growing prominence, due to its ability to reduce communication latency and enable real-time processing, is promoting the rise of high-performance, heterogeneous System-on-Chip solutions.
While current approaches often involve scaling down modern hardware, the performance characteristics of neural network workloads can vary significantly.
We compare the latency and throughput of various linear algebra and neural network inference tasks across CPU-only, CPU/GPU, and CPU/NPU integrated solutions.
arXiv Detail & Related papers (2024-09-23T08:27:27Z)
- Tensor Slicing and Optimization for Multicore NPUs
This paper proposes a compiler optimization pass for Multicore NPUs, called Tensor Slicing Optimization (TSO).
TSO identifies the best tensor slicing that minimizes execution time for a set of CNN models.
arXiv Detail & Related papers (2023-04-06T12:03:03Z)
- Compacting Binary Neural Networks by Sparse Kernel Selection
This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
arXiv Detail & Related papers (2023-03-25T13:53:02Z)
- Biologically Plausible Learning on Neuromorphic Hardware Architectures
Neuromorphic computing is an emerging paradigm that confronts this imbalance by performing computations directly in analog memories.
This work is the first to compare the impact of different learning algorithms on Compute-In-Memory-based hardware and vice versa.
arXiv Detail & Related papers (2022-12-29T15:10:59Z)
- Accelerating Barnes-Hut t-SNE Algorithm by Efficient Parallelization on Multi-Core CPUs
t-SNE remains one of the most popular embedding techniques for visualizing high-dimensional data.
The BH t-SNE algorithm is inefficient in existing CPU implementations.
Acc-t-SNE is up to 261x faster than scikit-learn and up to 4x faster than the state-of-the-art BH t-SNE implementation from daal4py.
arXiv Detail & Related papers (2022-12-22T06:38:40Z)
- Fast Parallel Exact Inference on Bayesian Networks: Poster
We propose a fast BN exact inference solution named Fast-BNI on multi-core CPUs.
Fast-BNI enhances the efficiency of exact inference through hybrid parallelism.
We also propose techniques to further simplify the bottleneck operations of BN exact inference.
arXiv Detail & Related papers (2022-12-08T12:50:02Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Towards Memory-Efficient Neural Networks via Multi-Level in situ Generation
Deep neural networks (DNNs) have shown superior performance in a variety of tasks.
As they rapidly evolve, their escalating computation and memory demands make it challenging to deploy them on resource-constrained edge devices.
We propose a general and unified framework to trade expensive memory transactions with ultra-fast on-chip computations.
arXiv Detail & Related papers (2021-08-25T18:50:24Z)
- HANT: Hardware-Aware Network Transformation
We propose hardware-aware network transformation (HANT), which replaces inefficient operations with more efficient alternatives using a neural-architecture-search-like approach.
Our results on accelerating the EfficientNet family show that HANT can accelerate them by up to 3.6x with a 0.4% drop in top-1 accuracy on the ImageNet dataset.
arXiv Detail & Related papers (2021-07-12T18:46:34Z)
- Scaling Distributed Deep Learning Workloads beyond the Memory Capacity with KARMA
We propose a strategy that combines redundant recomputing and out-of-core methods.
We achieve an average of 1.52x speedup in six different models over the state-of-the-art out-of-core methods.
Our data-parallel out-of-core solution can outperform complex hybrid model parallelism in training large models, e.g. Megatron-LM and Turing-NLG.
arXiv Detail & Related papers (2020-08-26T07:24:34Z)