Enhancing Bayesian Network Structural Learning with Monte Carlo Tree Search
- URL: http://arxiv.org/abs/2502.01527v1
- Date: Mon, 03 Feb 2025 17:08:21 GMT
- Title: Enhancing Bayesian Network Structural Learning with Monte Carlo Tree Search
- Authors: Jorge D. Laborda, Pablo Torrijos, José M. Puerta, José A. Gámez
- Abstract summary: This article presents an adaptation of the Monte Carlo Tree Search (MCTS) algorithm for the structural learning of Bayesian Networks (BNs).
Initially designed for game tree exploration, MCTS has been repurposed to address the challenge of learning BN structures.
To keep rollouts tractable in large networks, we adopt a semi-randomized approach that incorporates variable orders obtained from other search algorithms such as Greedy Equivalent Search (GES), PC, or Hill Climbing (HC) itself.
- Score: 0.6574756524825567
- Abstract: This article presents MCTS-BN, an adaptation of the Monte Carlo Tree Search (MCTS) algorithm for the structural learning of Bayesian Networks (BNs). Initially designed for game tree exploration, MCTS has been repurposed to address the challenge of learning BN structures by exploring the search space of potential ancestral orders in Bayesian Networks. Then, it employs Hill Climbing (HC) to derive a Bayesian Network structure from each order. In large BNs, where the search space for variable orders becomes vast, using completely random orders during the rollout phase is often unreliable and impractical. We adopt a semi-randomized approach to address this challenge by incorporating variable orders obtained from other heuristic search algorithms such as Greedy Equivalent Search (GES), PC, or HC itself. This hybrid strategy mitigates the computational burden and enhances the reliability of the rollout process. Experimental evaluations demonstrate the effectiveness of MCTS-BN in improving BNs generated by traditional structural learning algorithms, exhibiting robust performance even when base algorithm orders are suboptimal and surpassing the gold standard when provided with favorable orders.
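The abstract describes a concrete loop: MCTS explores the space of ancestral orders, rollouts complete a partial order semi-randomly around a seed order taken from a base algorithm (GES, PC, or HC), and Hill Climbing scores the network learned from each completed order. Below is a minimal, self-contained sketch of that loop. It is an illustration under stated assumptions, not the paper's implementation: the hc_score stub, the 0.2 swap rate, the UCB1 constant, and the eight-variable toy problem are all placeholders.

```python
"""Minimal sketch of the MCTS-BN loop described above. Assumptions
(not from the paper): a node holds a prefix of an ancestral order,
rollouts complete the prefix by lightly perturbing a seed order from a
base algorithm, and the Hill Climbing scorer is a toy stub."""
import math
import random

VARS = list(range(8))                  # toy problem with 8 variables
SEED_ORDER = list(range(8))            # stand-in for a GES/PC/HC order
TRUE_ORDER = [1, 0, 3, 2, 5, 4, 7, 6]  # hidden target for the toy scorer


def hc_score(order):
    """Stand-in for running Hill Climbing restricted to `order` and
    returning the learned network's score (e.g. BDeu/BIC). The toy
    version just counts positions that agree with a hidden order."""
    return float(sum(a == b for a, b in zip(order, TRUE_ORDER)))


def semi_random_completion(prefix):
    """Semi-randomized rollout: finish the order by following the seed
    order with occasional adjacent swaps (the 0.2 rate is assumed)."""
    rest = [v for v in SEED_ORDER if v not in prefix]
    for i in range(len(rest) - 1):
        if random.random() < 0.2:
            rest[i], rest[i + 1] = rest[i + 1], rest[i]
    return prefix + rest


class Node:
    def __init__(self, prefix):
        self.prefix = prefix
        self.children = {}   # next variable -> child node
        self.visits = 0
        self.total = 0.0

    def ucb1(self, child, c=1.4):
        """Standard UCB1 index for selecting among visited children."""
        return (child.total / child.visits
                + c * math.sqrt(2 * math.log(self.visits) / child.visits))


def mcts_bn(iterations=500):
    root = Node([])
    best = (float("-inf"), None)
    for _ in range(iterations):
        node, path = root, [root]
        # Selection, then expansion of one untried variable.
        while len(node.prefix) < len(VARS):
            untried = [v for v in VARS
                       if v not in node.prefix and v not in node.children]
            if untried:
                v = random.choice(untried)
                child = Node(node.prefix + [v])
                node.children[v] = child
                path.append(child)
                break
            node = max(node.children.values(),
                       key=lambda ch: node.ucb1(ch))
            path.append(node)
        # Rollout: complete the order and let HC score it.
        order = semi_random_completion(path[-1].prefix)
        reward = hc_score(order)
        best = max(best, (reward, order))
        # Backpropagation along the visited path.
        for n in path:
            n.visits += 1
            n.total += reward
    return best


if __name__ == "__main__":
    score, order = mcts_bn()
    print(f"best score {score:.0f} for order {order}")
```

In a real implementation, hc_score would fit a network with HC constrained to the given order on actual data and return a decomposable score such as BDeu or BIC, and the seed order would come from a genuine GES, PC, or HC run.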
Related papers
- RethinkMCTS: Refining Erroneous Thoughts in Monte Carlo Tree Search for Code Generation [65.5353313491402]
We introduce RethinkMCTS, which employs the Monte Carlo Tree Search (MCTS) algorithm to conduct thought-level searches before generating code.
We construct verbal feedback from fine-grained code execution feedback to refine erroneous thoughts during the search.
We demonstrate that RethinkMCTS outperforms previous search-based and feedback-based code generation baselines.
arXiv Detail & Related papers (2024-09-15T02:07:28Z) - LiteSearch: Efficacious Tree Search for LLM [70.29796112457662]
This study introduces a novel guided tree search algorithm with dynamic node selection and node-level exploration budget.
Experiments conducted on the GSM8K and TabMWP datasets demonstrate that our approach enjoys significantly lower computational costs compared to baseline methods.
arXiv Detail & Related papers (2024-06-29T05:14:04Z) - Scale-Adaptive Balancing of Exploration and Exploitation in Classical Planning [1.6574413179773757]
We show that a more detailed theoretical understanding of MAB literature helps improve existing planning algorithms.
We propose GreedyUCT-Normal, an MCTS/THTS algorithm with the UCB1-Normal bandit for agile classical planning; a sketch of UCB1-Normal selection appears after this list.
arXiv Detail & Related papers (2023-05-16T22:46:37Z) - Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise, and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
arXiv Detail & Related papers (2022-10-23T18:37:22Z) - AutoBERT-Zero: Evolving BERT Backbone from Scratch [94.89102524181986]
We propose an Operation-Priority Neural Architecture Search (OP-NAS) algorithm to automatically search for promising hybrid backbone architectures.
We optimize both the search algorithm and evaluation of candidate models to boost the efficiency of our proposed OP-NAS.
Experiments show that the searched architecture (named AutoBERT-Zero) significantly outperforms BERT and its variants of different model capacities in various downstream tasks.
arXiv Detail & Related papers (2021-07-15T16:46:01Z) - Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search [70.57382341642418]
Weight sharing has become a de facto standard in neural architecture search because it enables the search to be done on commodity hardware.
Recent works have empirically shown a ranking disorder between the performance of stand-alone architectures and that of the corresponding shared-weight networks.
We propose a regularization term that aims to maximize the correlation between the performance rankings of the shared-weight network and that of the standalone architectures.
arXiv Detail & Related papers (2021-04-12T09:32:33Z) - Any Part of Bayesian Network Structure Learning [17.46459748913491]
We study an interesting and challenging problem, learning any part of a Bayesian network (BN) structure.
We first present a new concept of Expand-Backtracking to explain why local BN structure learning methods have the false edge orientation problem.
We then propose APSL, an efficient and accurate Any Part of BN Structure Learning algorithm.
arXiv Detail & Related papers (2021-03-23T10:03:31Z) - Partitioned hybrid learning of Bayesian network structures [6.683105697884667]
We develop a novel hybrid method for Bayesian network structure learning called partitioned hybrid greedy search (pHGS).
Our empirical results demonstrate the superior performance of pHGS against many state-of-the-art structure learning algorithms.
arXiv Detail & Related papers (2021-03-22T21:34:52Z) - Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval (PR) is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z) - Differentiable TAN Structure Learning for Bayesian Network Classifiers [19.30562170076368]
We consider learning of tree-augmented naive Bayes (TAN) structures for Bayesian network classifiers with discrete input features.
Instead of performing an optimization over the space of possible graph structures, the proposed method learns a distribution over graph structures.
Our method consistently outperforms random TAN structures and Chow-Liu TAN structures.
arXiv Detail & Related papers (2020-08-21T16:22:47Z) - Turbocharging Treewidth-Bounded Bayesian Network Structure Learning [26.575053800551633]
We present a new approach for learning the structure of a treewidth-bounded Bayesian Network (BN).
The key to our approach is applying an exact method (based on MaxSAT) locally to improve the score of a heuristically computed BN.
Our experiments show that our method improves the score of BNs provided by state-of-the-art methods, often significantly.
arXiv Detail & Related papers (2020-06-24T16:13:10Z)
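The GreedyUCT-Normal entry above pairs MCTS/THTS with the UCB1-Normal bandit; as a reference point, here is a minimal sketch of UCB1-Normal arm selection following Auer et al. (2002). The Gaussian toy rewards and the round bookkeeping are illustrative assumptions, not details from that planning paper.

```python
# Sketch of UCB1-Normal arm selection (Auer et al., 2002).
import math
import random


def ucb1_normal_pick(counts, sums, sumsq, n):
    """counts/sums/sumsq: per-arm pull counts, reward sums, and sums of
    squared rewards; n: current round (total pulls so far + 1)."""
    # Forced exploration: play any arm pulled fewer than ceil(8 ln n) times.
    for j, c in enumerate(counts):
        if c == 0 or c < math.ceil(8 * math.log(n)):
            return j

    def index(j):
        mean = sums[j] / counts[j]
        # Sample-variance term; clamp at 0 to guard against float rounding.
        var = max((sumsq[j] - counts[j] * mean ** 2) / (counts[j] - 1), 0.0)
        return mean + math.sqrt(16 * var * math.log(n - 1) / counts[j])

    return max(range(len(counts)), key=index)


if __name__ == "__main__":
    means = [0.2, 0.5, 0.8]                 # hypothetical arm means
    counts, sums, sumsq = [0, 0, 0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    for n in range(1, 2001):
        j = ucb1_normal_pick(counts, sums, sumsq, n)
        r = random.gauss(means[j], 0.1)     # Gaussian rewards fit the model
        counts[j] += 1
        sums[j] += r
        sumsq[j] += r * r
    print("pulls per arm:", counts)         # the best arm should dominate
```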