Parallel Sampling for Efficient High-dimensional Bayesian Network
Structure Learning
- URL: http://arxiv.org/abs/2202.09691v1
- Date: Sat, 19 Feb 2022 22:35:59 GMT
- Title: Parallel Sampling for Efficient High-dimensional Bayesian Network
Structure Learning
- Authors: Zhigao Guo, Anthony C. Constantinou
- Abstract summary: This paper describes an approximate algorithm that performs parallel sampling on Candidate Parent Sets (CPSs).
The modified algorithm, which we call Parallel Sampling MINOBS (PS-MINOBS), constructs the graph by sampling CPSs for each variable.
- Score: 6.85316573653194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Score-based algorithms that learn the structure of Bayesian networks can be
used for both exact and approximate solutions. While approximate learning
scales better with the number of variables, it can be computationally expensive
in the presence of high-dimensional data. This paper describes an approximate
algorithm that performs parallel sampling on Candidate Parent Sets (CPSs), and
can be viewed as an extension of MINOBS, a state-of-the-art algorithm
for structure learning from high-dimensional data. The modified algorithm,
which we call Parallel Sampling MINOBS (PS-MINOBS), constructs the graph by
sampling CPSs for each variable. Sampling is performed in parallel under the
assumption that, for each variable, the distribution of CPSs is half-normal
when ordered by Bayesian score. Sampling from a half-normal distribution
ensures that the CPSs sampled are likely to be those that yield the highest scores.
Empirical results show that, in most cases, the proposed algorithm discovers
higher score structures than MINOBS when both algorithms are restricted to the
same runtime limit.
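The half-normal sampling idea can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: it assumes each variable's CPSs are already scored and sorted in descending order of Bayesian score, and the scale heuristic (`sigma_frac`), the seeding scheme, and all function names are assumptions made for this sketch.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def sample_cps_indices(num_cps, num_samples, sigma_frac=0.25, seed=None):
    """Draw indices into a score-ordered CPS list from a half-normal
    distribution, so indices near 0 (the highest-scoring CPSs) are the
    most likely. The sigma_frac heuristic is an illustrative assumption,
    not a value taken from the paper."""
    rng = np.random.default_rng(seed)
    sigma = max(1.0, sigma_frac * num_cps)
    # |N(0, sigma^2)| is half-normal; truncate to valid index range.
    idx = np.abs(rng.normal(0.0, sigma, size=num_samples)).astype(int)
    return np.clip(idx, 0, num_cps - 1)

def sample_for_variable(args):
    var, scored_cpss, num_samples, seed = args
    # scored_cpss: [(parent_set, bayesian_score), ...], sorted descending.
    idx = sample_cps_indices(len(scored_cpss), num_samples, seed=seed)
    return var, [scored_cpss[i] for i in sorted(set(idx.tolist()))]

def parallel_sample_cpss(cps_by_var, num_samples=100):
    """Sample CPSs for every variable in parallel, one task per variable."""
    tasks = [(v, c, num_samples, hash(v) % 2**32)
             for v, c in cps_by_var.items()]
    with ProcessPoolExecutor() as pool:
        return dict(pool.map(sample_for_variable, tasks))
```

Biasing the draw towards index 0 is what makes the sampled CPSs concentrate on the high-scoring end of each variable's candidate list while still occasionally exploring lower-ranked sets.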
Related papers
- Quasi-random Multi-Sample Inference for Large Language Models [1.647759094903376]
Large language models (LLMs) are often equipped with multi-sample decoding strategies.
Traditional text generation methods, such as beam search and sampling-based techniques, have notable limitations.
This study explores the potential of arithmetic sampling, contrasting it with ancestral sampling.
arXiv Detail & Related papers (2024-11-09T18:55:04Z)
- Scaling LLM Inference with Optimized Sample Compute Allocation [56.524278187351925]
We propose OSCA, an algorithm to find an optimal mix of different inference configurations.
Our experiments show that with our learned mixed allocation, we can achieve accuracy better than the best single configuration.
OSCA is also shown to be effective in agentic workflows beyond single-turn tasks, achieving better accuracy on SWE-Bench with 3x less compute than the default configuration.
arXiv Detail & Related papers (2024-10-29T19:17:55Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices that make existing inference expensive.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Calibrate and Debias Layer-wise Sampling for Graph Convolutional Networks [39.56471534442315]
This paper revisits layer-wise sampling for graph convolutional networks from a matrix approximation perspective.
We propose a new principle for constructing sampling probabilities and an efficient debiasing algorithm.
Improvements are demonstrated by extensive analyses of estimation variance and experiments on common benchmarks.
arXiv Detail & Related papers (2022-06-01T15:52:06Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved performance comparable to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- COPS: Controlled Pruning Before Training Starts [68.8204255655161]
State-of-the-art deep neural network (DNN) pruning techniques, applied one-shot before training starts, evaluate sparse architectures with the help of a single criterion, called a pruning score.
In this work we do not concentrate on a single pruning criterion, but provide a framework for combining arbitrary GSSs to create more powerful pruning strategies.
arXiv Detail & Related papers (2021-07-27T08:48:01Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Active Structure Learning of Bayesian Networks in an Observational Setting [21.376800678915558]
We study active structure learning of Bayesian networks in an observational setting.
We propose a new active learning algorithm that finds, with high probability, a structure with a score that is $\epsilon$-close to the optimal score.
We show that for a class of distributions that we term stable, a sample complexity reduction of up to a factor of $\widetilde{\Omega}(d^3)$ can be obtained, where $d$ is the number of network variables.
arXiv Detail & Related papers (2021-03-25T12:50:14Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- AMP Chain Graphs: Minimal Separators and Structure Learning Algorithms [2.3333090554192615]
We address the problem of finding a minimal separator in an Andersson-Madigan-Perlman chain graph (AMP CG).
We show that the PC-like algorithm (Peña, 2012) is order-dependent, in the sense that the output can depend on the order in which the variables are given.
We propose several modifications of the PC-like algorithm that remove part or all of this order-dependence.
arXiv Detail & Related papers (2020-02-24T18:14:14Z)
- Ensemble Slice Sampling: Parallel, black-box and gradient-free inference for correlated & multimodal distributions [0.0]
Slice Sampling has emerged as a powerful Markov Chain Monte Carlo algorithm that adapts to the characteristics of the target distribution with minimal hand-tuning, although its performance is sensitive to a user-specified initial length scale.
This paper introduces Ensemble Slice Sampling (ESS), a new class of algorithms that bypasses this difficulty by adaptively tuning the initial length scale.
These affine-invariant algorithms are trivial to construct, require no hand-tuning, and can easily be implemented in parallel computing environments.
arXiv Detail & Related papers (2020-02-14T19:00:12Z)
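As background for this last entry, the sketch below shows a textbook univariate slice-sampling update with stepping-out (Neal, 2003); it is not the ESS algorithm itself. The bracket width `w` is the initial length scale that Ensemble Slice Sampling tunes adaptively instead of asking the user to set it by hand; the function name and defaults are assumptions made for this sketch.

```python
import numpy as np

def slice_sample_step(log_p, x, w=1.0, rng=None):
    """One univariate slice-sampling update with stepping-out (Neal, 2003).
    log_p: log of an unnormalised target density; x: current state;
    w: initial bracket width (the hand-tuned length scale ESS replaces)."""
    rng = rng or np.random.default_rng()
    log_y = log_p(x) + np.log(rng.uniform())  # auxiliary slice height
    left = x - w * rng.uniform()              # randomly placed initial bracket
    right = left + w
    while log_p(left) > log_y:                # step out until both ends
        left -= w                             # lie outside the slice
    while log_p(right) > log_y:
        right += w
    while True:                               # sample with shrinkage
        x_new = rng.uniform(left, right)
        if log_p(x_new) > log_y:
            return x_new
        if x_new < x:                         # shrink the bracket towards x
            left = x_new
        else:
            right = x_new
```

For example, iterating `slice_sample_step` with `log_p = lambda z: -0.5 * z * z` draws approximate standard-normal samples; results are sensitive to the choice of `w`, which is exactly the tuning burden ESS removes.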
This list is automatically generated from the titles and abstracts of the papers in this site.