Structural Refinement of Bayesian Networks for Efficient Model Parameterisation
- URL: http://arxiv.org/abs/2510.00334v1
- Date: Tue, 30 Sep 2025 22:39:48 GMT
- Title: Structural Refinement of Bayesian Networks for Efficient Model Parameterisation
- Authors: Kieran Drury, Martine J. Barons, Jim Q. Smith
- Abstract summary: We provide a review of a variety of structural refinement methods that can be used in practice to efficiently approximate a conditional probability table. We evaluate each method through a worked example on a Bayesian network model of cardiovascular risk assessment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many Bayesian network modelling applications suffer from the issue of data scarcity. Hence the use of expert judgement often becomes necessary to determine the parameters of the conditional probability tables (CPTs) throughout the network. There are usually a prohibitively large number of these parameters to determine, even when complementing any available data with expert judgements. To address this challenge, a number of CPT approximation methods have been developed that reduce the quantity and complexity of parameters needing to be determined to fully parameterise a Bayesian network. This paper provides a review of a variety of structural refinement methods that can be used in practice to efficiently approximate a CPT within a Bayesian network. We not only introduce and discuss the intrinsic properties and requirements of each method, but we evaluate each method through a worked example on a Bayesian network model of cardiovascular risk assessment. We conclude with practical guidance to help Bayesian network practitioners choose an alternative approach when direct parameterisation of a CPT is infeasible.
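To make the parameter-reduction idea concrete, the sketch below shows one widely used CPT approximation, the noisy-OR model, which collapses a full table over all parent configurations into one parameter per parent plus a leak term. It is illustrative only and is not drawn from the paper's worked cardiovascular example; the risk-factor names and probabilities are assumptions.

```python
# Illustrative sketch (not from the paper): the noisy-OR approximation
# replaces a full CPT over n binary parents (2**n rows) with one
# "link" probability per parent plus a leak term, so only n + 1
# parameters need eliciting.
from itertools import product

def noisy_or_cpt(link_probs, leak=0.0):
    """Build P(child=1 | parent configuration) from per-parent link
    probabilities p_i = P(child=1 | only parent i active)."""
    n = len(link_probs)
    cpt = {}
    for config in product([0, 1], repeat=n):
        # The child stays off only if the leak and every active cause
        # are all inhibited.
        p_off = 1.0 - leak
        for active, p in zip(config, link_probs):
            if active:
                p_off *= 1.0 - p
        cpt[config] = 1.0 - p_off
    return cpt

# Hypothetical risk factors for a cardiovascular-style example
# (smoking, hypertension, diabetes); all probabilities are made up.
links = [0.30, 0.45, 0.25]
cpt = noisy_or_cpt(links, leak=0.05)
print(f"full CPT rows: {2 ** len(links)}, elicited parameters: {len(links) + 1}")
print(f"P(event | all three risk factors) = {cpt[(1, 1, 1)]:.3f}")
```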
Related papers
- Cross-Learning from Scarce Data via Multi-Task Constrained Optimization [70.90607489166648]
This paper introduces a multi-task cross-learning framework to overcome data scarcity.
We formulate this joint estimation as a constrained optimization problem.
We show the efficiency of our cross-learning method in applications with real data, including image classification and propagation of infectious diseases.
arXiv Detail & Related papers (2025-11-17T18:35:59Z) - Learning Discrete Bayesian Networks with Hierarchical Dirichlet Shrinkage [52.914168158222765]
We detail a comprehensive Bayesian framework for learning DBNs.
We give a novel Markov chain Monte Carlo (MCMC) algorithm utilizing parallel Langevin proposals to generate exact posterior samples.
We apply our methodology to uncover prognostic network structure from primary breast cancer samples.
arXiv Detail & Related papers (2025-09-16T17:24:35Z) - MEPT: Mixture of Expert Prompt Tuning as a Manifold Mapper [75.6582687942241]
We propose Mixture of Expert Prompt Tuning (MEPT) as an effective and efficient manifold-mapping framework.
MEPT integrates multiple prompt experts to adaptively learn diverse and non-stationary data distributions.
Empirical evaluations demonstrate that MEPT outperforms several state-of-the-art parameter efficient baselines on SuperGLUE.
arXiv Detail & Related papers (2025-08-31T21:19:25Z) - BAPE: Learning an Explicit Bayes Classifier for Long-tailed Visual Recognition [78.70453964041718]
Current deep learning algorithms usually solve for the optimal classifier by implicitly estimating the posterior probabilities.
This simple methodology has been proven effective for meticulously balanced academic benchmark datasets.
However, it is not applicable to the long-tailed data distributions in the real world.
This paper presents a novel approach (BAPE) that provides a more precise theoretical estimation of the data distributions.
arXiv Detail & Related papers (2025-06-29T15:12:50Z) - Sparse Bayesian Networks: Efficient Uncertainty Quantification in Medical Image Analysis [4.898968729173388]
We introduce a training procedure for a sparse (partial) Bayesian network.
We exploit the advantages of both representations to achieve high task-specific performance and minimize predictive uncertainty.
Our approach achieves competitive performance and predictive uncertainty estimation by reducing Bayesian parameters by over 95%.
arXiv Detail & Related papers (2024-06-11T05:12:00Z) - Multiple Testing of Linear Forms for Noisy Matrix Completion [13.269597888405759]
We develop a general approach that overcomes these difficulties by introducing new statistics for individual tests.
We show that valid FDR control can be achieved with guaranteed power under nearly optimal sample size requirements.
arXiv Detail & Related papers (2023-12-01T02:53:20Z) - Provably Efficient Bayesian Optimization with Unknown Gaussian Process Hyperparameter Estimation [44.53678257757108]
We propose a new BO method that can sub-linearly converge to the objective function's global optimum.
Our method uses a multi-armed bandit technique (EXP3) to add random data points to the BO process.
We demonstrate empirically that our method outperforms existing approaches on various synthetic and real-world problems.
arXiv Detail & Related papers (2023-06-12T03:35:45Z) - On Pitfalls of Test-Time Adaptation [82.8392232222119]
Test-Time Adaptation (TTA) has emerged as a promising approach for tackling the robustness challenge under distribution shifts.
We present TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art algorithms, a diverse array of distribution shifts, and two evaluation protocols.
arXiv Detail & Related papers (2023-06-06T09:35:29Z) - Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose in this paper to approximate the joint posterior over the structure of a Bayesian Network and the parameters of its conditional distributions.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z) - Eryn : A multi-purpose sampler for Bayesian inference [0.0]
Eryn is a user-friendly and multipurpose toolbox for Bayesian inference.
In this paper, we describe this sampler package and illustrate its capabilities on a variety of use cases.
arXiv Detail & Related papers (2023-03-03T12:45:03Z) - Systematic Analysis for Pretrained Language Model Priming for Parameter-Efficient Fine-tuning [45.99877631719761]
We propose a general PE priming framework to enhance and explore the few-shot adaptation and generalization ability of PE methods.
We conduct experiments on a few-shot cross-domain benchmark containing 160 diverse NLP tasks.
arXiv Detail & Related papers (2022-12-02T08:56:53Z) - You Only Derive Once (YODO): Automatic Differentiation for Efficient Sensitivity Analysis in Bayesian Networks [5.33024001730262]
Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network.
We propose to use automatic differentiation combined with exact inference to obtain all sensitivity values in a single pass (a minimal sketch of this idea appears after the list below).
An implementation of the methods using the popular machine learning library PyTorch is freely available.
arXiv Detail & Related papers (2022-06-17T11:11:19Z) - Fine-Tuning the Odds in Bayesian Networks [0.0]
This paper proposes various new analysis techniques for Bayesian networks in which conditional probability tables (CPTs) may contain symbolic variables.
The key idea is to exploit scalable and powerful techniques for synthesis problems in parametric Markov chains.
arXiv Detail & Related papers (2021-05-29T20:41:56Z)
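As a companion to the YODO entry above, the sketch below illustrates the general idea of obtaining every sensitivity value of a query in a single backward pass by differentiating exact inference with respect to the CPT entries. The two-node network, its probabilities, and the query are assumptions chosen for illustration; this is not the authors' released PyTorch implementation.

```python
# Illustrative sketch (assumed example, not the YODO code): differentiate
# an exact-inference query through its CPT parameters so that one
# backward pass yields every sensitivity value at once.
import torch

# Tiny network A -> B with made-up CPT entries.
p_a = torch.tensor([0.7, 0.3], requires_grad=True)           # P(A)
p_b_given_a = torch.tensor([[0.9, 0.1],                       # P(B | A=0)
                            [0.2, 0.8]], requires_grad=True)  # P(B | A=1)

# Exact inference for the query P(B = 1): marginalise A out.
query = (p_a.unsqueeze(1) * p_b_given_a).sum(dim=0)[1]

# A single backward pass gives d(query)/d(theta) for every CPT entry.
# Note this simple sketch treats each entry as a free parameter and
# ignores the constraint that CPT rows must sum to one.
query.backward()
print("P(B=1) =", query.item())
print("sensitivities w.r.t. P(A):", p_a.grad)
print("sensitivities w.r.t. P(B|A):", p_b_given_a.grad)
```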