Finding an $\epsilon$-close Variation of Parameters in Bayesian Networks
- URL: http://arxiv.org/abs/2305.10051v1
- Date: Wed, 17 May 2023 08:46:53 GMT
- Title: Finding an $\epsilon$-close Variation of Parameters in Bayesian Networks
- Authors: Bahare Salmani and Joost-Pieter Katoen
- Abstract summary: We find a minimal $\epsilon$-close amendment of probability entries in a given set of conditional probability tables.
Based on the state-of-the-art "region verification" techniques for parametric Markov chains, we propose an algorithm whose capabilities go beyond any existing techniques.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the $\epsilon$-close parameter tuning problem for
Bayesian Networks (BNs): find a minimal $\epsilon$-close amendment of
probability entries in a given set of (rows in) conditional probability tables
that make a given quantitative constraint on the BN valid. Based on the
state-of-the-art "region verification" techniques for parametric Markov chains,
we propose an algorithm whose capabilities go beyond any existing techniques.
Our experiments show that $\epsilon$-close tuning of large BN benchmarks with
up to 8 parameters is feasible. In particular, by allowing (i) varied
parameters in multiple CPTs and (ii) inter-CPT parameter dependencies, we treat
subclasses of parametric BNs that have received scant attention so far.
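The tuning problem above can be illustrated on a toy example. The following is a minimal sketch, not the paper's region-verification algorithm: a two-node BN X -> Y where we bisect for the smallest $\epsilon$ such that nudging each CPT entry by at most $\epsilon$ makes the constraint P(Y=1) >= 0.35 hold. All numbers and names are illustrative assumptions.

```python
# Hedged sketch of epsilon-close CPT tuning on a toy BN X -> Y.
# Not the paper's method; the network, entries, and constraint are assumptions.

P_X1 = 0.3                      # P(X=1)
CPT = {"p1": 0.4, "p0": 0.2}    # P(Y=1|X=1), P(Y=1|X=0)
THRESHOLD = 0.35

def p_y1(p1, p0):
    """P(Y=1) by marginalizing over X."""
    return P_X1 * p1 + (1 - P_X1) * p0

def feasible(eps):
    """P(Y=1) is monotone increasing in both entries, so it suffices
    to push each entry up by eps (clamped to [0, 1])."""
    p1 = min(1.0, CPT["p1"] + eps)
    p0 = min(1.0, CPT["p0"] + eps)
    return p_y1(p1, p0) >= THRESHOLD

def min_epsilon(tol=1e-9):
    """Bisect for the smallest feasible epsilon in [0, 1]."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid
    return hi

print(round(min_epsilon(), 6))  # -> 0.09
```

Here P(Y=1) = 0.26 at the original entries, and a uniform shift of $\epsilon$ raises it by exactly $\epsilon$, so the minimal amendment is 0.09; bisection recovers this without using the closed form.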
Related papers
- A Probabilistic Basis for Low-Rank Matrix Learning [2.1485350418225244]
We study the distribution with density $f(X) \propto e^{-\lambda \Vert X \Vert_*}$, finding many of its fundamental attributes to be analytically tractable via differential geometry. We use these facts to design an improved MCMC algorithm for low-rank Bayesian inference, as well as to learn the penalty parameter $\lambda$.
arXiv Detail & Related papers (2025-10-06T23:26:56Z) - Parameter-free Algorithms for the Stochastically Extended Adversarial Model [59.81852138768642]
Existing approaches for the Stochastically Extended Adversarial (SEA) model require prior knowledge of problem-specific parameters, such as the diameter of the domain $D$ and the Lipschitz constant of the loss functions $G$. We develop parameter-free methods by leveraging the Optimistic Online Newton Step (OONS) algorithm to eliminate the need for these parameters.
arXiv Detail & Related papers (2025-10-06T10:53:37Z) - Structural Refinement of Bayesian Networks for Efficient Model Parameterisation [0.0]
We provide a review of a variety of structural refinement methods that can be used in practice to efficiently approximate a conditional probability table. We evaluate each method through a worked example on a Bayesian network model of cardiovascular risk assessment.
arXiv Detail & Related papers (2025-09-30T22:39:48Z) - Beyond likelihood ratio bias: Nested multi-time-scale stochastic approximation for likelihood-free parameter estimation [49.78792404811239]
We study inference in simulation-based models where the analytical form of the likelihood is unknown. We use a ratio-free nested multi-time-scale stochastic approximation (SA) method that simultaneously tracks the score and drives the parameter update. We show that our algorithm can eliminate the original bias $O\big(\sqrt{\frac{1}{N}}\big)$ and accelerate the convergence rate from $O\big(\beta_k + \sqrt{\frac{\alpha_k}{N}}\big)$.
arXiv Detail & Related papers (2024-11-20T02:46:15Z) - Kernelized Normalizing Constant Estimation: Bridging Bayesian Quadrature
and Bayesian Optimization [51.533164528799084]
We show that to estimate the normalizing constant within a small relative error, the level of difficulty depends on the value of $\lambda$.
We find that this pattern holds true even when the function evaluations are noisy.
arXiv Detail & Related papers (2024-01-11T07:45:09Z) - Parametric Constraints for Bayesian Knowledge Tracing from First
Principles [0.276240219662896]
This paper takes a "from first principles" approach to deriving constraints that can be imposed on the BKT parameter space.
The paper further introduces a novel algorithm for estimating BKT parameters subject to the newly defined constraints.
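A constrained BKT parameter space can be sketched as a simple validity check. The concrete conditions below (all probabilities in [0, 1], and guess + slip < 1, so that knowing the skill actually helps) are common identifiability conditions for BKT, not necessarily the exact constraints derived in this paper.

```python
# Hedged sketch: a plausibility check for BKT parameters.
# The specific constraints are assumptions, not taken from the paper.

def bkt_params_valid(p_init, p_transit, p_guess, p_slip):
    """Return True if the BKT parameter tuple passes basic sanity constraints."""
    probs = (p_init, p_transit, p_guess, p_slip)
    if not all(0.0 <= p <= 1.0 for p in probs):
        return False
    # A known skill should raise the chance of a correct answer:
    # P(correct | known) = 1 - slip  must exceed  P(correct | unknown) = guess
    return p_guess + p_slip < 1.0

print(bkt_params_valid(0.4, 0.1, 0.2, 0.1))  # -> True
print(bkt_params_valid(0.4, 0.1, 0.6, 0.5))  # -> False (degenerate model)
```

An estimator subject to such constraints would reject or project any candidate parameter tuple failing this check.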
arXiv Detail & Related papers (2023-12-23T03:58:41Z) - Online non-parametric likelihood-ratio estimation by Pearson-divergence
functional minimization [55.98760097296213]
We introduce a new framework for online non-parametric LRE (OLRE) for the setting where pairs of i.i.d. observations $(x_t \sim p, x'_t \sim q)$ are observed over time.
We provide theoretical guarantees for the performance of the OLRE method along with empirical validation in synthetic experiments.
arXiv Detail & Related papers (2023-11-03T13:20:11Z) - Approximate inference of marginals using the IBIA framework [0.0]
Exact inference of marginals in probabilistic graphical models (PGM) is known to be intractable.
We propose a new algorithm for marginal inference that is based on the incremental build-infer-approximate (IBIA) paradigm.
Our method gives either better or comparable accuracy than existing variational and sampling based methods, with smaller runtimes.
arXiv Detail & Related papers (2023-06-01T04:24:21Z) - Patch-aware Batch Normalization for Improving Cross-domain Robustness [55.06956781674986]
Cross-domain tasks present a challenge in which the model's performance will degrade when the training set and the test set follow different distributions.
We propose a novel method called patch-aware batch normalization (PBN).
By exploiting the differences between local patches of an image, our proposed PBN can effectively enhance the robustness of the model's parameters.
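The patch-aware idea can be sketched in plain Python: instead of normalizing a feature map with one global mean and variance, split it into local patches and normalize each patch with its own statistics. The patch size and the toy 4x4 map are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of per-patch normalization (illustrative only).
from statistics import mean, pstdev

def normalize_patchwise(fmap, patch=2, eps=1e-5):
    """Normalize each non-overlapping patch of fmap with its own mean/std."""
    h, w = len(fmap), len(fmap[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            vals = [fmap[r][c] for r in range(i, i + patch)
                               for c in range(j, j + patch)]
            m, s = mean(vals), pstdev(vals)
            for r in range(i, i + patch):
                for c in range(j, j + patch):
                    out[r][c] = (fmap[r][c] - m) / (s + eps)
    return out

fmap = [[1.0, 2.0, 10.0, 12.0],
        [3.0, 4.0, 14.0, 16.0],
        [0.0, 0.0,  5.0,  5.0],
        [0.0, 0.0,  5.0,  5.0]]
out = normalize_patchwise(fmap)
# each 2x2 patch of `out` now has zero mean
```

Each patch is centered independently, so local distribution shifts (e.g. between bright and dark regions) do not contaminate each other's statistics.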
arXiv Detail & Related papers (2023-04-06T03:25:42Z) - Bayesian Kernelized Tensor Factorization as Surrogate for Bayesian
Optimization [13.896697187967545]
Bayesian optimization (BO) primarily uses Gaussian processes (GP) as the key surrogate model.
In this paper, we propose to use Bayesian Kernelized Tensor Factorization (BKTF) as a new surrogate model for BO in a $D$-dimensional product space.
BKTF offers a flexible and highly effective approach for characterizing complex functions with uncertainty quantification.
arXiv Detail & Related papers (2023-02-28T12:00:21Z) - Sample-Then-Optimize Batch Neural Thompson Sampling [50.800944138278474]
We introduce two algorithms for black-box optimization based on the Thompson sampling (TS) policy.
To choose an input query, we only need to train an NN and then choose the query by maximizing the trained NN.
Our algorithms sidestep the need to invert the large parameter matrix yet still preserve the validity of the TS policy.
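The sample-then-optimize loop can be sketched end to end: fit a small network to the queries observed so far, then choose the next query by maximizing the trained network. Everything below is an assumption for illustration (a tiny pure-Python net, a toy 1-D objective, a grid maximizer); it shows the loop's shape, not the paper's TS algorithm.

```python
# Hedged sketch: train a tiny NN on observed data, then pick the next
# query by maximizing the trained NN over a grid. Illustrative only.
import math
import random

random.seed(0)

def f(x):
    """Toy black-box objective (an assumption)."""
    return -(x - 0.7) ** 2

xs = [0.0, 0.25, 0.5, 1.0]        # queries observed so far
ys = [f(x) for x in xs]

H = 8                             # one hidden layer, tanh activation
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [math.tanh(w1[i] * x + b1[i]) for i in range(H)]
    return sum(w2[i] * hidden[i] for i in range(H)) + b2, hidden

# fit the net to the observations with plain gradient descent
lr = 0.05
for _ in range(3000):
    for x, y in zip(xs, ys):
        pred, hidden = forward(x)
        err = pred - y
        for i in range(H):
            grad_h = err * w2[i] * (1 - hidden[i] ** 2)
            w2[i] -= lr * err * hidden[i]
            w1[i] -= lr * grad_h * x
            b1[i] -= lr * grad_h
        b2 -= lr * err

# "optimize": the next query maximizes the trained net on a grid
grid = [i / 200 for i in range(201)]
next_x = max(grid, key=lambda x: forward(x)[0])
```

The key point the abstract makes survives even in this toy: the acquisition step is an ordinary maximization of a trained network, with no matrix inversion involved.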
arXiv Detail & Related papers (2022-10-13T09:01:58Z) - Neural Inverse Kinematics [72.85330210991508]
Inverse kinematics (IK) methods recover the parameters of the joints, given the desired position of selected elements in the kinematic chain.
We propose a neural IK method that employs the hierarchical structure of the problem to sequentially sample valid joint angles conditioned on the desired position.
arXiv Detail & Related papers (2022-05-22T14:44:07Z) - Fine-Tuning the Odds in Bayesian Networks [0.0]
This paper proposes various new analysis techniques for Bayes networks in which conditional probability tables (CPTs) may contain symbolic variables.
The key idea is to exploit scalable and powerful techniques for synthesis problems in parametric Markov chains.
arXiv Detail & Related papers (2021-05-29T20:41:56Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{\dots}})$
arXiv Detail & Related papers (2021-05-10T13:07:44Z) - On Batch Normalisation for Approximate Bayesian Inference [102.94525205971873]
We show that batch-normalisation does not affect the optimum of the evidence lower bound (ELBO).
We also study the Monte Carlo Batch Normalisation (MCBN) algorithm, proposed as an approximate inference technique parallel to MC Dropout.
arXiv Detail & Related papers (2020-12-24T12:40:11Z) - Bounded Regret for Finitely Parameterized Multi-Armed Bandits [3.8073142980733]
We propose an algorithm that is simple and easy to implement.
We show that the FP-UCB algorithm achieves a logarithmic regret.
We also validate the superior performance of the FP-UCB algorithm through extensive numerical simulations.
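The baseline that FP-UCB builds on can be sketched as the classic UCB1 index rule. The toy 3-armed bandit below uses deterministic rewards for simplicity; it illustrates only the standard UCB idea, not FP-UCB's use of the finite parameterization (which is what yields the bounded, rather than logarithmic, regret). All arm means are assumptions.

```python
# Hedged sketch: classic UCB1 on a toy 3-armed bandit. Not FP-UCB itself.
import math

means = [0.2, 0.5, 0.8]           # toy arm means; rewards are deterministic here
counts = [0, 0, 0]
totals = [0.0, 0.0, 0.0]

def ucb_index(arm, t):
    """Empirical mean plus exploration bonus (UCB1 index)."""
    if counts[arm] == 0:
        return float("inf")        # force one initial pull per arm
    return totals[arm] / counts[arm] + math.sqrt(2 * math.log(t) / counts[arm])

for t in range(1, 2001):
    arm = max(range(3), key=lambda a: ucb_index(a, t))
    counts[arm] += 1
    totals[arm] += means[arm]      # pull the arm, observe reward
print(counts.index(max(counts)))   # -> 2  (the best arm dominates)
```

Suboptimal arms are pulled only while their shrinking bonus can still cover the gap to the best arm, which is what drives the logarithmic regret of the baseline.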
arXiv Detail & Related papers (2020-03-03T04:37:07Z)