Global Sensitivity Analysis of Uncertain Parameters in Bayesian Networks
- URL: http://arxiv.org/abs/2406.05764v1
- Date: Sun, 9 Jun 2024 12:36:38 GMT
- Title: Global Sensitivity Analysis of Uncertain Parameters in Bayesian Networks
- Authors: Rafael Ballester-Ripoll, Manuele Leonelli
- Abstract summary: We propose to conduct global variance-based sensitivity analysis of $n$ parameters.
Our method works by encoding the uncertainties as $n$ additional variables of the network.
Finally, we apply the Sobol method to the resulting network to obtain $n$ global sensitivity indices.
- Score: 4.404496835736175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditionally, the sensitivity analysis of a Bayesian network studies the impact of individually modifying the entries of its conditional probability tables in a one-at-a-time (OAT) fashion. However, this approach fails to give a comprehensive account of each input's relevance, since simultaneous perturbations in two or more parameters often entail higher-order effects that cannot be captured by an OAT analysis. We propose to conduct global variance-based sensitivity analysis instead, whereby $n$ parameters are viewed as uncertain at once and their importance is assessed jointly. Our method works by encoding the uncertainties as $n$ additional variables of the network. To prevent the curse of dimensionality while adding these dimensions, we use low-rank tensor decomposition to break down the new potentials into smaller factors. Finally, we apply the Sobol method to the resulting network to obtain $n$ global sensitivity indices. Using a benchmark array of both expert-elicited and learned Bayesian networks, we demonstrate that the Sobol indices can significantly differ from the OAT indices, thus revealing the true influence of uncertain parameters and their interactions.
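To make the variance-based viewpoint concrete, below is a minimal, self-contained sketch of what a first-order Sobol index measures, using a toy two-node network and a Saltelli-style pick-freeze Monte Carlo estimator. The network, the uniform uncertainty model, and all names are illustrative assumptions; the paper itself encodes the uncertain parameters as extra network variables and evaluates the indices with exact inference over low-rank tensor factors.
```python
# Minimal sketch (illustrative, NOT the paper's tensor-network method):
# first-order Sobol indices of a query probability for a toy network
# A -> B whose three CPT entries are treated as uncertain inputs.
import numpy as np

rng = np.random.default_rng(0)

def query(theta):
    """P(B=1) for the toy network. Columns of theta are the CPT entries:
    theta[:, 0] = P(A=1), theta[:, 1] = P(B=1|A=0), theta[:, 2] = P(B=1|A=1)."""
    pa, pb0, pb1 = theta[:, 0], theta[:, 1], theta[:, 2]
    return (1 - pa) * pb0 + pa * pb1

# Assumed uncertainty model: each parameter uniform around a nominal value.
nominal = np.array([0.3, 0.2, 0.7])
width, d, N = 0.1, 3, 100_000

A = nominal + width * (rng.random((N, d)) - 0.5)
B = nominal + width * (rng.random((N, d)) - 0.5)
yA, yB = query(A), query(B)
var = np.var(np.concatenate([yA, yB]))

# Saltelli-style pick-freeze estimator of the first-order index S_i.
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]  # replace coordinate i with the second sample matrix
    S_i = np.mean(yB * (query(ABi) - yA)) / var
    print(f"S_{i} ~= {S_i:.3f}")
```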
Related papers
- A new paradigm for global sensitivity analysis [0.0]
Current theory of global sensitivity analysis is limited in scope; for instance, the analysis is restricted to the output's variance.
It is shown that these limitations can be overcome all at once by adopting a new paradigm.
arXiv Detail & Related papers (2024-09-10T07:20:51Z)
- The diameter of a stochastic matrix: A new measure for sensitivity analysis in Bayesian networks [1.2699007098398807]
We argue that robustness methods based on the familiar total variation distance provide simple and more valuable bounds on robustness to misspecification.
We introduce a novel measure of dependence in conditional probability tables, called the diameter, to derive such bounds (a hypothetical reading is sketched after this list).
arXiv Detail & Related papers (2024-07-05T17:22:12Z)
- Generalization Guarantees of Gradient Descent for Multi-Layer Neural Networks [55.86300309474023]
We conduct a comprehensive stability and generalization analysis of gradient descent (GD) for multi-layer NNs.
We derive the excess risk rate of $O(1/\sqrt{n})$ for GD algorithms in both two-layer and three-layer NNs.
arXiv Detail & Related papers (2023-05-26T12:51:38Z)
- The YODO algorithm: An efficient computational framework for sensitivity analysis in Bayesian networks [5.33024001730262]
Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network.
We propose an algorithm combining automatic differentiation and exact inference to efficiently calculate the sensitivity measures in a single pass.
Our method can be used for one-way and multi-way sensitivity analysis and the derivation of admissible regions.
arXiv Detail & Related papers (2023-02-01T10:47:31Z)
- You Only Derive Once (YODO): Automatic Differentiation for Efficient Sensitivity Analysis in Bayesian Networks [5.33024001730262]
Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network.
We propose to use automatic differentiation combined with exact inference to obtain all sensitivity values in a single pass.
An implementation of the methods using the popular machine learning library PyTorch is freely available; a toy sketch of the idea appears after this list.
arXiv Detail & Related papers (2022-06-17T11:11:19Z)
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks [23.502721524477444]
We present a synthetic example illustrating how this approach can lead to very poor but stable estimates.
We identify the culprit to be the log-likelihood loss, along with certain conditions that exacerbate the issue.
We present an alternative formulation, termed $\beta$-NLL, in which each data point's contribution to the loss is weighted by the $\beta$-exponentiated variance estimate (a minimal sketch appears after this list).
arXiv Detail & Related papers (2022-03-17T08:46:17Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We present a principled way to measure the uncertainty of a classifier's predictions based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution (see the kernel-estimator sketch after this list).
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramér bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Network Moments: Extensions and Sparse-Smooth Attacks [59.24080620535988]
We derive exact analytic expressions for the first and second moments of a small piecewise linear (PL) network (Affine, ReLU, Affine) subject to Gaussian input.
We show that the new variance expression can be efficiently approximated, leading to much tighter variance estimates (the first-moment computation is sketched after this list).
arXiv Detail & Related papers (2020-06-21T11:36:41Z)
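For the stochastic-matrix diameter entry above, one plausible reading of the measure (our assumption for illustration, not a definition quoted from the paper) is the largest total variation distance between any two rows of a conditional probability table; the hypothetical helper below computes exactly that.
```python
# Hypothetical reading of the "diameter" of a row-stochastic matrix:
# the largest total variation distance between any two of its rows.
import numpy as np
from itertools import combinations

def tv_distance(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.abs(p - q).sum()

def diameter(cpt):
    """Maximum pairwise TV distance over the rows of a CPT."""
    return max(tv_distance(cpt[i], cpt[j])
               for i, j in combinations(range(cpt.shape[0]), 2))

cpt = np.array([[0.9, 0.1],   # P(child | parent = 0)
                [0.4, 0.6],   # P(child | parent = 1)
                [0.2, 0.8]])  # P(child | parent = 2)
print(diameter(cpt))          # 0.7, attained by the first and last rows
```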
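The two YODO entries combine automatic differentiation with exact inference so that a single backward pass yields the sensitivity of a query to every parameter at once. The PyTorch fragment below is a toy illustration of that idea on a two-node network, not the authors' released implementation.
```python
# Toy illustration of the YODO idea: write the query probability as a
# differentiable function of the CPT entries; one backward pass then
# returns the sensitivity of the query to all parameters simultaneously.
import torch

p_a = torch.tensor(0.3, requires_grad=True)         # P(A=1)
p_b = torch.tensor([0.2, 0.7], requires_grad=True)  # P(B=1 | A=a), a = 0, 1

# Exact inference for the query P(B=1), expressed with differentiable ops.
query = (1 - p_a) * p_b[0] + p_a * p_b[1]
query.backward()  # one pass: gradients w.r.t. every parameter

print(query.item())       # P(B=1) = 0.35
print(p_a.grad.item())    # dP(B=1)/dP(A=1) = p_b[1] - p_b[0] = 0.5
print(p_b.grad.tolist())  # [1 - p_a, p_a] = [0.7, 0.3]
```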
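For the $\beta$-NLL entry, the description suggests weighting each data point's Gaussian negative log-likelihood by its variance estimate raised to the power $\beta$; the minimal PyTorch rendering below additionally assumes the weighting factor is taken with a stop-gradient, so it rescales contributions without being optimized itself.
```python
# Minimal beta-NLL-style loss sketch: each point's Gaussian NLL is
# weighted by its (detached) predicted variance raised to beta.
# beta = 0 recovers the ordinary NLL; beta = 1 weights points roughly
# like mean squared error. Shapes and names are illustrative.
import torch

def beta_nll_loss(mean, var, target, beta=0.5):
    nll = 0.5 * (torch.log(var) + (target - mean) ** 2 / var)
    weight = var.detach() ** beta  # stop-gradient on the weighting factor
    return (weight * nll).mean()

mean = torch.zeros(8, requires_grad=True)
var = torch.full((8,), 0.5, requires_grad=True)
target = torch.randn(8)
beta_nll_loss(mean, var, target).backward()
```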
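The NUQ entry builds on the Nadaraya-Watson estimate of the conditional label distribution. The bare-bones kernel estimator below shows only that building block, under an assumed Gaussian kernel; it is not the paper's uncertainty measure.
```python
# Bare-bones Nadaraya-Watson estimate of p(y = c | x): a kernel-weighted
# average of one-hot training labels. NUQ builds its uncertainty measure
# on top of such an estimate; this sketch stops at the estimate itself.
import numpy as np

def nw_label_distribution(x, X_train, y_train, n_classes, bandwidth=1.0):
    d2 = ((X_train - x) ** 2).sum(axis=1)   # squared distances to x
    w = np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel weights
    onehot = np.eye(n_classes)[y_train]     # (n_samples, n_classes)
    return w @ onehot / w.sum()             # kernel-weighted label mixture

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0, 0, 1])
print(nw_label_distribution(np.array([1.5]), X, y, n_classes=2))
```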
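Finally, the Network Moments entry derives analytic output moments of an (Affine, ReLU, Affine) network under Gaussian input. The first moment already follows from the classical mean of a rectified Gaussian, $E[\max(0,Z)] = m\,\Phi(m/s) + s\,\phi(m/s)$ for $Z \sim \mathcal{N}(m, s^2)$; the sketch below checks that identity against Monte Carlo (the paper's second-moment formulas are not reproduced here).
```python
# First moment of an Affine-ReLU-Affine network under Gaussian input,
# via the classical mean of a rectified Gaussian. Weights are random
# stand-ins; only the first moment is computed here.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
mu, Sigma = rng.standard_normal(3), np.eye(3)

# Pre-activation z = W1 x + b1 is Gaussian; the elementwise ReLU mean
# depends only on its marginal means m and standard deviations s.
m = W1 @ mu + b1
s = np.sqrt(np.diag(W1 @ Sigma @ W1.T))
relu_mean = m * norm.cdf(m / s) + s * norm.pdf(m / s)
analytic = W2 @ relu_mean + b2

# Monte Carlo cross-check: the two vectors should agree closely.
x = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = (np.maximum(0.0, x @ W1.T + b1) @ W2.T + b2).mean(axis=0)
print(analytic, mc)
```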
This list is automatically generated from the titles and abstracts of the papers on this site.