You Only Derive Once (YODO): Automatic Differentiation for Efficient
Sensitivity Analysis in Bayesian Networks
- URL: http://arxiv.org/abs/2206.08687v1
- Date: Fri, 17 Jun 2022 11:11:19 GMT
- Authors: Rafael Ballester-Ripoll, Manuele Leonelli
- Abstract summary: Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network.
We propose to use automatic differentiation combined with exact inference to obtain all sensitivity values in a single pass.
An implementation of the methods using the popular machine learning library PyTorch is freely available.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sensitivity analysis measures the influence of a Bayesian network's
parameters on a quantity of interest defined by the network, such as the
probability of a variable taking a specific value. In particular, the so-called
sensitivity value measures the quantity of interest's partial derivative with
respect to the network's conditional probabilities. However, finding such
values in large networks with thousands of parameters can become
computationally very expensive. We propose to use automatic differentiation
combined with exact inference to obtain all sensitivity values in a single
pass. Our method first marginalizes the whole network once using, e.g., variable
elimination and then backpropagates this operation to obtain the gradient with
respect to all input parameters. We demonstrate our routines by ranking all
parameters by importance on a Bayesian network modeling humanitarian crises and
disasters, and then show the method's efficiency by scaling it to huge networks
with up to 100,000 parameters. An implementation of the methods using the
popular machine learning library PyTorch is freely available.
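The single-pass scheme described above (marginalize once, then backpropagate) can be sketched with PyTorch's autograd on a minimal two-node network; the network structure, CPT values, and variable names below are illustrative assumptions, not the paper's actual implementation:

```python
import torch

# Toy two-node network A -> B; CPTs as tensors (all values illustrative).
p_a = torch.tensor([0.3, 0.7], requires_grad=True)           # P(A)
p_b_given_a = torch.tensor([[0.9, 0.1],                      # P(B | A)
                            [0.2, 0.8]], requires_grad=True)

# "Marginalize the whole network once": sum out A via a tensor
# contraction (einsum stands in for variable elimination here),
# keeping the quantity of interest P(B = 1).
quantity = torch.einsum('a,ab->b', p_a, p_b_given_a)[1]

# A single backward pass then yields the partial derivative of the
# quantity with respect to every CPT entry at once.
quantity.backward()

print(quantity.item())     # P(B = 1)
print(p_a.grad)            # sensitivity values for the entries of P(A)
print(p_b_given_a.grad)    # sensitivity values for the entries of P(B | A)
```

The same pattern scales to larger networks as long as the inference routine is expressed in differentiable tensor operations, so one marginalization plus one backward pass replaces thousands of per-parameter computations.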
Related papers
- Global Sensitivity Analysis of Uncertain Parameters in Bayesian Networks [4.404496835736175]
We propose to conduct global variance-based sensitivity analysis of $n$ parameters.
Our method works by encoding the uncertainties as $n$ additional variables of the network.
Last, we apply the method of Sobol to the resulting network to obtain $n$ global sensitivity indices.
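A variance-based index of the kind computed in that paper can be illustrated with a generic Monte Carlo estimate of a first-order Sobol index, $S_i = \mathrm{Var}(\mathbb{E}[Y \mid X_i]) / \mathrm{Var}(Y)$; the toy function and sample sizes below are assumptions for illustration, not the paper's network-encoding construction:

```python
import random

random.seed(0)

def f(x1, x2):
    # Toy model: x1 should account for almost all output variance.
    return x1 + 0.1 * x2

n_outer, n_inner, n_total = 2000, 200, 8000

# Estimate Var(Y) with both inputs drawn uniformly at random.
ys = [f(random.random(), random.random()) for _ in range(n_total)]
mean_y = sum(ys) / n_total
var_y = sum((y - mean_y) ** 2 for y in ys) / n_total

# Estimate Var(E[Y | x1]) with a nested loop: fix x1, average over x2.
cond_means = []
for _ in range(n_outer):
    x1 = random.random()
    cond_means.append(sum(f(x1, random.random()) for _ in range(n_inner)) / n_inner)
mean_c = sum(cond_means) / n_outer
s1 = (sum((c - mean_c) ** 2 for c in cond_means) / n_outer) / var_y

print(s1)  # first-order Sobol index of x1; analytically close to 1/1.01
```

A first-order index near 1 says the parameter alone explains nearly all of the output variance, which is exactly the kind of ranking global sensitivity analysis provides.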
arXiv Detail & Related papers (2024-06-09T12:36:38Z)
- A Note on Bayesian Networks with Latent Root Variables [56.86503578982023]
We show that the marginal distribution over the remaining, manifest, variables also factorises as a Bayesian network, which we call empirical.
A dataset of observations of the manifest variables allows us to quantify the parameters of the empirical Bayesian net.
arXiv Detail & Related papers (2024-02-26T23:53:34Z)
- Efficient Sensitivity Analysis for Parametric Robust Markov Chains [23.870902923521335]
We provide a novel method for sensitivity analysis of robust Markov chains.
We measure sensitivity in terms of partial derivatives with respect to the uncertain transition probabilities.
We embed the results within an iterative learning scheme that profits from having access to a dedicated sensitivity analysis.
arXiv Detail & Related papers (2023-05-01T08:23:55Z)
- The YODO algorithm: An efficient computational framework for sensitivity analysis in Bayesian networks [5.33024001730262]
Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network.
We propose an algorithm combining automatic differentiation and exact inference to efficiently calculate the sensitivity measures in a single pass.
Our method can be used for one-way and multi-way sensitivity analysis and the derivation of admissible regions.
arXiv Detail & Related papers (2023-02-01T10:47:31Z)
- A Directed-Evolution Method for Sparsification and Compression of Neural Networks with Application to Object Identification and Segmentation and considerations of optimal quantization using small number of bits [0.0]
This work introduces Directed-Evolution method for sparsification of neural networks.
The relevance of parameters to the network accuracy is directly assessed.
The parameters that produce the least effect on accuracy when tentatively zeroed are indeed zeroed.
arXiv Detail & Related papers (2022-06-12T23:49:08Z)
- Effective Sparsification of Neural Networks with Global Sparsity Constraint [45.640862235500165]
Weight pruning is an effective technique to reduce the model size and inference time for deep neural networks in real-world deployments.
Existing methods rely on either manual tuning or handcrafted rules to find appropriate pruning rates individually for each layer.
We propose an effective network sparsification method called probabilistic masking (ProbMask), which solves a natural sparsification formulation under a global sparsity constraint.
arXiv Detail & Related papers (2021-05-03T14:13:42Z)
- Deep neural network approximation of analytic functions [91.3755431537592]
We establish an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z)
- Function approximation by deep neural networks with parameters $\{0, \pm\frac{1}{2}, \pm 1, 2\}$ [91.3755431537592]
It is shown that $C_\beta$-smooth functions can be approximated by neural networks with parameters in $\{0, \pm\frac{1}{2}, \pm 1, 2\}$.
The depth, width and the number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$.
arXiv Detail & Related papers (2021-03-15T19:10:02Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Operation-Aware Soft Channel Pruning using Differentiable Masks [51.04085547997066]
We propose a data-driven algorithm, which compresses deep neural networks in a differentiable way by exploiting the characteristics of operations.
We perform extensive experiments and achieve outstanding performance in terms of the accuracy of output networks.
arXiv Detail & Related papers (2020-07-08T07:44:00Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.