The YODO algorithm: An efficient computational framework for sensitivity
analysis in Bayesian networks
- URL: http://arxiv.org/abs/2302.00364v1
- Date: Wed, 1 Feb 2023 10:47:31 GMT
- Title: The YODO algorithm: An efficient computational framework for sensitivity
analysis in Bayesian networks
- Authors: Rafael Ballester-Ripoll, Manuele Leonelli
- Abstract summary: Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network.
We propose an algorithm combining automatic differentiation and exact inference to efficiently calculate the sensitivity measures in a single pass.
Our method can be used for one-way and multi-way sensitivity analysis and the derivation of admissible regions.
- Score: 5.33024001730262
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sensitivity analysis measures the influence of a Bayesian network's
parameters on a quantity of interest defined by the network, such as the
probability of a variable taking a specific value. Various sensitivity measures
have been defined to quantify such influence, most commonly some function of
the quantity of interest's partial derivative with respect to the network's
conditional probabilities. However, computing these measures in large networks
with thousands of parameters can become computationally very expensive. We
propose an algorithm combining automatic differentiation and exact inference to
efficiently calculate the sensitivity measures in a single pass. It first
marginalizes the whole network once, using e.g. variable elimination, and then
backpropagates this operation to obtain the gradient with respect to all input
parameters. Our method can be used for one-way and multi-way sensitivity
analysis and the derivation of admissible regions. Simulation studies highlight
the efficiency of our algorithm by scaling it to massive networks with up to
100,000 parameters and investigate the feasibility of generic multi-way
analyses. Our routines are also showcased over two medium-sized Bayesian
networks: the first modeling the country-risks of a humanitarian crisis, the
second studying the relationship between the use of technology and the
psychological effects of forced social isolation during the COVID-19 pandemic.
An implementation of the methods using the popular machine learning library
PyTorch is freely available.
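The single-pass idea described above (one forward marginalization, then one backward pass to recover all sensitivities) can be sketched in PyTorch on a toy two-node network A → B. This is a hedged illustration only, not the authors' released implementation; the network, probability values, and variable names are invented for the example:

```python
# Minimal sketch of the YODO-style single-pass gradient idea on a toy
# network A -> B, with CPT entries held as differentiable tensors.
import torch

# P(A) and P(B | A) as probability tables (made-up numbers).
p_a = torch.tensor([0.3, 0.7], requires_grad=True)
p_b_given_a = torch.tensor([[0.9, 0.1],
                            [0.2, 0.8]], requires_grad=True)

# Forward pass = exact inference: marginalize A to get P(B = 0).
# This plays the role of variable elimination on the toy network.
p_b0 = (p_a * p_b_given_a[:, 0]).sum()

# One backward pass yields dP(B=0)/dtheta for every parameter at once.
p_b0.backward()

print(p_b0.item())       # quantity of interest: 0.3*0.9 + 0.7*0.2 = 0.41
print(p_a.grad)          # sensitivities w.r.t. P(A)
print(p_b_given_a.grad)  # sensitivities w.r.t. P(B | A)
```

On a real network the forward pass would be a full variable-elimination (or junction-tree) run, but the pattern is the same: every conditional probability table that participates in the forward computation receives its gradient from the single `backward()` call.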
Related papers
- Global Sensitivity Analysis of Uncertain Parameters in Bayesian Networks [4.404496835736175]
We propose to conduct global variance-based sensitivity analysis of $n$ parameters.
Our method works by encoding the uncertainties as $n$ additional variables of the network.
Finally, we apply Sobol's method to the resulting network to obtain $n$ global sensitivity indices.
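Sobol's method estimates, for each uncertain input, the fraction of the output variance it explains. As a hedged sketch of the general technique only (a toy linear function stands in for an actual network output; the sample sizes and function are invented):

```python
# First-order Sobol index via a Saltelli-style pick-freeze estimator.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def f(x1, x2):
    # Toy quantity of interest depending on two uncertain parameters.
    return x1 + 0.5 * x2

# Two independent sample matrices A and B over uniform [0, 1) inputs.
a1, a2 = rng.random(n), rng.random(n)
b1, b2 = rng.random(n), rng.random(n)

f_a = f(a1, a2)
f_b = f(b1, b2)
f_ab = f(a1, b2)  # B with the x1 column taken from A ("pick-freeze")

# S1 = Var_{x1}(E[f | x1]) / Var(f), estimated by Monte Carlo.
s1 = np.mean(f_a * (f_ab - f_b)) / f_a.var()
print(s1)  # analytically 0.8 for this linear toy model
```

For the linear toy function the true index is Var(x1) / (Var(x1) + 0.25 Var(x2)) = 0.8, which the estimate approaches as `n` grows.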
arXiv Detail & Related papers (2024-06-09T12:36:38Z)
- Sensitivity-Aware Amortized Bayesian Inference [8.753065246797561]
Sensitivity analyses reveal the influence of various modeling choices on the outcomes of statistical analyses.
We propose sensitivity-aware amortized Bayesian inference (SA-ABI), a multifaceted approach to integrate sensitivity analyses into simulation-based inference with neural networks.
We demonstrate the effectiveness of our method in applied modeling problems, ranging from disease outbreak dynamics and global warming thresholds to human decision-making.
arXiv Detail & Related papers (2023-10-17T10:14:10Z)
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Conductivity Imaging from Internal Measurements with Mixed Least-Squares Deep Neural Networks [4.228167013618626]
We develop a novel approach using deep neural networks to reconstruct the conductivity distribution in elliptic problems.
We provide a thorough analysis of the deep neural network approximations of the conductivity for both continuous and empirical losses.
arXiv Detail & Related papers (2023-03-29T04:43:03Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- You Only Derive Once (YODO): Automatic Differentiation for Efficient Sensitivity Analysis in Bayesian Networks [5.33024001730262]
Sensitivity analysis measures the influence of a Bayesian network's parameters on a quantity of interest defined by the network.
We propose to use automatic differentiation combined with exact inference to obtain all sensitivity values in a single pass.
An implementation of the methods using the popular machine learning library PyTorch is freely available.
arXiv Detail & Related papers (2022-06-17T11:11:19Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in data processing settings.
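The core mechanism behind this line of work is simple to illustrate: automatic differentiation yields the exact gradient of a differentiable composition with respect to one record's data, and the gradient norm is the kind of sensitivity quantity used to calibrate privacy noise. A hedged sketch of that general idea only (not the paper's hybrid AD system; the record and weights are made up):

```python
# Sensitivity of a differentiable composition w.r.t. one record's data.
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)  # one individual's record
w = torch.tensor([0.5, -0.3])                     # fixed model weights
loss = torch.sigmoid(x @ w).pow(2)                # differentiable composition
loss.backward()                                   # AD gives the exact gradient
sensitivity = x.grad.norm().item()                # per-record sensitivity
print(sensitivity)
```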
arXiv Detail & Related papers (2021-07-09T07:19:23Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider multi-task learning, the problem of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve learning performance on each task.
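One way to realize such coupling is a proximity penalty that keeps each task's parameters near the cross-task average. This is an illustrative sketch of that general idea, not the paper's exact formulation; the data, dimensions, and the coupling strength `lam` are all assumptions:

```python
# Three linear regression tasks with parameters coupled toward their mean.
import torch

torch.manual_seed(0)
tasks = [(torch.randn(50, 3), torch.randn(50)) for _ in range(3)]
thetas = [torch.zeros(3, requires_grad=True) for _ in tasks]
opt = torch.optim.SGD(thetas, lr=0.05)
lam = 1.0  # assumed coupling strength

def total_loss():
    # Treat the cross-task mean as a fixed anchor at each step.
    mean = torch.stack([t.detach() for t in thetas]).mean(0)
    return sum(((x @ th - y) ** 2).mean() + lam * (th - mean).pow(2).sum()
               for th, (x, y) in zip(thetas, tasks))

start = total_loss().item()
for _ in range(200):
    opt.zero_grad()
    total_loss().backward()
    opt.step()
print(start, total_loss().item())  # joint loss decreases during training
```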
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
- Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach [0.0]
We propose a multiobjective perspective on the training of neural networks by treating prediction accuracy and network complexity as two individual objective functions.
Preliminary numerical results on exemplary convolutional neural networks confirm that large reductions in network complexity are possible with negligible loss of accuracy.
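The two-objective trade-off can be made concrete with simple magnitude pruning: keeping fewer weights lowers complexity but raises error. This toy sketch is not the paper's multiobjective algorithm; the linear "model", data, and keep ratios are invented for illustration:

```python
# Sweep the accuracy/complexity trade-off by magnitude pruning.
import torch

torch.manual_seed(0)
w = torch.randn(1000)              # pretend trained weights
x = torch.randn(200, 1000)
y = x @ w                          # targets produced by the dense model

results = []
for keep in (1.0, 0.5, 0.1):
    k = int(keep * w.numel())
    pruned = torch.zeros_like(w)
    idx = w.abs().topk(k).indices  # keep the k largest-magnitude weights
    pruned[idx] = w[idx]
    mse = ((x @ pruned - y) ** 2).mean().item()  # objective 1: error
    nnz = int((pruned != 0).sum())               # objective 2: complexity
    results.append((nnz, mse))
    print(f"nonzeros={nnz:4d}  mse={mse:8.3f}")
```

Each `(nnz, mse)` pair is one point on the trade-off curve; a multiobjective method would search for the Pareto-optimal points rather than fixing the keep ratios in advance.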
arXiv Detail & Related papers (2020-08-31T13:28:03Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study large-scale distributed stochastic AUC maximization with deep neural networks.
Our method requires a much smaller number of communication rounds while maintaining its theoretical guarantees.
Our experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.