Fine-Tuning the Odds in Bayesian Networks
- URL: http://arxiv.org/abs/2105.14371v1
- Date: Sat, 29 May 2021 20:41:56 GMT
- Title: Fine-Tuning the Odds in Bayesian Networks
- Authors: Bahare Salmani and Joost-Pieter Katoen
- Abstract summary: This paper proposes various new analysis techniques for Bayes networks in which conditional probability tables (CPTs) may contain symbolic variables.
The key idea is to exploit scalable and powerful techniques for synthesis problems in parametric Markov chains.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes various new analysis techniques for Bayes networks in
which conditional probability tables (CPTs) may contain symbolic variables. The
key idea is to exploit scalable and powerful techniques for synthesis problems
in parametric Markov chains. Our techniques are applicable to arbitrarily many,
possibly dependent parameters that may occur in various CPTs. This lifts the
severe restrictions of existing works on parametric Bayes networks (pBNs),
such as limiting the number of parametrized CPTs to one or two, or
disallowing parameter dependencies between several CPTs. We
describe how our techniques can be used for various pBN synthesis problems
studied in the literature such as computing sensitivity functions (and values),
simple and difference parameter tuning, ratio parameter tuning, and minimal
change tuning. Experiments on several benchmarks show that our prototypical
tool built on top of the probabilistic model checker Storm can handle several
hundreds of parameters.
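For readers unfamiliar with these synthesis problems, here is a minimal sketch of a sensitivity function and simple parameter tuning, computed with sympy rather than the paper's Storm-based tool; the two-node network, its CPT entries, and the 1/2 threshold are invented for illustration.

```python
# A toy pBN: X -> Y, where the CPT entry P(X=1) is a symbolic parameter p.
# The sensitivity function is P(Y=1) as a (here linear) function of p.
import sympy as sp

p = sp.Symbol("p", real=True)

p_x1 = p                          # parametric CPT entry P(X=1)
p_y1_x1 = sp.Rational(9, 10)      # fixed CPT entry P(Y=1 | X=1)
p_y1_x0 = sp.Rational(1, 5)       # fixed CPT entry P(Y=1 | X=0)

# Eliminate X by the law of total probability.
f = sp.expand(p_y1_x1 * p_x1 + p_y1_x0 * (1 - p_x1))
print(f)                          # 7*p/10 + 1/5

# Simple parameter tuning: for which p in [0,1] does P(Y=1) >= 1/2 hold?
feasible = sp.solve_univariate_inequality(
    f >= sp.Rational(1, 2), p, relational=False
).intersect(sp.Interval(0, 1))
print(feasible)                   # Interval(3/7, 1)
```

With hundreds of possibly dependent parameters this naive symbolic elimination blows up, which is where the parametric-Markov-chain machinery exploited by the paper comes in.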
Related papers
- Structural Refinement of Bayesian Networks for Efficient Model Parameterisation [0.0]
We provide a review of a variety of structural refinement methods that can be used in practice to efficiently approximate a conditional probability table.
We evaluate each method through a worked example on a Bayesian network model of cardiovascular risk assessment.
arXiv Detail & Related papers (2025-09-30T22:39:48Z)
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of three optimizers, four parameterizations, several alignment assumptions, more than a dozen learning rates, and fourteen model sizes.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- The Empirical Impact of Neural Parameter Symmetries, or Lack Thereof [50.49582712378289]
We investigate the impact of neural parameter symmetries by introducing new neural network architectures.
We develop two methods, with some provable guarantees, of modifying standard neural networks to reduce parameter space symmetries.
Our experiments reveal several interesting observations on the empirical impact of parameter symmetries.
arXiv Detail & Related papers (2024-05-30T16:32:31Z)
- Parameter Efficient Fine-tuning via Cross Block Orchestration for Segment Anything Model [81.55141188169621]
We equip PEFT with a cross-block orchestration mechanism to enable the adaptation of the Segment Anything Model (SAM) to various downstream scenarios.
We propose an intra-block enhancement module, which introduces a linear projection head whose weights are generated from a hyper-complex layer.
Our proposed approach consistently improves the segmentation performance significantly on novel scenarios with only around 1K additional parameters.
arXiv Detail & Related papers (2023-11-28T11:23:34Z)
- Tractable Bounding of Counterfactual Queries by Knowledge Compilation [51.47174989680976]
We discuss the problem of bounding partially identifiable queries, such as counterfactuals, in Pearlian structural causal models.
A recently proposed iterated EM scheme yields an inner approximation of those bounds by sampling the initialisation parameters.
We show how a single symbolic knowledge compilation allows us to obtain the circuit structure with symbolic parameters to be replaced by their actual values.
arXiv Detail & Related papers (2023-10-05T07:10:40Z)
- Finding an $\epsilon$-close Variation of Parameters in Bayesian Networks [0.0]
We find a minimal $\epsilon$-close amendment of probability entries in a given set of conditional probability tables.
Based on the state-of-the-art "region verification" techniques for parametric Markov chains, we propose an algorithm whose capabilities go beyond any existing techniques. (A toy sketch of minimal-change tuning follows this entry.)
arXiv Detail & Related papers (2023-05-17T08:46:53Z)
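Continuing the toy network from the sketch above (again my illustration, not the paper's algorithm): minimal-change tuning asks for the smallest amendment of a CPT entry that makes a constraint hold.

```python
# Move the original CPT entry p0 as little as possible so that the
# constraint P(Y=1) >= 1/2 from the previous sketch becomes satisfied.
import sympy as sp

p = sp.Symbol("p", real=True)
f = sp.Rational(7, 10) * p + sp.Rational(1, 5)  # sensitivity function P(Y=1)
p0 = sp.Rational(3, 10)                         # original, infeasible entry

feasible = sp.solve_univariate_inequality(
    f >= sp.Rational(1, 2), p, relational=False
).intersect(sp.Interval(0, 1))                  # Interval(3/7, 1)

# For an interval-shaped feasible set, the closest feasible point is the
# clamped value of p0; the distance moved is the epsilon of the amendment.
p_new = sp.Min(sp.Max(p0, feasible.inf), feasible.sup)
print(p_new, sp.Abs(p_new - p0))                # 3/7 9/70
```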
- Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning [91.5113227694443]
We propose a novel Sensitivity-aware visual Parameter-efficient fine-Tuning (SPT) scheme.
SPT allocates trainable parameters to task-specific important positions.
Experiments on a wide range of downstream recognition tasks show that our SPT is complementary to the existing PEFT methods. (A rough sketch of the idea follows this entry.)
arXiv Detail & Related papers (2023-03-15T12:34:24Z)
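A rough sketch of the general idea behind sensitivity-aware allocation (my reconstruction from the summary, not the SPT authors' code): score each weight by a first-order sensitivity proxy and fine-tune only the top fraction, masking all other gradients. The model, data, and 5% budget below are arbitrary.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))

# One backward pass on task data gives per-weight gradients.
nn.functional.cross_entropy(model(x), y).backward()

# First-order sensitivity proxy |w * dL/dw|; keep the top 5% of weights.
scores = torch.cat([(p * p.grad).abs().flatten() for p in model.parameters()])
threshold = scores.quantile(0.95)
masks = [((p * p.grad).abs() >= threshold).float() for p in model.parameters()]

def mask_gradients():
    # Call after each backward() so non-selected positions receive no update.
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            if p.grad is not None:
                p.grad.mul_(m)
```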
- Symmetric Convolutional Filters: A Novel Way to Constrain Parameters in CNN [0.0]
We propose a novel technique to constrain parameters in CNN based on symmetric filters.
We demonstrate that our models offer effective generalisation and a structured elimination of redundancy in parameters. (A minimal sketch of a symmetry-constrained filter follows this entry.)
arXiv Detail & Related papers (2022-02-26T09:45:30Z)
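A minimal sketch of one way to impose such a constraint, assuming "symmetric" means left-right mirror symmetry of each kernel (my reading of the summary, not the authors' exact construction): store only half of each filter and mirror it, roughly halving the free parameters.

```python
import torch
import torch.nn as nn

class SymmetricConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        assert k % 2 == 1
        # Store only the left half of each kernel plus its center column.
        self.half = nn.Parameter(torch.randn(out_ch, in_ch, k, k // 2 + 1) * 0.1)

    def forward(self, x):
        # Mirror the stored half to build a horizontally symmetric kernel.
        weight = torch.cat([self.half, self.half[..., :-1].flip(-1)], dim=-1)
        return nn.functional.conv2d(x, weight, padding=weight.shape[-1] // 2)

y = SymmetricConv2d(3, 8)(torch.randn(1, 3, 32, 32))
print(y.shape)  # torch.Size([1, 8, 32, 32])
```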
- Towards a Unified View of Parameter-Efficient Transfer Learning [108.94786930869473]
Fine-tuning large pre-trained language models on downstream tasks has become the de-facto learning paradigm in NLP.
Recent work has proposed a variety of parameter-efficient transfer learning methods that only fine-tune a small number of (extra) parameters to attain strong performance.
We break down the design of state-of-the-art parameter-efficient transfer learning methods and present a unified framework that establishes connections between them. (A toy example of one such method follows this entry.)
arXiv Detail & Related papers (2021-10-08T20:22:26Z)
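To make the "small number of (extra) parameters" concrete, here is a toy instance of one method family such frameworks cover (my illustration, not the paper's code): a LoRA-style low-rank update added to a frozen linear layer.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=4, alpha=8.0):
        super().__init__()
        self.base = base.requires_grad_(False)  # frozen pretrained layer
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scale = alpha / rank

    def forward(self, x):
        # Output = frozen base + learned low-rank modification.
        return self.base(x) + (x @ self.A @ self.B) * self.scale

layer = LoRALinear(nn.Linear(64, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 512 adapter parameters vs 4160 frozen base parameters
```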
- Network insensitivity to parameter noise via adversarial regularization [0.0]
We present a new adversarial network optimisation algorithm that attacks network parameters during training.
We show that our approach produces models that are more robust to targeted parameter variation.
Our work provides an approach to deploy neural network architectures to inference devices that suffer from computational non-idealities. (A simplified sketch of the recipe follows this entry.)
arXiv Detail & Related papers (2021-06-09T12:11:55Z)
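A simplified sketch of the general recipe (my reconstruction of the summary, not the authors' algorithm): perturb the weights in the loss-increasing direction before each update, so training favours solutions that are flat with respect to parameter noise. Model, data, and the step size eps are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
eps = 0.05  # magnitude of the parameter attack

for _ in range(10):
    # 1) The gradient at the current weights gives the attack direction.
    loss = nn.functional.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    # 2) Attack: step each parameter towards higher loss.
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.add_(eps * g.sign())
    # 3) Train against the perturbed weights, then undo the perturbation.
    opt.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), grads):
            p.sub_(eps * g.sign())
    opt.step()
```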
- One from many: Estimating a function of many parameters [0.0]
Many parameters of a process are unknown; the task is to estimate a specific linear combination of these parameters without the ability to control any of them.
Geometric reasoning establishes the necessary and sufficient conditions for saturating the fundamental and attainable quantum-process bound.
arXiv Detail & Related papers (2020-02-07T16:55:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.