Integrating Transformations in Probabilistic Circuits
- URL: http://arxiv.org/abs/2310.04354v1
- Date: Fri, 6 Oct 2023 16:23:09 GMT
- Title: Integrating Transformations in Probabilistic Circuits
- Authors: Tom Schierenbeck, Vladimir Vutov, Thorsten Dickhaus, Michael Beetz
- Abstract summary: We argue that independent component analysis is a sound tool for preserving the independence properties of probabilistic circuits.
Our approach is an extension of joint probability trees, which are model-free deterministic circuits.
- Score: 7.227900307480352
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study addresses the predictive limitation of probabilistic circuits and
introduces transformations as a remedy to overcome it. We demonstrate this
limitation in robotic scenarios. We argue that independent component analysis
is a sound tool for preserving the independence properties of probabilistic
circuits. Our approach extends joint probability trees, which are model-free
deterministic circuits. We demonstrate that the proposed approach achieves
higher likelihoods while using fewer parameters than joint probability trees on
seven benchmark data sets as well as on real robot data. Furthermore, we discuss how
to integrate transformations into tree-based learning routines. Finally, we
argue that exact inference with transformed quantile parameterized
distributions is not tractable. However, our approach allows for efficient
sampling and approximate inference.
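The core idea of the abstract can be illustrated with a minimal sketch (an illustration, not the authors' implementation): ICA maps correlated data to approximately independent components, which a product of independent univariate distributions can then model, with the transformation's Jacobian determinant correcting the likelihood. The Gaussian leaves below are an assumption for brevity; the paper works with quantile-parameterized distributions.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import norm

rng = np.random.default_rng(0)

# Correlated 2-D data: a product of independent marginals fits it poorly.
s = rng.laplace(size=(2000, 2))           # independent non-Gaussian sources
A = np.array([[1.0, 0.8], [0.3, 1.0]])    # mixing matrix
x = s @ A.T

# ICA estimates an unmixing matrix W so that z = W (x - mean)
# has approximately independent components.
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
z = ica.fit_transform(x)
W = ica.components_

# Change of variables: log p_X(x) = sum_i log p_Z(z_i) + log|det W|.
# Each component is modeled by an independent standard Gaussian here.
log_lik = norm.logpdf(z).sum(axis=1) + np.log(abs(np.linalg.det(W)))
print(log_lik.mean())
```

Because the components are (approximately) independent after the transformation, the factorized model remains a valid circuit leaf structure, and sampling is cheap: draw each component independently and map back through the inverse transformation.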
Related papers
- Unsupervised Representation Learning from Sparse Transformation Analysis [79.94858534887801]
We propose to learn representations from sequence data by factorizing the transformations of the latent variables into sparse components.
Input data are first encoded as distributions of latent activations and subsequently transformed using a probability flow model.
arXiv Detail & Related papers (2024-10-07T23:53:25Z)
- Learning Correspondence Uncertainty via Differentiable Nonlinear Least Squares [47.83169780113135]
We propose a differentiable nonlinear least squares framework to account for uncertainty in relative pose estimation from feature correspondences.
We evaluate our approach on synthetic data as well as on the KITTI and EuRoC real-world datasets.
arXiv Detail & Related papers (2023-05-16T15:21:09Z)
- Efficient Sensitivity Analysis for Parametric Robust Markov Chains [23.870902923521335]
We provide a novel method for sensitivity analysis of robust Markov chains.
We measure sensitivity in terms of partial derivatives with respect to the uncertain transition probabilities.
We embed the results within an iterative learning scheme that profits from having access to a dedicated sensitivity analysis.
arXiv Detail & Related papers (2023-05-01T08:23:55Z)
- Amortised inference of fractional Brownian motion with linear computational complexity [0.0]
We introduce a simulation-based, amortised Bayesian inference scheme to infer the parameters of random walks.
Our approach learns the posterior distribution of the walks' parameters with a likelihood-free method.
We adapt this scheme to show that a finite decorrelation time in the environment can furthermore be inferred from individual trajectories.
arXiv Detail & Related papers (2022-03-15T14:43:16Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Handling Epistemic and Aleatory Uncertainties in Probabilistic Circuits [18.740781076082044]
We propose an approach to overcome the independence assumption underlying most approaches to a large class of probabilistic reasoning problems.
We provide an algorithm for Bayesian learning from sparse, albeit complete, observations.
Each leaf of such circuits is labelled with a beta-distributed random variable that provides us with an elegant framework for representing uncertain probabilities.
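The beta-leaf idea above admits a minimal sketch (a generic conjugate-update illustration, not that paper's algorithm): a leaf's Bernoulli parameter carries a Beta prior, observation counts update it in closed form, and the posterior spread quantifies the remaining epistemic uncertainty.

```python
from scipy.stats import beta

# Beta(a, b) prior over a leaf's Bernoulli parameter (a = b = 1: uniform).
a, b = 1.0, 1.0

# Sparse but complete observations: k successes out of n trials.
k, n = 7, 10

# Conjugate update: the posterior is Beta(a + k, b + n - k).
post = beta(a + k, b + n - k)
print(post.mean(), post.std())  # point estimate and epistemic uncertainty
```

As more observations arrive, the posterior concentrates, so the leaf's probability becomes both better estimated and less uncertain.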
arXiv Detail & Related papers (2021-02-22T10:03:15Z)
- DiffPrune: Neural Network Pruning with Deterministic Approximate Binary Gates and $L_0$ Regularization [0.0]
Modern neural network architectures typically have many millions of parameters and can be pruned significantly without substantial loss in effectiveness.
The contribution of this work is two-fold.
The first is a method for approximating a multivariate Bernoulli random variable by means of a deterministic and differentiable transformation of any real-valued random variable.
The second is a method for model selection via element-wise multiplication of parameters with approximate binary gates that may be computed deterministically or stochastically and take on exact zero values.
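A deterministic approximate binary gate of the kind described above can be sketched with a stretched, clipped sigmoid (a generic construction in the spirit of $L_0$-regularized gating, not necessarily DiffPrune's exact transform; the stretch bounds `gamma` and `zeta` are illustrative): the clipping lets the gate reach exact 0 and 1 while staying differentiable almost everywhere.

```python
import numpy as np

def hard_gate(theta, gamma=-0.1, zeta=1.1):
    """Deterministic approximate binary gate: a sigmoid of a real-valued
    parameter, stretched to (gamma, zeta) and clipped to [0, 1] so the
    gate attains exact zeros (pruned) and exact ones (kept)."""
    s = 1.0 / (1.0 + np.exp(-theta))          # sigmoid in (0, 1)
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

theta = np.array([-6.0, -1.0, 0.0, 1.0, 6.0])
g = hard_gate(theta)
print(g)  # extreme parameters map to exact 0 and exact 1
```

Multiplying each weight by its gate then prunes parameters whose gates hit exactly zero, without any sampling at inference time.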
arXiv Detail & Related papers (2020-12-07T13:08:56Z)
- Tractable Inference in Credal Sentential Decision Diagrams [116.6516175350871]
Probabilistic sentential decision diagrams are logic circuits where the inputs of disjunctive gates are annotated by probability values.
We develop the credal sentential decision diagrams, a generalisation of their probabilistic counterpart that allows for replacing the local probabilities with credal sets of mass functions.
For a first empirical validation, we consider a simple application based on noisy seven-segment display images.
arXiv Detail & Related papers (2020-08-19T16:04:34Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.