Hierarchical Uncertainty Exploration via Feedforward Posterior Trees
- URL: http://arxiv.org/abs/2405.15719v1
- Date: Fri, 24 May 2024 17:06:51 GMT
- Title: Hierarchical Uncertainty Exploration via Feedforward Posterior Trees
- Authors: Elias Nehme, Rotem Mulayoff, Tomer Michaeli
- Abstract summary: We introduce a new approach for visualizing posteriors across multiple levels of granularity using tree-valued predictions.
Our method predicts a tree-valued hierarchical summarization of the posterior distribution for any input measurement, in a single forward pass of a neural network.
- Score: 25.965665666173038
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When solving ill-posed inverse problems, one often desires to explore the space of potential solutions rather than be presented with a single plausible reconstruction. Valuable insights into these feasible solutions and their associated probabilities are embedded in the posterior distribution. However, when confronted with data of high dimensionality (such as images), visualizing this distribution becomes a formidable challenge, necessitating the application of effective summarization techniques before user examination. In this work, we introduce a new approach for visualizing posteriors across multiple levels of granularity using tree-valued predictions. Our method predicts a tree-valued hierarchical summarization of the posterior distribution for any input measurement, in a single forward pass of a neural network. We showcase the efficacy of our approach across diverse datasets and image restoration challenges, highlighting its prowess in uncertainty quantification and visualization. Our findings reveal that our method performs comparably to a baseline that hierarchically clusters samples from a diffusion-based posterior sampler, yet achieves this with orders of magnitude greater speed.
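To make the abstract's mechanism concrete, here is a minimal PyTorch sketch of what "a tree-valued hierarchical summarization ... in a single forward pass" could look like: one network emits an image per tree node and a probability per branch. The class name `PosteriorTreeNet`, the binary branching, the depth of two, and the toy convolutional backbone are illustrative assumptions, not the paper's actual architecture or training objective.

```python
# A minimal sketch (not the authors' architecture) of the core idea:
# a single feedforward network maps one measurement to a small tree of
# candidate reconstructions plus branch probabilities.
import torch
import torch.nn as nn

class PosteriorTreeNet(nn.Module):
    """Predicts a depth-2 binary tree: 2 coarse modes, each refined
    into 2 finer modes, with a probability for every branch."""
    def __init__(self, channels=1):
        super().__init__()
        n_nodes = 2 + 4  # 2 children of the root + 4 grandchildren
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # One predicted image per tree node.
        self.image_head = nn.Conv2d(32, n_nodes * channels, 3, padding=1)
        # One logit per branch of the tree.
        self.prob_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_nodes),
        )

    def forward(self, y):
        b, c = y.shape[0], y.shape[1]
        feats = self.backbone(y)
        nodes = self.image_head(feats).view(b, -1, c, *y.shape[2:])
        logits = self.prob_head(feats)
        # Branch probabilities are normalized among siblings at each level.
        p_children = logits[:, :2].softmax(dim=1)              # root's children
        p_grandchildren = logits[:, 2:].view(b, 2, 2).softmax(dim=2)  # per parent
        return nodes, p_children, p_grandchildren

net = PosteriorTreeNet()
y = torch.randn(1, 1, 32, 32)      # a toy degraded measurement
nodes, p1, p2 = net(y)             # one forward pass yields the whole tree
print(nodes.shape)                 # torch.Size([1, 6, 1, 32, 32])
```

The abstract leaves the training loss unspecified; it only states that the baseline hierarchically clusters samples drawn from a diffusion-based posterior sampler, which requires many sampler runs per measurement, hence the orders-of-magnitude speed advantage of a single feedforward pass.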
Related papers
- Tackling the Problem of Distributional Shifts: Correcting Misspecified, High-Dimensional Data-Driven Priors for Inverse Problems [39.58317527488534]
Data-driven population-level distributions are emerging as an appealing alternative to simple parametric priors in inverse problems.
However, it is difficult to acquire independent and identically distributed samples from the underlying data-generating process of interest to train these models.
We show that, starting from a misspecified prior distribution, the updated distribution becomes progressively closer to the underlying population-level distribution.
arXiv Detail & Related papers (2024-07-24T22:39:27Z)
- Reducing the cost of posterior sampling in linear inverse problems via task-dependent score learning [5.340736751238338]
We show that the evaluation of the forward mapping can be entirely bypassed during posterior sample generation.
We prove that this observation generalizes to the framework of infinite-dimensional diffusion models introduced recently.
arXiv Detail & Related papers (2024-05-24T15:33:27Z)
- Uncertainty Visualization via Low-Dimensional Posterior Projections [23.371244861123827]
We introduce a new approach for estimating and visualizing posteriors by employing energy-based models (EBMs) over low-dimensional subspaces.
We demonstrate the effectiveness of our method across a diverse range of datasets and image restoration problems (a toy sketch of posterior visualization over a low-dimensional subspace appears after this list).
arXiv Detail & Related papers (2023-12-12T23:51:07Z)
- Implicit Variational Inference for High-Dimensional Posteriors [7.924706533725115]
In variational inference, the benefits of Bayesian models rely on accurately capturing the true posterior distribution.
We propose using neural samplers that specify implicit distributions, which are well-suited for approximating complex multimodal and correlated posteriors.
Our approach introduces novel bounds for approximate inference using implicit distributions by locally linearising the neural sampler.
arXiv Detail & Related papers (2023-10-10T14:06:56Z)
- Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations [53.973055975918655]
We show that intermediate-layer representations become more interpretable when transformed to the bases extracted with our method.
We compare the bases extracted by our method with those derived by a supervised approach, find that in one aspect the proposed unsupervised approach has a strength where the supervised one is limited, and suggest directions for future research.
arXiv Detail & Related papers (2023-03-19T00:37:19Z)
- Posterior samples of source galaxies in strong gravitational lenses with score-based priors [107.52670032376555]
We use a score-based model to encode the prior for the inference of undistorted images of background galaxies.
We show that the balance between the likelihood and the prior meets our expectations in an experiment with out-of-distribution data.
arXiv Detail & Related papers (2022-11-07T19:00:42Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Conditional Variational Autoencoder for Learned Image Reconstruction [5.487951901731039]
We develop a novel framework that approximates the posterior distribution of the unknown image at each query observation.
It handles implicit noise models and priors, incorporates the data-formation process (i.e., the forward operator), and its learned reconstructive properties transfer between different datasets.
arXiv Detail & Related papers (2021-10-22T10:02:48Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that yields better generalization and stability (a toy sketch appears after this list).
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
- Bayesian Deep Learning and a Probabilistic Perspective of Generalization [56.69671152009899]
We show that deep ensembles provide an effective mechanism for approximate Bayesian marginalization (a toy sketch appears after this list).
We also propose a related approach that further improves the predictive distribution by marginalizing within basins of attraction.
arXiv Detail & Related papers (2020-02-20T15:13:27Z)
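For the entry "Uncertainty Visualization via Low-Dimensional Posterior Projections" above, here is a toy sketch of the underlying idea under our own assumptions (the MLP energy, grid range, and resolution are illustrative, and the paper's actual EBMs are conditioned on the measurement): an energy network scores 2-D coordinates in a subspace, and exp(-E) on a grid yields a density map that can be displayed directly.

```python
# Toy sketch (assumed details, not the paper's model): an MLP assigns
# an energy E(t) to 2-D subspace coordinates t; the density on a grid
# is proportional to exp(-E).
import torch
import torch.nn as nn

energy = nn.Sequential(              # untrained toy EBM over 2-D coordinates
    nn.Linear(2, 64), nn.SiLU(),
    nn.Linear(64, 64), nn.SiLU(),
    nn.Linear(64, 1),
)

# Evaluate the energy on a grid spanning the subspace.
ts = torch.linspace(-3, 3, 100)
grid = torch.stack(torch.meshgrid(ts, ts, indexing="ij"), dim=-1)  # (100, 100, 2)
with torch.no_grad():
    E = energy(grid.reshape(-1, 2)).reshape(100, 100)
density = torch.exp(-E)
density /= density.sum()             # normalized heatmap, ready for imshow
print(density.shape)                 # torch.Size([100, 100])
```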
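For "When Relation Networks meet GANs" above, a hedged sketch of a triplet loss for a discriminator that compares samples rather than classifying them one by one; the embedding network, margin, and batch construction are our assumptions, not the paper's design.

```python
# Toy sketch: the discriminator embeds samples, and a triplet margin
# loss pulls real-real pairs together and pushes real-fake pairs apart.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128))
triplet = nn.TripletMarginLoss(margin=1.0)

real_a = torch.randn(8, 1, 28, 28)   # two independent batches of real
real_b = torch.randn(8, 1, 28, 28)   # images (toy stand-ins here)
fake = torch.randn(8, 1, 28, 28)     # generator output stand-in

# Anchor and positive come from the real distribution, negative is fake,
# so the discriminator learns relations between distributions rather
# than per-sample real/fake scores.
loss = triplet(embed(real_a), embed(real_b), embed(fake))
loss.backward()
```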
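For "Bayesian Deep Learning and a Probabilistic Perspective of Generalization" above, a minimal sketch of deep ensembles as approximate Bayesian marginalization: the predictive distribution is a uniform mixture of the members' predictive distributions (averaging probabilities, not logits). The toy MLPs below are untrained stand-ins for independently trained models.

```python
# Toy sketch: p(y|x) is approximated by (1/M) * sum_m p(y|x, theta_m),
# a uniform mixture over M independently trained ensemble members.
import torch
import torch.nn as nn

ensemble = [nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 3))
            for _ in range(5)]       # 5 independently initialized members

x = torch.randn(4, 10)
with torch.no_grad():
    probs = torch.stack([m(x).softmax(dim=-1) for m in ensemble]).mean(dim=0)
print(probs.sum(dim=-1))             # each row sums to 1
```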