GAN-based Priors for Quantifying Uncertainty
- URL: http://arxiv.org/abs/2003.12597v1
- Date: Fri, 27 Mar 2020 18:52:54 GMT
- Title: GAN-based Priors for Quantifying Uncertainty
- Authors: Dhruv V. Patel, Assad A. Oberai
- Abstract summary: We show how the approximate distribution learned by a deep generative adversarial network (GAN) may be used as a prior in a Bayesian update.
We demonstrate the efficacy of this approach on two distinct, and remarkably broad, classes of problems.
- Score: 0.6091702876917281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian inference is used extensively to quantify the uncertainty in an
inferred field given the measurement of a related field when the two are linked
by a mathematical model. Despite its many applications, Bayesian inference
faces challenges when inferring fields that have discrete representations of
large dimension, and/or have prior distributions that are difficult to
characterize mathematically. In this work we demonstrate how the approximate
distribution learned by a deep generative adversarial network (GAN) may be used
as a prior in a Bayesian update to address both these challenges. We
demonstrate the efficacy of this approach on two distinct, and remarkably
broad, classes of problems. The first class leads to supervised learning
algorithms for image classification with superior out-of-distribution detection
and accuracy, and for image inpainting with built-in variance estimation. The
second class leads to unsupervised learning algorithms for image denoising and
for solving physics-driven inverse problems.
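As a concrete illustration of the core idea, below is a minimal sketch of a MAP-style Bayesian update with a GAN prior: the inferred field is constrained to the range of a pre-trained generator, and inference happens in latent space. The generator `g`, forward model `A`, measurement `y`, and noise level `sigma` are illustrative placeholders, not the paper's implementation.
```python
# Hedged sketch: MAP inference with a GAN prior, assuming a pre-trained
# generator g mapping z ~ N(0, I) to the field, a forward model A, and
# Gaussian measurement noise. All names here are illustrative.
import torch

def map_estimate_with_gan_prior(g, A, y, latent_dim, sigma=0.1,
                                steps=500, lr=1e-2):
    """MAP estimate of z for y = A(g(z)) + noise, with z ~ N(0, I)."""
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residual = A(g(z)) - y
        # Negative log-posterior: Gaussian likelihood + standard-normal prior on z.
        loss = residual.pow(2).sum() / (2 * sigma**2) + 0.5 * z.pow(2).sum()
        loss.backward()
        opt.step()
    return g(z).detach(), z.detach()
```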
Related papers
- Fast Solvers for Discrete Diffusion Models: Theory and Applications of High-Order Algorithms [31.42317398879432]
Current inference approaches mainly fall into two categories: exact simulation and approximate methods such as $\tau$-leaping.
In this work, we advance the latter category by tailoring the first extension of high-order numerical inference schemes to discrete diffusion models.
We rigorously analyze the proposed schemes and establish the second-order accuracy of the $\theta$-trapezoidal method in KL divergence.
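For context, here is a minimal sketch of the kind of first-order approximate simulation step that such solvers refine: the paper's $\theta$-trapezoidal scheme is a higher-order correction to steps of this type. The generator matrix `Q` and the shapes assumed here are illustrative, not the paper's setup.
```python
# Hedged sketch: first-order approximate step for a continuous-time
# Markov chain with generator matrix Q, the baseline that higher-order
# discrete diffusion solvers improve upon. Names/shapes are illustrative.
import numpy as np

def euler_ctmc_step(x, Q, tau, rng):
    """One approximate step: transition kernel P ~= I + tau * Q."""
    n_states = Q.shape[0]
    P = np.eye(n_states) + tau * Q     # first-order expansion of expm(tau * Q)
    P = np.clip(P, 0.0, None)
    P /= P.sum(axis=1, keepdims=True)  # renormalize rows to probabilities
    # Sample the next state of each independent chain from its row of P.
    return np.array([rng.choice(n_states, p=P[s]) for s in x])
```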
arXiv Detail & Related papers (2025-02-01T00:25:21Z)
- VIPaint: Image Inpainting with Pre-Trained Diffusion Models via Variational Inference [5.852077003870417]
We show that our VIPaint method significantly outperforms previous approaches in both the plausibility and diversity of imputations.
arXiv Detail & Related papers (2024-11-28T05:35:36Z)
- A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation [0.0]
Advancements in image segmentation play an integral role within the broad scope of Deep Learning-based Computer Vision.
Uncertainty quantification has been extensively studied within this context, enabling the expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision-making.
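One widely used recipe behind this epistemic/aleatoric decomposition is Monte Carlo dropout; the sketch below is an illustrative stand-in for the techniques the review surveys, not a method from the review itself. It splits predictive entropy into epistemic and aleatoric parts for any classifier or segmentation network with dropout layers.
```python
# Hedged sketch: MC-dropout uncertainty decomposition. Assumes `model`
# is any torch classifier/segmenter containing dropout layers.
import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model, x, n_samples=20):
    model.train()  # keep dropout stochastic at inference time
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean_p = probs.mean(dim=0)
    # Predictive entropy = total uncertainty.
    total = -(mean_p * mean_p.clamp_min(1e-12).log()).sum(dim=1)
    # Expected per-sample entropy = aleatoric (data ambiguity).
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(dim=2).mean(dim=0)
    # Their gap (mutual information) = epistemic (model ignorance).
    epistemic = total - aleatoric
    return mean_p, epistemic, aleatoric
```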
arXiv Detail & Related papers (2024-11-25T13:26:09Z)
- Embedding Trajectory for Out-of-Distribution Detection in Mathematical Reasoning [50.84938730450622]
We propose a trajectory-based method, TV score, which uses trajectory volatility for OOD detection in mathematical reasoning.
Our method outperforms all traditional algorithms on generative language models (GLMs) in mathematical reasoning scenarios.
Our method can be extended to more applications with high-density features in output spaces, such as multiple-choice questions.
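A hedged sketch of what a trajectory-volatility score can look like: stack the per-layer hidden embeddings of an input, measure layer-to-layer displacement, and treat erratic movement as an OOD signal. The exact definition in the paper may differ; everything below is illustrative.
```python
# Hedged sketch: volatility of an embedding trajectory across layers.
# `hidden_states` is assumed to be a list of per-layer hidden vectors.
import torch

def trajectory_volatility(hidden_states):
    """hidden_states: list of [hidden_dim] tensors, one per layer."""
    traj = torch.stack(hidden_states)   # [n_layers, hidden_dim]
    steps = traj[1:] - traj[:-1]        # layer-to-layer displacement
    step_norms = steps.norm(dim=1)
    # Volatility = variance of step sizes along the trajectory;
    # higher values are flagged as out-of-distribution.
    return step_norms.var().item()
```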
arXiv Detail & Related papers (2024-05-22T22:22:25Z)
- Exploiting Diffusion Prior for Generalizable Dense Prediction [85.4563592053464]
Content generated by recent advanced Text-to-Image (T2I) diffusion models is sometimes too imaginative for existing off-the-shelf dense predictors to estimate.
We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks.
Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-30T18:59:44Z)
- Implicit Variational Inference for High-Dimensional Posteriors [7.924706533725115]
In variational inference, the benefits of Bayesian models rely on accurately capturing the true posterior distribution.
We propose using neural samplers that specify implicit distributions, which are well-suited for approximating complex multimodal and correlated posteriors.
Our approach introduces novel bounds for approximate inference using implicit distributions by locally linearising the neural sampler.
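A minimal sketch of the local-linearisation idea: an implicit sampler x = g(eps) has no closed-form density, but linearising g around a draw yields a Gaussian whose entropy is tractable and can enter an ELBO-style bound. The module `g` and the jitter constant are assumptions, not the paper's exact construction.
```python
# Hedged sketch: entropy of the locally linearised pushforward of an
# implicit neural sampler g. `g` is an illustrative callable mapping
# a latent vector eps to a sample.
import torch

def local_gaussian_entropy(g, eps):
    """Entropy of the linearised pushforward N(g(eps), J J^T) at eps."""
    J = torch.autograd.functional.jacobian(g, eps)  # [out_dim, in_dim]
    d = J.shape[0]
    cov = J @ J.T + 1e-6 * torch.eye(d)             # jitter for stability
    # Gaussian entropy: 0.5 * logdet(2 * pi * e * cov)
    return 0.5 * torch.logdet(2 * torch.pi * torch.e * cov)
```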
arXiv Detail & Related papers (2023-10-10T14:06:56Z)
- Bayesian Attention Belief Networks [59.183311769616466]
Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.
This paper introduces Bayesian attention belief networks, which construct a decoder network by modeling unnormalized attention weights.
We show that our method outperforms deterministic attention and state-of-the-art stochastic attention in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks.
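To make the construction concrete, here is a hedged sketch of stochastic attention: unnormalized attention weights are treated as random variables (a reparameterised Weibull is one choice used in this line of work) and the samples are normalised, rather than applying a deterministic softmax. The shape parameter and the single-level distribution are placeholders, not the paper's exact hierarchy.
```python
# Hedged sketch: attention with sampled unnormalized weights.
# The Weibull parameterisation and k_shape are illustrative choices.
import torch

def stochastic_attention(q, k, v, k_shape=10.0):
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    lam = torch.exp(scores)                      # Weibull scale from scores
    u = torch.rand_like(lam).clamp(1e-6, 1 - 1e-6)
    # Reparameterised Weibull sample: lam * (-log(1 - u)) ** (1 / k)
    w = lam * (-torch.log1p(-u)) ** (1.0 / k_shape)
    attn = w / w.sum(dim=-1, keepdim=True)       # normalised random weights
    return attn @ v
```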
arXiv Detail & Related papers (2021-06-09T17:46:22Z)
- A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z)
- Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie [13.476505672245603]
This paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with Plug & Play priors.
We introduce two algorithms: 1) PnP-ULA (Plug & Play Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Plug & Play Stochastic Gradient Descent) for MAP inference.
The algorithms are demonstrated on several problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
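A minimal sketch of a Plug & Play unadjusted Langevin step, where a pre-trained denoiser supplies the prior score via Tweedie's identity, grad log p(x) ~= (D(x) - x) / eps. The callables `grad_log_lik` and `denoiser` and the step sizes are illustrative assumptions, not a tuned implementation.
```python
# Hedged sketch: PnP-ULA-style sampling loop. The denoiser stands in
# for the prior score via Tweedie's identity; parameters are untuned.
import torch

def pnp_ula_sample(x0, grad_log_lik, denoiser, n_steps=1000,
                   delta=1e-4, eps=1e-2):
    x, samples = x0.clone(), []
    for _ in range(n_steps):
        score_prior = (denoiser(x) - x) / eps   # Tweedie approximation
        drift = grad_log_lik(x) + score_prior
        x = x + delta * drift + (2 * delta) ** 0.5 * torch.randn_like(x)
        samples.append(x.clone())
    return samples  # Monte Carlo draws for MMSE estimates and UQ
```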
arXiv Detail & Related papers (2021-03-08T12:46:53Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method that aims to integrate the advantages of both model-based and learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
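As a rough illustration of this kind of robust training, the sketch below uses a standard PGD inner step to generate adversarially perturbed inputs, a common surrogate for the inner maximisation in distributionally robust training; it is not necessarily the paper's algorithm, and all names are illustrative.
```python
# Hedged sketch: PGD inner maximisation over an l_inf ball, used as a
# stand-in for the adversarial step of robust training loops.
import torch

def pgd_perturb(model, loss_fn, x, y, radius=0.03, step=0.01, iters=10):
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                       # ascend the loss
            x_adv = x.clone() + (x_adv - x).clamp(-radius, radius)   # project to ball
    return x_adv.detach()  # train the model on these perturbed inputs
```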
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.