Uncertainty in latent representations of variational autoencoders optimized for visual tasks
- URL: http://arxiv.org/abs/2404.15390v2
- Date: Thu, 23 Jan 2025 19:13:33 GMT
- Title: Uncertainty in latent representations of variational autoencoders optimized for visual tasks
- Authors: Josefina Catoni, Domonkos Martos, Ferenc Csikor, Enzo Ferrante, Diego H. Milone, Balázs Meszéna, Gergő Orbán, Rodrigo Echeveste
- Abstract summary: We investigate the properties of inference in Variational Autoencoders (VAEs). We draw inspiration from classical computer vision to introduce an inductive bias into the VAE. We find that restored inference capabilities are delivered by developing a motif in the inference network.
- Score: 3.9504737666460037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Generative Models (DGMs) can learn flexible latent variable representations of images while avoiding intractable computations, common in Bayesian inference. However, investigating the properties of inference in Variational Autoencoders (VAEs), a major class of DGMs, reveals severe problems in their uncertainty representations. Here we draw inspiration from classical computer vision to introduce an inductive bias into the VAE by incorporating a global explaining-away latent variable, which remedies defective inference in VAEs. Unlike standard VAEs, the Explaining-Away VAE (EA-VAE) provides uncertainty estimates that align with normative requirements across a wide spectrum of perceptual tasks, including image corruption, interpolation, and out-of-distribution detection. We find that restored inference capabilities are delivered by developing a motif in the inference network (the encoder) which is widespread in biological neural networks: divisive normalization. Our results establish EA-VAEs as reliable tools to perform inference under deep generative models with appropriate estimates of uncertainty.
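As a rough illustration of the divisive-normalization motif highlighted in the abstract, the sketch below shows a generic PyTorch encoder head that applies divisive normalization to its features and emits an additional global latent alongside the usual per-dimension means and log-variances. The layer sizes, variable names, and the specific normalization form are assumptions for illustration, not the architecture used in the paper.

```python
# A minimal sketch, assuming a fully connected encoder; not the EA-VAE architecture itself.
import torch
import torch.nn as nn

class DivisiveNormalization(nn.Module):
    """y_i = x_i / sqrt(sigma + sum_j w_ij * x_j^2), a canonical divisive-normalization form."""
    def __init__(self, num_features: int, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma
        # Learnable, non-negative pooling weights over the feature dimension.
        self.weight = nn.Parameter(torch.ones(num_features, num_features) / num_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); square the responses so the normalization pool is non-negative.
        pooled = torch.relu(self.weight) @ (x ** 2).T          # (num_features, batch)
        return x / (self.sigma + pooled.T).sqrt()

class EncoderHead(nn.Module):
    """Encoder head emitting per-dimension latents plus a global scalar latent (names are assumptions)."""
    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                      DivisiveNormalization(256))
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.global_mu = nn.Linear(256, 1)       # global "explaining-away" latent (mean)
        self.global_logvar = nn.Linear(256, 1)   # and its log-variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h), self.global_mu(h), self.global_logvar(h)

mu, logvar, g_mu, g_logvar = EncoderHead(784, 32)(torch.randn(8, 784))
```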
Related papers
- Protect Before Generate: Error Correcting Codes within Discrete Deep Generative Models [3.053842954605396]
We introduce a novel method that enhances variational inference in discrete latent variable models.
We leverage Error Correcting Codes (ECCs) to introduce redundancy in the latent representations.
This redundancy is then exploited by the variational posterior to yield more accurate estimates.
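The summary does not specify which error-correcting code the paper uses; the sketch below illustrates the general idea with the simplest possible ECC, a repetition code over a binary latent, where redundancy lets a majority vote recover bits corrupted during noisy inference.

```python
# A minimal sketch with an assumed repetition code; the paper's actual ECC may differ.
import torch

def ecc_encode(bits: torch.Tensor, repeat: int = 3) -> torch.Tensor:
    """Repeat each latent bit `repeat` times: (batch, k) -> (batch, k * repeat)."""
    return bits.repeat_interleave(repeat, dim=-1)

def ecc_decode(noisy_bits: torch.Tensor, repeat: int = 3) -> torch.Tensor:
    """Majority-vote each group of repeated bits back to a single bit."""
    groups = noisy_bits.view(*noisy_bits.shape[:-1], -1, repeat)
    return (groups.float().mean(dim=-1) > 0.5).long()

# Example: a 4-bit latent survives one flipped bit per group.
z = torch.tensor([[1, 0, 1, 1]])
codeword = ecc_encode(z)      # [[1,1,1, 0,0,0, 1,1,1, 1,1,1]]
codeword[0, 1] = 0            # simulate a corrupted posterior sample
assert torch.equal(ecc_decode(codeword), z)
```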
arXiv Detail & Related papers (2024-10-10T11:59:58Z) - A Non-negative VAE:the Generalized Gamma Belief Network [49.970917207211556]
The gamma belief network (GBN) has demonstrated its potential for uncovering multi-layer interpretable latent representations in text data.
We introduce the generalized gamma belief network (Generalized GBN) in this paper, which extends the original linear generative model to a more expressive non-linear generative model.
We also propose an upward-downward Weibull inference network to approximate the posterior distribution of the latent variables.
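As a rough illustration of how a Weibull inference network can produce reparameterized (and therefore differentiable) posterior samples, the sketch below uses the standard inverse-CDF reparameterization of the Weibull distribution; the upward-downward architecture and the parameter values here are assumptions, not taken from the paper.

```python
# A minimal sketch of the Weibull reparameterization trick.
import torch

def sample_weibull(k: torch.Tensor, lam: torch.Tensor) -> torch.Tensor:
    """Reparameterized Weibull sample: z = lam * (-log(1 - u)) ** (1 / k), u ~ Uniform(0, 1)."""
    u = torch.rand_like(lam)
    return lam * (-torch.log1p(-u)) ** (1.0 / k)

# In practice the shape/scale parameters would come from the inference network.
k = torch.full((8, 16), 2.0)    # shape parameters (assumed values)
lam = torch.full((8, 16), 1.5)  # scale parameters (assumed values)
z = sample_weibull(k, lam)      # differentiable w.r.t. k and lam
```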
arXiv Detail & Related papers (2024-08-06T18:18:37Z) - Visual Analysis of Prediction Uncertainty in Neural Networks for Deep Image Synthesis [3.09988520562118]
It is imperative to comprehend the quality, confidence, robustness, and uncertainty associated with the predictions of deep neural networks (DNNs).
A thorough understanding of these quantities produces actionable insights that help application scientists make informed decisions.
This contribution demonstrates how the prediction uncertainty and sensitivity of DNNs can be estimated efficiently using various methods.
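The summary mentions "various methods" for estimating prediction uncertainty without naming them; the sketch below shows one widely used option, Monte Carlo dropout, purely as an example of the kind of estimate involved, not necessarily a method evaluated in the paper.

```python
# A minimal Monte Carlo dropout sketch; architecture and sample count are assumptions.
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Keep dropout active at test time; return mean and variance over stochastic forward passes."""
    model.train()  # enables dropout layers
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)

# Toy regressor with dropout, for illustration only.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1))
mean, var = mc_dropout_predict(model, torch.randn(4, 10))
```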
arXiv Detail & Related papers (2024-05-22T20:01:31Z) - Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
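As a rough point of reference for how CLIP can score image quality with text prompts, the sketch below compares an image against a pair of antonym prompts (CLIP-IQA style); the prompt strings, model choice, and file name are assumptions, and this is not the paper's multi-modal prompt-learning method.

```python
# A minimal CLIP antonym-prompt quality-scoring sketch (assumed prompts and file name).
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)   # placeholder image path
prompts = clip.tokenize(["a good photo", "a bad photo"]).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(prompts)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    logits = 100.0 * img_feat @ txt_feat.T
    quality_score = logits.softmax(dim=-1)[0, 0].item()  # P("good photo") as a quality proxy
print(f"quality score: {quality_score:.3f}")
```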
arXiv Detail & Related papers (2024-04-23T11:45:32Z) - Bridging Generative and Discriminative Models for Unified Visual Perception with Diffusion Priors [56.82596340418697]
We propose a simple yet effective framework, dubbed Vermouth, comprising a pre-trained Stable Diffusion (SD) model containing rich generative priors, a unified head (U-head) capable of integrating hierarchical representations, and an adapted expert providing discriminative priors.
Comprehensive investigations unveil potential characteristics of Vermouth, such as varying granularity of perception concealed in latent variables at distinct time steps and various U-net stages.
The promising results demonstrate the potential of diffusion models as formidable learners, establishing their significance in furnishing informative and robust visual representations.
arXiv Detail & Related papers (2024-01-29T10:36:57Z) - DiG-IN: Diffusion Guidance for Investigating Networks -- Uncovering Classifier Differences, Neuron Visualisations and Visual Counterfactual Explanations [35.458709912618176]
Deep learning has led to huge progress in complex image classification tasks like ImageNet, but also to unexpected failure modes, e.g. via spurious features.
For safety-critical tasks the black-box nature of their decisions is problematic, and explanations or at least methods which make decisions plausible are needed urgently.
We address these problems by generating images that optimize a classifier-derived objective using a framework for guided image generation.
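The paper works with guided diffusion; as a simpler stand-in for "generating images that optimize a classifier-derived objective", the sketch below performs plain pixel-space gradient ascent on a target-class logit. The classifier, target class, and hyperparameters are placeholders.

```python
# A minimal sketch of optimizing an image under a classifier-derived objective
# (plain gradient ascent, not the paper's diffusion-guided framework).
import torch
import torchvision.models as models

classifier = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in classifier.parameters():
    p.requires_grad_(False)          # only the image is optimized
target_class = 3                     # assumed target class

image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    logits = classifier(image)
    # Maximize the target logit while lightly regularizing the image.
    loss = -logits[0, target_class] + 1e-4 * image.pow(2).sum()
    loss.backward()
    optimizer.step()
```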
arXiv Detail & Related papers (2023-11-29T17:35:29Z) - Variational Voxel Pseudo Image Tracking [127.46919555100543]
Uncertainty estimation is an important task for critical problems, such as robotics and autonomous driving.
We propose a Variational Neural Network-based version of a Voxel Pseudo Image Tracking (VPIT) method for 3D Single Object Tracking.
arXiv Detail & Related papers (2023-02-12T13:34:50Z) - Robustness and invariance properties of image classifiers [8.970032486260695]
Deep neural networks have achieved impressive results in many image classification tasks.
Deep networks are not robust to a large variety of semantic-preserving image modifications.
The poor robustness of image classifiers to small data distribution shifts raises serious concerns regarding their trustworthiness.
arXiv Detail & Related papers (2022-08-30T11:00:59Z) - Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs using minimum computational expense.
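As a generic illustration of combining amortized (fast) and iterative (slow) inference, the sketch below initializes a latent with an encoder and then refines it with a few gradient steps on the prediction error; the paper's predictive-coding update rules and its uncertainty-based balancing are not reproduced here.

```python
# A minimal hybrid amortized + iterative inference sketch (sizes and modules are assumptions).
import torch
import torch.nn as nn

encoder = nn.Linear(784, 32)   # amortized inference network
decoder = nn.Linear(32, 784)   # generative model

def hybrid_infer(x: torch.Tensor, n_steps: int = 10, lr: float = 0.1) -> torch.Tensor:
    z = encoder(x).detach().requires_grad_(True)   # fast amortized initialization
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(n_steps):                       # slow iterative refinement
        opt.zero_grad()
        err = ((decoder(z) - x) ** 2).mean()       # prediction error to minimize
        err.backward()
        opt.step()
    return z.detach()

z = hybrid_infer(torch.randn(8, 784))
```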
arXiv Detail & Related papers (2022-04-05T12:52:45Z) - Robustness in Deep Learning for Computer Vision: Mind the gap? [13.576376492050185]
We identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision.
We find that this area of research has received disproportionately little attention relative to adversarial machine learning.
arXiv Detail & Related papers (2021-12-01T16:42:38Z) - Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness [61.827054365139645]
The Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference.
We propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space.
arXiv Detail & Related papers (2021-10-24T07:58:13Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z) - Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models, on certain occasions, marginally outperform conventional neural networks.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z) - Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method, aiming to integrate the advantages of both approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z) - Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z) - Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z) - Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z) - NestedVAE: Isolating Common Factors via Weak Supervision [45.366986365879505]
We identify the connection between the task of bias reduction and that of isolating factors common between domains.
To isolate the common factors we combine the theory of deep latent variable models with information bottleneck theory.
Two outer VAEs with shared weights attempt to reconstruct the input and infer a latent space, whilst a nested VAE attempts to reconstruct the latent representation of one image, from the latent representation of its paired image.
arXiv Detail & Related papers (2020-02-26T15:49:57Z) - A Heteroscedastic Uncertainty Model for Decoupling Sources of MRI Image
Quality [3.5480752735999417]
Quality control (QC) of medical images is essential to ensure that downstream analyses such as segmentation can be performed successfully.
We aim to automate the process by formulating a probabilistic network that estimates uncertainty through a heteroscedastic noise model.
We show models trained with simulated artefacts provide informative measures of uncertainty on real-world images and we validate our uncertainty predictions on problematic images identified by human-raters.
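As a rough illustration of a heteroscedastic noise model, the sketch below has a network head predict both a mean and a per-input log-variance, trained with the Gaussian negative log-likelihood; the QC-specific architecture and artefact simulation from the paper are not shown.

```python
# A minimal heteroscedastic regression head; sizes and names are assumptions.
import torch
import torch.nn as nn

class HeteroscedasticHead(nn.Module):
    def __init__(self, in_dim: int):
        super().__init__()
        self.mean = nn.Linear(in_dim, 1)
        self.log_var = nn.Linear(in_dim, 1)   # per-input noise estimate

    def forward(self, h):
        return self.mean(h), self.log_var(h)

def gaussian_nll(y, mu, log_var):
    # 0.5 * [ (y - mu)^2 / sigma^2 + log sigma^2 ]; a large predicted variance
    # down-weights the squared error, which is how per-input uncertainty is learned.
    return 0.5 * (torch.exp(-log_var) * (y - mu) ** 2 + log_var).mean()

head = HeteroscedasticHead(64)
h, y = torch.randn(16, 64), torch.randn(16, 1)
mu, log_var = head(h)
loss = gaussian_nll(y, mu, log_var)
```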
arXiv Detail & Related papers (2020-01-31T16:04:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.