Hierarchical Variational Autoencoder for Visual Counterfactuals
- URL: http://arxiv.org/abs/2102.00854v1
- Date: Mon, 1 Feb 2021 14:07:11 GMT
- Title: Hierarchical Variational Autoencoder for Visual Counterfactuals
- Authors: Nicolas Vercheval, Aleksandra Pizurica
- Abstract summary: Conditional Variational Autoencoders (CVAE) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a Hierarchical VAE designed for this approach that can visually audit a classifier in applications.
- Score: 79.86967775454316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conditional Variational Autoencoders (CVAE) are gathering significant
attention as an Explainable Artificial Intelligence (XAI) tool. The codes in
the latent space provide a theoretically sound way to produce counterfactuals,
i.e. alterations resulting from an intervention on a targeted semantic feature.
To be applied to real images, more complex models are needed, such as a
Hierarchical CVAE. This comes with a challenge, as the naive conditioning is no
longer effective. In this paper we show how relaxing the effect of the
posterior leads to successful counterfactuals, and we introduce VAEX, a
Hierarchical VAE designed for this approach that can visually audit a
classifier in applications.
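As a rough illustration of the mechanism the abstract describes, the sketch below encodes an image with a plain conditional VAE and then decodes its latent code under a different target class, yielding a counterfactual. This is a generic toy CVAE, not the authors' VAEX; the architecture, dimensions, and function names are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Minimal conditional VAE; a toy stand-in, not the paper's VAEX."""
    def __init__(self, x_dim=784, latent_dim=32, num_classes=10):
        super().__init__()
        self.num_classes = num_classes
        self.enc = nn.Sequential(nn.Linear(x_dim + num_classes, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, x_dim), nn.Sigmoid(),
        )

    def encode(self, x, y):
        y1h = F.one_hot(y, self.num_classes).float()
        h = self.enc(torch.cat([x, y1h], dim=1))
        return self.mu(h), self.logvar(h)

    def decode(self, z, y):
        y1h = F.one_hot(y, self.num_classes).float()
        return self.dec(torch.cat([z, y1h], dim=1))

def counterfactual(model, x, y_src, y_target):
    """Keep the latent code inferred for (x, y_src), intervene on the
    class condition, and decode under y_target."""
    mu, _ = model.encode(x, y_src)    # posterior mean as the code
    return model.decode(mu, y_target)
```

In a hierarchical CVAE the same intervention touches several latent levels at once, which is where, per the abstract, the naive conditioning stops working.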
Related papers
- Interpretable Spectral Variational AutoEncoder (ISVAE) for time series
clustering [48.0650332513417]
We introduce a novel model that incorporates an interpretable bottleneck, termed the Filter Bank (FB), at the outset of a Variational Autoencoder (VAE).
This arrangement compels the VAE to attend to the most informative segments of the input signal.
By deliberately constraining the VAE with this FB, we promote the development of an encoding that is discernible, separable, and of reduced dimensionality.
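A minimal sketch of how such a filter-bank bottleneck could sit in front of a VAE encoder, assuming the FB is a small set of learnable 1-D filters whose peak responses form the compact code (the actual ISVAE design may differ in the pooling and filter parametrization):

```python
import torch
import torch.nn as nn

class FilterBankFrontEnd(nn.Module):
    """Learnable filter bank placed before a VAE encoder (illustrative)."""
    def __init__(self, n_filters=8, kernel_size=16):
        super().__init__()
        # Each filter learns to respond to one kind of local pattern.
        self.bank = nn.Conv1d(1, n_filters, kernel_size, padding="same")

    def forward(self, x):                  # x: (batch, 1, time)
        responses = self.bank(x)           # (batch, n_filters, time)
        # Keep only the strongest activation per filter: a low-dimensional,
        # inspectable code to feed the downstream VAE encoder.
        code, _ = responses.max(dim=-1)    # (batch, n_filters)
        return code
```

Because every latent coordinate traces back to one named filter, the resulting encoding stays discernible and separable in the sense the summary describes.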
arXiv Detail & Related papers (2023-10-18T13:06:05Z) - CR-VAE: Contrastive Regularization on Variational Autoencoders for
Preventing Posterior Collapse [1.0044057719679085]
The Variational Autoencoder (VAE) is known to suffer from the phenomenon of posterior collapse.
We propose a novel solution, the Contrastive Regularization for Variational Autoencoders (CR-VAE).
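A hedged sketch of what a contrastive regularizer on latent codes can look like; the InfoNCE-style form, the temperature, and the use of two augmented views are assumptions here, not necessarily CR-VAE's exact loss:

```python
import torch
import torch.nn.functional as F

def contrastive_regularizer(z1, z2, temperature=0.1):
    """z1, z2: latent codes of two augmented views, shape (batch, dim).
    Pulling matching views together forces the latents to stay
    informative, counteracting posterior collapse."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature             # pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)        # positives on the diagonal

# Illustrative total objective:
# loss = reconstruction + beta * kl + lam * contrastive_regularizer(z1, z2)
```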
arXiv Detail & Related papers (2023-09-06T13:05:42Z) - Collaborative Auto-encoding for Blind Image Quality Assessment [17.081262827258943]
Blind image quality assessment (BIQA) is a challenging problem with important real-world applications.
Recent efforts attempting to exploit powerful representations by deep neural networks (DNN) are hindered by the lack of subjectively annotated data.
This paper presents a novel BIQA method which overcomes this fundamental obstacle.
arXiv Detail & Related papers (2023-05-24T03:45:03Z) - Defending Variational Autoencoders from Adversarial Attacks with MCMC [74.36233246536459]
Variational autoencoders (VAEs) are deep generative models used in various domains.
As previous work has shown, one can easily fool VAEs into producing unexpected latent representations and reconstructions for an input with only slight visual modifications.
Here, we examine several objective functions for constructing adversarial attacks, suggest metrics to assess model robustness, and propose a solution.
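One way to realize an MCMC defence is to refine the encoder's output with a few sampling steps toward the model posterior p(z|x) ∝ p(x|z)p(z), pulling an adversarially induced code back toward codes that actually explain the input. The unadjusted Langevin sketch below is an illustrative assumption, not necessarily the sampler used in the paper:

```python
import torch

def langevin_refine(decoder, x, z_init, steps=10, step_size=1e-2):
    """Refine a latent code with unadjusted Langevin steps targeting
    p(z | x), assuming a Gaussian likelihood and an N(0, I) prior."""
    z = z_init.detach().clone().requires_grad_(True)
    for _ in range(steps):
        x_rec = decoder(z)
        # log p(x | z) + log p(z), up to constants
        log_p = -((x_rec - x) ** 2).sum() - 0.5 * (z ** 2).sum()
        (grad,) = torch.autograd.grad(log_p, z)
        with torch.no_grad():
            z = z + 0.5 * step_size * grad \
                + (step_size ** 0.5) * torch.randn_like(z)
        z.requires_grad_(True)
    return z.detach()
```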
arXiv Detail & Related papers (2022-03-18T13:25:18Z) - Is Disentanglement enough? On Latent Representations for Controllable
Music Generation [78.8942067357231]
In the absence of a strong generative decoder, disentanglement does not necessarily imply controllability.
The structure of the latent space with respect to the VAE-decoder plays an important role in boosting the ability of a generative model to manipulate different attributes.
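The kind of manipulation at stake can be pictured with a simple latent traversal: move along one latent coordinate and decode. Whether this actually changes a single attribute depends on how the decoder responds to that coordinate, which is precisely the paper's point; the helper below is a generic illustration, not the paper's method:

```python
import torch

def traverse(decoder, z, dim, values):
    """Decode copies of z with one latent coordinate swept over `values`.
    Controllability holds only if the decoder maps this sweep to a
    consistent change of one attribute."""
    outputs = []
    for v in values:
        z_mod = z.clone()
        z_mod[:, dim] = v          # intervene on a single latent coordinate
        outputs.append(decoder(z_mod))
    return outputs
```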
arXiv Detail & Related papers (2021-08-01T18:37:43Z) - AAVAE: Augmentation-Augmented Variational Autoencoders [43.73699420145321]
We introduce augmentation-augmented variational autoencoders (AAVAE), a third approach to self-supervised learning based on autoencoding.
We empirically evaluate the proposed AAVAE on image classification, similar to how recent contrastive and non-contrastive learning algorithms have been evaluated.
arXiv Detail & Related papers (2021-07-26T17:04:30Z) - Discrete Auto-regressive Variational Attention Models for Text Modeling [53.38382932162732]
Variational autoencoders (VAEs) have been widely applied for text modeling.
They are troubled by two challenges: information underrepresentation and posterior collapse.
We propose Discrete Auto-regressive Variational Attention Model (DAVAM) to address the challenges.
arXiv Detail & Related papers (2021-06-16T06:36:26Z) - Autoencoding Variational Autoencoder [56.05008520271406]
A VAE does not necessarily encode consistently the typical samples generated by its own decoder. We study the implications of this behaviour on the learned representations, and the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
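A hedged sketch of the re-encoding idea behind self-consistency: samples drawn from the decoder should map back to (roughly) the latent codes that produced them. The exact objective in the paper differs in detail; the encoder is assumed here to return a (mean, log-variance) pair:

```python
import torch

def self_consistency_loss(encoder, decoder, batch_size, latent_dim, device="cpu"):
    z = torch.randn(batch_size, latent_dim, device=device)  # z ~ p(z)
    x_gen = decoder(z)              # generate a sample from the model
    mu, _ = encoder(x_gen)          # re-encode the generated sample
    return ((mu - z) ** 2).mean()   # penalize encode(decode(z)) far from z
```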
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.