EXoN: EXplainable encoder Network
- URL: http://arxiv.org/abs/2105.10867v1
- Date: Sun, 23 May 2021 07:04:30 GMT
- Title: EXoN: EXplainable encoder Network
- Authors: SeungHwan An, Jong-June Jeon, Hosik Choi
- Abstract summary: We propose a new semi-supervised learning method for the Variational AutoEncoder (VAE) that yields an explainable latent space via the EXplainable encoder Network (EXoN).
Negative cross-entropy and Kullback-Leibler divergence play a crucial role in constructing explainable latent space.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new semi-supervised learning method for the Variational AutoEncoder (VAE) that yields an explainable latent space via the EXplainable encoder Network (EXoN). The EXoN provides two useful tools for implementing a VAE. First, we can freely assign a conceptual center of the latent distribution to a specific label: we partition the latent space of the VAE, using the multi-modal property of a Gaussian mixture distribution, according to the labels of the observations. Second, we can easily investigate a latent subspace with a simple statistic, the $F$-statistic, obtained from the EXoN. We found that both the negative cross-entropy and the Kullback-Leibler divergence play a crucial role in constructing an explainable latent space, and that the variability of the samples generated by our proposed model depends on a specific subspace, called the 'activated latent subspace'. With the MNIST and CIFAR-10 datasets, we show that the EXoN can produce an explainable latent space that effectively represents the labels and characteristics of the images.
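To make the two loss ingredients named in the abstract concrete, here is a minimal NumPy sketch, assuming a diagonal Gaussian encoder $q(z|x) = N(\mu, \mathrm{diag}(\sigma^2))$ and a fixed conceptual center with identity covariance for each label; the reconstruction term is omitted and all names below are illustrative, not the authors' code.

```python
import numpy as np

def kl_to_label_center(mu, log_var, center):
    """KL( N(mu, diag(exp(log_var))) || N(center, I) ), in closed form.

    Pulls the encoded posterior of an observation toward the conceptual
    center assigned to its label, which is what separates the Gaussian
    mixture components in the latent space.
    """
    var = np.exp(log_var)
    return 0.5 * np.sum(var + (mu - center) ** 2 - 1.0 - log_var)

def cross_entropy(probs, label, eps=1e-12):
    """Negative log-probability of the true label under the classifier head."""
    return -np.log(probs[label] + eps)

# Illustrative setup: 10 classes, 2-D latent space, centers placed on a circle.
num_classes, latent_dim = 10, 2
angles = 2 * np.pi * np.arange(num_classes) / num_classes
centers = 3.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Fake encoder/classifier outputs for one labeled observation (label 3).
mu, log_var = np.array([0.5, -0.2]), np.array([-1.0, -1.2])
probs = np.full(num_classes, 0.05)
probs[3] = 0.55

loss = kl_to_label_center(mu, log_var, centers[3]) + cross_entropy(probs, 3)
print(f"KL + cross-entropy for this example: {loss:.3f}")
```

Freely choosing the `centers` array is the sense in which a conceptual center can be assigned per label; the $F$-statistic analysis then asks which latent coordinates actually vary across these label groups.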
Related papers
- Unsupervised Panoptic Interpretation of Latent Spaces in GANs Using Space-Filling Vector Quantization [9.181917968017258]
Generative adversarial networks (GANs) learn a latent space whose samples can be mapped to real-world images.
Some earlier supervised methods aim to create an interpretable latent space or discover interpretable directions.
We propose using a modification of vector quantization called space-filling vector quantization (SFVQ), which quantizes the data on a piecewise linear curve (a toy sketch follows this entry).
arXiv Detail & Related papers (2024-10-27T19:56:02Z)
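The summary above says SFVQ quantizes data on a piecewise linear curve; the following is a minimal sketch of that idea under the assumption that the curve is the polyline through an ordered codebook, leaving out how the codebook itself is trained.

```python
import numpy as np

def project_onto_polyline(x, codebook):
    """Map x to the nearest point on the piecewise linear curve passing
    through the ordered codebook vectors.

    Unlike plain vector quantization (nearest codeword only), any point
    on a segment between consecutive codewords is a valid output, so the
    curve can 'fill' the space between codebook entries.
    """
    best_point, best_dist = None, np.inf
    for a, b in zip(codebook[:-1], codebook[1:]):
        seg = b - a
        # Clamp the projection coefficient so we stay on the segment.
        t = np.clip(np.dot(x - a, seg) / np.dot(seg, seg), 0.0, 1.0)
        point = a + t * seg
        dist = np.sum((x - point) ** 2)
        if dist < best_dist:
            best_point, best_dist = point, dist
    return best_point

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 2))   # ordered codewords define the curve
print(project_onto_polyline(rng.normal(size=2), codebook))
```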
- Adaptive Learning of the Latent Space of Wasserstein Generative Adversarial Networks [7.958528596692594]
We propose a novel framework called the latent Wasserstein GAN (LWGAN).
It fuses the Wasserstein auto-encoder and the Wasserstein GAN so that the intrinsic dimension of the data manifold can be adaptively learned.
We show that LWGAN is able to identify the correct intrinsic dimension under several scenarios.
arXiv Detail & Related papers (2024-09-27T01:25:22Z)
- Symmetric Equilibrium Learning of VAEs [56.56929742714685]
We view variational autoencoders (VAEs) as decoder-encoder pairs, which map distributions in the data space to distributions in the latent space and vice versa.
We propose a Nash equilibrium learning approach, which is symmetric with respect to the encoder and decoder and allows learning VAEs in situations where both the data and the latent distributions are accessible only by sampling.
arXiv Detail & Related papers (2023-07-19T10:27:34Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models, as sketched below.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
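A minimal sketch of the latent-quantization idea mentioned above, assuming each latent coordinate is snapped to its own small codebook of scalar values; the codebook sizes and the training details (e.g., the straight-through gradient) are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def quantize_latent(z, codebooks):
    """Snap each latent coordinate to the nearest value in that
    coordinate's own scalar codebook.

    Per-dimension quantization imposes a discrete, organized grid on the
    latent space; in training, the non-differentiable rounding is
    usually bypassed with a straight-through estimator.
    """
    z_quantized = np.empty_like(z)
    for i, values in enumerate(codebooks):
        z_quantized[i] = values[np.argmin(np.abs(values - z[i]))]
    return z_quantized

latent_dim, codes_per_dim = 4, 5
rng = np.random.default_rng(1)
codebooks = [np.sort(rng.normal(size=codes_per_dim)) for _ in range(latent_dim)]
z = rng.normal(size=latent_dim)      # continuous encoder output
print("continuous:", z)
print("quantized: ", quantize_latent(z, codebooks))
```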
- StyleGenes: Discrete and Efficient Latent Distributions for GANs [149.0290830305808]
We propose a discrete latent distribution for Generative Adversarial Networks (GANs).
Instead of drawing latent vectors from a continuous prior, we sample from a finite set of learnable latents.
We take inspiration from the encoding of information in biological organisms (a toy sketch of the sampling scheme follows this entry).
arXiv Detail & Related papers (2023-04-30T23:28:46Z)
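The StyleGenes summary describes drawing latents from a finite set of learnable vectors rather than a continuous prior, with a gene-like combinatorial encoding; here is a toy sketch of such a sampling scheme, with all sizes and names chosen for illustration.

```python
import numpy as np

def sample_discrete_latent(gene_tables, rng):
    """Assemble a latent vector by independently picking one learnable
    'variant' per 'gene' and concatenating the chosen embeddings.

    The latent distribution is finite but combinatorially large:
    variants_per_gene ** num_genes distinct latent vectors.
    """
    parts = [table[rng.integers(len(table))] for table in gene_tables]
    return np.concatenate(parts)

num_genes, variants_per_gene, dim_per_gene = 16, 8, 4
rng = np.random.default_rng(2)
# Each table holds the learnable embeddings for one gene's variants.
gene_tables = [rng.normal(size=(variants_per_gene, dim_per_gene))
               for _ in range(num_genes)]
z = sample_discrete_latent(gene_tables, rng)
print(z.shape)   # (64,) -- used in place of a Gaussian noise vector
```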
- Linking data separation, visual separation, and classifier performance using pseudo-labeling by contrastive learning [125.99533416395765]
We argue that the performance of the final classifier depends on the data separation present in the latent space and visual separation present in the projection.
We demonstrate our results by the classification of five real-world challenging image datasets of human intestinal parasites with only 1% supervised samples.
arXiv Detail & Related papers (2023-02-06T10:01:38Z)
- Tensor-based Multi-view Spectral Clustering via Shared Latent Space [14.470859959783995]
Multi-view Spectral Clustering (MvSC) attracts increasing attention due to diverse data sources.
A new method for MvSC is proposed via a shared latent space derived from the Restricted Kernel Machine framework.
arXiv Detail & Related papers (2022-07-23T17:30:54Z)
- Structured Uncertainty in the Observation Space of Variational Autoencoders [20.709989481734794]
In image synthesis, sampling from pixel-wise independent observation distributions produces spatially incoherent results with uncorrelated pixel noise.
We propose an alternative model for the observation space, encoding spatial dependencies via a low-rank parameterisation.
In contrast to pixel-wise independent distributions, our samples seem to contain semantically meaningful variations from the mean, allowing the prediction of multiple plausible outputs (a sampling sketch follows this entry).
arXiv Detail & Related papers (2022-05-25T07:12:50Z)
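A minimal sketch of sampling from a Gaussian observation model whose covariance has the low-rank-plus-diagonal form suggested by the summary above; the shapes and the rank are illustrative assumptions.

```python
import numpy as np

def sample_low_rank_gaussian(mean, factors, diag_var, rng):
    """Draw x ~ N(mean, F F^T + diag(diag_var)) without ever forming
    the full covariance matrix.

    The rank-r factor matrix F correlates pixels, producing spatially
    coherent variation; the diagonal term keeps the covariance full rank.
    """
    eps_low = rng.normal(size=factors.shape[1])   # shared correlated directions
    eps_diag = rng.normal(size=mean.shape)        # independent per-pixel noise
    return mean + factors @ eps_low + np.sqrt(diag_var) * eps_diag

num_pixels, rank = 64, 5
rng = np.random.default_rng(3)
mean = np.zeros(num_pixels)                       # decoder's predicted mean image
factors = 0.3 * rng.normal(size=(num_pixels, rank))
diag_var = np.full(num_pixels, 0.01)
print(sample_low_rank_gaussian(mean, factors, diag_var, rng).shape)
```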
- Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction [27.020835928724775]
This work proposes a new computational framework for learning an explicit generative model for real-world datasets.
In particular, we propose to learn a closed-loop transcription between a multi-class, multi-dimensional data distribution and a linear discriminative representation (LDR) in the feature space.
Our experiments on many benchmark imagery datasets demonstrate the tremendous potential of this new closed-loop formulation (a sketch of the coding-rate quantity being minimaxed follows this entry).
arXiv Detail & Related papers (2021-11-12T10:06:08Z)
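The rate-reduction objective referenced above builds on a coding-rate estimate for a set of features; the following is a minimal sketch of that quantity, $R(Z) = \frac{1}{2}\log\det(I + \frac{d}{n\epsilon^2} Z Z^\top)$, with the encoder-decoder minimax game itself omitted.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Coding rate R(Z) = 0.5 * logdet(I + d / (n * eps^2) * Z @ Z.T)
    for a feature matrix Z of shape (d, n): roughly, the number of nats
    needed to encode the n feature vectors up to distortion eps.

    Rate *reduction* compares this whole-set rate with the sum of
    per-class rates; the closed-loop formulation has the encoder and
    decoder play a minimax game over such quantities.
    """
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps ** 2)) * Z @ Z.T)
    return 0.5 * logdet

rng = np.random.default_rng(4)
Z = rng.normal(size=(16, 100))   # 100 feature vectors of dimension 16
print(f"coding rate: {coding_rate(Z):.2f} nats")
```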
- GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF (a toy sketch follows this entry).
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
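To illustrate the design described above, here is a toy sketch in which one simple discriminator is attached to each graph neighborhood and the scores are summed; the linear discriminators and the chain-structured graph are stand-ins for illustration only.

```python
import numpy as np

def neighborhood_score(x, neighborhoods, weights):
    """Score a sample with one simple (here: linear) discriminator per
    graph neighborhood, then sum the scores.

    Each discriminator sees only the variables in its neighborhood, so
    it models a far lower-dimensional marginal than a single global
    discriminator would have to.
    """
    return sum(float(w @ x[idx]) for idx, w in zip(neighborhoods, weights))

# Toy chain-structured graph over 6 variables: neighborhoods are
# the overlapping pairs (i, i + 1).
neighborhoods = [np.array([i, i + 1]) for i in range(5)]
rng = np.random.default_rng(5)
weights = [rng.normal(size=2) for _ in neighborhoods]
x = rng.normal(size=6)
print(f"summed score: {neighborhood_score(x, neighborhoods, weights):.3f}")
```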
This list is automatically generated from the titles and abstracts of the papers on this site.