Self-supervised Enhancement of Latent Discovery in GANs
- URL: http://arxiv.org/abs/2112.08835v1
- Date: Thu, 16 Dec 2021 12:36:40 GMT
- Title: Self-supervised Enhancement of Latent Discovery in GANs
- Authors: Silpa Vadakkeeveetil Sreelatha, Adarsh Kappiyath, S Sumitra
- Abstract summary: We propose the Scale Ranking Estimator (SRE), which is trained using self-supervision.
SRE enhances the disentanglement of directions obtained by existing unsupervised disentanglement techniques.
We also show that the learned SRE can be used to perform attribute-based image retrieval without further training.
- Score: 2.277447144331876
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Several methods for discovering interpretable directions in the latent space
of pre-trained GANs have been proposed. Latent semantics discovered by
unsupervised methods are relatively less disentangled than those discovered by
supervised methods, since unsupervised methods do not use pre-trained attribute
classifiers. We propose the Scale Ranking Estimator (SRE), which is trained using
self-supervision. SRE enhances the disentanglement of directions obtained by
existing unsupervised disentanglement techniques. These directions are updated to
preserve the ordering of variation within each direction in latent space.
Qualitative and quantitative evaluation of the discovered directions demonstrates
that our proposed method significantly improves disentanglement across various
datasets. We also show that the learned SRE can be used to perform attribute-based
image retrieval without further training.
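The abstract describes the idea at a high level but does not specify the SRE architecture or training objective. Below is a minimal, hedged sketch of one way such a self-supervised scale-ranking objective could be set up; ToyGenerator, the network sizes, the margin ranking loss, and the joint update of the candidate directions are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a Scale Ranking Estimator (SRE) training loop (assumption: the
# abstract does not give the architecture or loss; this shows one plausible
# self-supervised ranking objective, not the paper's exact method).
import torch
import torch.nn as nn

LATENT_DIM, NUM_DIRS, IMG_CH = 128, 16, 3

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained, frozen GAN generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 8 * 8 * IMG_CH), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, IMG_CH, 8, 8)

class SRE(nn.Module):
    """Predicts a scalar 'scale score' per direction for a generated image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(IMG_CH * 8 * 8, 256),
                                 nn.ReLU(), nn.Linear(256, NUM_DIRS))
    def forward(self, img):
        return self.net(img)

G = ToyGenerator().eval()                          # frozen pretrained generator
for p in G.parameters():
    p.requires_grad_(False)

# Candidate directions, e.g. initialized from an unsupervised discovery method.
directions = nn.Parameter(torch.randn(NUM_DIRS, LATENT_DIM))
sre = SRE()
opt = torch.optim.Adam([directions, *sre.parameters()], lr=1e-4)

for step in range(100):
    z = torch.randn(32, LATENT_DIM)
    k = torch.randint(NUM_DIRS, (32,))             # direction index per sample
    a = torch.rand(32, 1) * 6 - 3                  # two random shift scales
    b = torch.rand(32, 1) * 6 - 3
    d = directions[k]
    img_a, img_b = G(z + a * d), G(z + b * d)
    # Self-supervision: the SRE score along direction k must preserve the
    # ordering of the shift scales (a hinge / margin ranking objective).
    s_a = sre(img_a).gather(1, k[:, None])
    s_b = sre(img_b).gather(1, k[:, None])
    sign = torch.sign(a - b)
    loss = torch.relu(1.0 - sign * (s_a - s_b)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

In practice the frozen generator would be a pretrained GAN (e.g., a StyleGAN or BigGAN model) and the candidate directions would come from an existing unsupervised discovery method; the hinge ranking loss above is just one concrete way to encourage the "ordering of variation" within each direction that the abstract describes.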
Related papers
- No Regrets: Investigating and Improving Regret Approximations for Curriculum Discovery [53.08822154199948]
Unsupervised Environment Design (UED) methods have gained recent attention as their adaptive curricula promise to enable agents to be robust to in- and out-of-distribution tasks.
This work investigates how existing UED methods select training environments, focusing on task prioritisation metrics.
We develop a method that directly trains on scenarios with high learnability.
arXiv Detail & Related papers (2024-08-27T14:31:54Z) - Unsupervised Discovery of Interpretable Directions in h-space of
Pre-trained Diffusion Models [63.1637853118899]
We propose the first unsupervised and learning-based method to identify interpretable directions in h-space of pre-trained diffusion models.
We employ a shift control module that works on h-space of pre-trained diffusion models to manipulate a sample into a shifted version of itself.
By jointly optimizing them, the model will spontaneously discover disentangled and interpretable directions.
arXiv Detail & Related papers (2023-10-15T18:44:30Z) - Surveillance Evasion Through Bayesian Reinforcement Learning [78.79938727251594]
We consider a 2D continuous path planning problem with a completely unknown intensity of random termination.
The observers' surveillance intensity is a priori unknown and has to be learned through repetitive path planning.
arXiv Detail & Related papers (2021-09-30T02:29:21Z) - LARGE: Latent-Based Regression through GAN Semantics [42.50535188836529]
We propose a novel method for solving regression tasks using few-shot or weak supervision.
We show that our method can be applied across a wide range of domains, leverage multiple latent direction discovery frameworks, and achieve state-of-the-art results.
arXiv Detail & Related papers (2021-07-22T17:55:35Z) - On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z) - LatentCLR: A Contrastive Learning Approach for Unsupervised Discovery of
Interpretable Directions [0.02294014185517203]
We propose a contrastive-learning-based approach for discovering semantic directions in the latent space of pretrained GANs.
Our approach finds semantically meaningful dimensions compatible with state-of-the-art methods.
arXiv Detail & Related papers (2021-04-02T00:11:22Z) - Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z) - Guided Variational Autoencoder for Disentanglement Learning [79.02010588207416]
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.
We design an unsupervised strategy and a supervised strategy in Guided-VAE and observe enhanced modeling and controlling capability over the vanilla VAE.
arXiv Detail & Related papers (2020-04-02T20:49:15Z) - Unsupervised Discovery of Interpretable Directions in the GAN Latent
Space [39.54530450932134]
Latent spaces of GAN models often have semantically meaningful directions.
We introduce an unsupervised method to identify interpretable directions in the latent space of a pretrained GAN model.
We show how to exploit this finding to achieve competitive performance for weakly-supervised saliency detection.
arXiv Detail & Related papers (2020-02-10T13:57:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.