Classify and Generate: Using Classification Latent Space Representations
for Image Generations
- URL: http://arxiv.org/abs/2004.07543v2
- Date: Tue, 14 Dec 2021 08:00:11 GMT
- Authors: Saisubramaniam Gopalakrishnan, Pranshu Ranjan Singh, Yasin Yazici,
Chuan-Sheng Foo, Vijay Chandrasekhar, ArulMurugan Ambikapathi
- Abstract summary: We propose a discriminative modeling framework that employs manipulated supervised latent representations to reconstruct and generate new samples belonging to a given class.
ReGene has higher classification accuracy than existing conditional generative models while being competitive in terms of FID.
- Score: 17.184760662429834
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Utilization of classification latent space information for downstream
reconstruction and generation is an intriguing and relatively unexplored
area. In general, discriminative representations are rich in class-specific
features but too sparse for reconstruction, whereas autoencoder
representations are dense but contain few distinguishable class-specific
features, making them less suitable for classification. In this work, we
propose a discriminative modeling framework that employs manipulated supervised
latent representations to reconstruct and generate new samples belonging to a
given class. Unlike generative modeling approaches such as GANs and VAEs that
aim to model the data manifold distribution, Representation based Generations
(ReGene) directly represent the given data manifold in the classification
space. Such supervised representations, under certain constraints, allow for
reconstructions and controlled generations using an appropriate decoder without
enforcing any prior distribution. Theoretically, given a class, we show that
these representations, when manipulated via convex combinations, retain
the same class label. Furthermore, they also enable the generation of novel,
visually realistic images. Extensive experiments on datasets of varying
resolutions demonstrate that ReGene has higher classification accuracy than
existing conditional generative models while being competitive in terms of FID.
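The convexity claim above can be sketched with a toy linear classifier. This is a hypothetical stand-in for the paper's trained classifier head: the 16-dimensional latent space, the weight matrix `W`, and the `predict` function are illustrative assumptions, not ReGene's actual architecture. The point it demonstrates is that the argmax decision regions of a linear classifier are convex sets, so any convex combination of two same-class latent codes keeps that class label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a classifier head over a 16-dim classification latent
# space (hypothetical; ReGene uses a trained network, not random weights).
W = rng.normal(size=(16, 3))  # linear map to 3 class logits, no bias

def predict(z):
    """Class label = argmax of the linear logits."""
    return int(np.argmax(z @ W))

# Sample two latent codes that fall in the same class region.
z1 = rng.normal(size=16)
z2 = rng.normal(size=16)
while predict(z2) != predict(z1):
    z2 = rng.normal(size=16)

# Every convex combination lam*z1 + (1-lam)*z2 keeps the label, because
# each class region is an intersection of half-spaces, hence convex.
label = predict(z1)
for lam in np.linspace(0.0, 1.0, 11):
    z_mix = lam * z1 + (1.0 - lam) * z2
    assert predict(z_mix) == label
```

In the full framework, such a `z_mix` would be passed to a trained decoder to generate a new image of the same class; here only the label-preservation property is checked.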
Related papers
- Accurate Explanation Model for Image Classifiers using Class Association Embedding [5.378105759529487]
We propose a generative explanation model that combines the advantages of global and local knowledge.
Class association embedding (CAE) encodes each sample into a pair of separated class-associated and individual codes.
A building-block coherency feature extraction algorithm is proposed that efficiently separates class-associated features from individual ones.
arXiv Detail & Related papers (2024-06-12T07:41:00Z)
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z)
- Generative Multi-modal Models are Good Class-Incremental Learners [51.5648732517187]
We propose a novel generative multi-modal model (GMM) framework for class-incremental learning.
Our approach directly generates labels for images using an adapted generative model.
Under the Few-shot CIL setting, our approach improves accuracy by at least 14% over all current state-of-the-art methods, with significantly less forgetting.
arXiv Detail & Related papers (2024-03-27T09:21:07Z)
- Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models [68.73086826874733]
We introduce a novel Referring Diffusional segmentor (Ref-Diff) for referring image segmentation.
We demonstrate that without a proposal generator, a generative model alone can achieve comparable performance to existing SOTA weakly-supervised models.
This indicates that generative models are also beneficial for this task and can complement discriminative models for better referring segmentation.
arXiv Detail & Related papers (2023-08-31T14:55:30Z)
- Generative Prompt Model for Weakly Supervised Object Localization [108.79255454746189]
We propose a generative prompt model (GenPromp) to localize less discriminative object parts.
During training, GenPromp converts image category labels to learnable prompt embeddings which are fed to a generative model.
Experiments on CUB-200-2011 and ILSVRC show that GenPromp outperforms the best discriminative models on both datasets.
arXiv Detail & Related papers (2023-07-19T05:40:38Z)
- Diffusion Models Beat GANs on Image Classification [37.70821298392606]
Diffusion models have risen to prominence as a state-of-the-art method for image generation, denoising, inpainting, super-resolution, manipulation, etc.
We present our findings that these embeddings are useful beyond the noise prediction task, as they contain discriminative information and can also be leveraged for classification.
We find that with careful feature selection and pooling, diffusion models outperform comparable generative-discriminative methods for classification tasks.
arXiv Detail & Related papers (2023-07-17T17:59:40Z)
- Neural Representations Reveal Distinct Modes of Class Fitting in Residual Convolutional Networks [5.1271832547387115]
We leverage probabilistic models of neural representations to investigate how residual networks fit classes.
We find that classes in the investigated models are not fitted in a uniform way.
We show that the uncovered structure in neural representations correlates with the robustness of training examples and adversarial memorization.
arXiv Detail & Related papers (2022-12-01T18:55:58Z)
- Parametric Classification for Generalized Category Discovery: A Baseline Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.