Diversity in deep generative models and generative AI
- URL: http://arxiv.org/abs/2202.09573v3
- Date: Thu, 5 Oct 2023 13:32:57 GMT
- Title: Diversity in deep generative models and generative AI
- Authors: Gabriel Turinici
- Abstract summary: We introduce a kernel-based measure quantization method that can produce new objects from a given target measure by approximating it as a whole.
This ensures a better diversity of the produced objects.
The method is tested on classic machine learning benchmarks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Decoder-based generative machine learning algorithms such as Generative
Adversarial Networks (GANs), Variational Auto-Encoders (VAEs), and Transformers show
impressive results when constructing objects similar to those in a training
ensemble. However, the generation of new objects relies mainly on
understanding the hidden structure of the training dataset, followed by
sampling from a multi-dimensional normal variable. In particular, each sample is
independent of the others, so the same type of object can be proposed repeatedly. To
remedy this drawback, we introduce a kernel-based measure quantization method that
can produce new objects from a given target measure by approximating it as a
whole, and even staying away from elements already drawn from that distribution.
This ensures better diversity of the produced objects. The method is tested
on classic machine learning benchmarks.
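The idea of approximating a target measure "as a whole" while staying away from points already selected can be illustrated with greedy kernel herding, a related (but not identical) measure-quantization technique. The Gaussian kernel, candidate pool, and parameters below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel between rows of a and rows of b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def herd(target, candidates, n_points, sigma=1.0):
    """Greedy kernel herding: pick points that match the target measure
    as a whole while being repelled from points already selected."""
    # Average similarity of each candidate to the target sample.
    k_target = gaussian_kernel(candidates, target, sigma).mean(axis=1)
    chosen = []
    k_chosen = np.zeros(len(candidates))  # accumulated similarity to picks
    for _ in range(n_points):
        # High similarity to the target is rewarded; similarity to
        # already-chosen points is penalized, discouraging repeats.
        score = k_target - k_chosen / (len(chosen) + 1)
        i = int(np.argmax(score))
        chosen.append(candidates[i])
        k_chosen += gaussian_kernel(candidates, candidates[i:i + 1], sigma)[:, 0]
    return np.array(chosen)

rng = np.random.default_rng(0)
target = rng.normal(size=(500, 2))       # empirical target measure
candidates = rng.normal(size=(2000, 2))  # pool of generator outputs
quantized = herd(target, candidates, 20)
```

The repulsion term is what produces diversity: a point that has just been picked carries the maximal self-kernel penalty, so the next pick moves elsewhere in the support of the target measure.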
Related papers
- LOGen: Toward Lidar Object Generation by Point Diffusion [10.002129602976085]
A common strategy to improve lidar segmentation results on rare semantic classes consists of pasting objects from one lidar scene into another.
In this work, we explore how to enhance instance diversity using a lidar object generator.
arXiv Detail & Related papers (2024-12-10T10:30:27Z)
- Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAE for the task.
In our experiments, the proposed VAE model particularly performs well for generating a sample from out-of-domain distribution.
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- Algorithms that get old: the case of generative algorithms [0.0]
Generative AI networks produce new objects each time they are asked to do so.
This behavior is unlike that of human artists, who change their style as time goes by and seldom return to their starting point.
We propose a numerical paradigm, to be used in conjunction with a generative algorithm, that satisfies the following two requirements: the objects created do not repeat, and they evolve to fill the entire target probability measure.
arXiv Detail & Related papers (2022-02-07T08:55:37Z)
- Latent-Insensitive Autoencoders for Anomaly Detection and Class-Incremental Learning [0.0]
We introduce Latent-Insensitive Autoencoder (LIS-AE) where unlabeled data from a similar domain is utilized as negative examples to shape the latent layer (bottleneck) of a regular autoencoder.
We treat class-incremental learning as multiple anomaly detection tasks by adding a different latent layer for each class and use other available classes in task as negative examples to shape each latent layer.
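The "one anomaly detector per class" idea can be sketched with a much simpler linear autoencoder (equivalent to PCA reconstruction) standing in for LIS-AE; the synthetic data, dimensions, and scoring rule below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

class LinearAE:
    """Linear autoencoder (PCA-like) used as a per-class anomaly scorer."""
    def __init__(self, n_components=1):
        self.n_components = n_components

    def fit(self, X):
        self.mean_ = X.mean(axis=0)
        Xc = X - self.mean_
        # Top principal directions serve as the encoder/decoder weights.
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        self.W_ = vt[: self.n_components]
        return self

    def score(self, X):
        # Reconstruction error: low for in-class data, high otherwise.
        Xc = X - self.mean_
        recon = Xc @ self.W_.T @ self.W_
        return ((Xc - recon) ** 2).sum(axis=1)

# Class-incremental setup: each new task adds one detector.
rng = np.random.default_rng(1)
class_means = [np.array([0.0, 0.0, 5.0]), np.array([5.0, 5.0, 0.0])]
detectors = []
for m in class_means:
    X = m + rng.normal(scale=0.5, size=(200, 3))
    detectors.append(LinearAE().fit(X))

# Classify a query by the detector reporting the lowest anomaly score.
query = class_means[1] + rng.normal(scale=0.5, size=(1, 3))
scores = [d.score(query)[0] for d in detectors]
pred = int(np.argmin(scores))
```

A query drawn near the second class reconstructs well under that class's detector and poorly under the other, so the arg-min over anomaly scores acts as the class prediction.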
arXiv Detail & Related papers (2021-10-25T16:53:49Z)
- Hierarchical Few-Shot Generative Models [18.216729811514718]
We study a latent-variable approach that extends the Neural Statistician to a fully hierarchical model with attention-based point-to-set-level aggregation.
Our results show that the hierarchical formulation better captures the intrinsic variability within the sets in the small data regime.
arXiv Detail & Related papers (2021-10-23T19:19:39Z)
- Multi-Facet Clustering Variational Autoencoders [9.150555507030083]
High-dimensional data, such as images, typically feature multiple interesting characteristics one could cluster over.
We introduce Multi-Facet Clustering Variational Autoencoders (MFCVAE).
MFCVAE learns multiple clusterings simultaneously, and is trained fully unsupervised and end-to-end.
arXiv Detail & Related papers (2021-06-09T17:36:38Z)
- MOGAN: Morphologic-structure-aware Generative Learning from a Single Image [59.59698650663925]
Recently proposed generative models can complete training based on only a single image.
We introduce a MOrphologic-structure-aware Generative Adversarial Network named MOGAN that produces random samples with diverse appearances.
Our approach focuses on internal features including the maintenance of rational structures and variation on appearance.
arXiv Detail & Related papers (2021-03-04T12:45:23Z)
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.