Random Network Distillation as a Diversity Metric for Both Image and Text Generation
- URL: http://arxiv.org/abs/2010.06715v1
- Date: Tue, 13 Oct 2020 22:03:52 GMT
- Title: Random Network Distillation as a Diversity Metric for Both Image and Text Generation
- Authors: Liam Fowl, Micah Goldblum, Arjun Gupta, Amr Sharaf, Tom Goldstein
- Abstract summary: We develop a new diversity metric that can be applied to data, both synthetic and natural, of any type.
We validate and deploy this metric on both images and text.
- Score: 62.13444904851029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models are increasingly able to produce remarkably high quality
images and text. The community has developed numerous evaluation metrics for
comparing generative models. However, these metrics do not effectively quantify
data diversity. We develop a new diversity metric that can readily be applied
to data, both synthetic and natural, of any type. Our method employs random
network distillation, a technique introduced in reinforcement learning. We
validate and deploy this metric on both images and text. We further explore
diversity in few-shot image generation, a setting which was previously
difficult to evaluate.
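To make the method concrete, below is a minimal sketch of turning random network distillation into a diversity score. It is illustrative rather than the paper's exact procedure: the network sizes, the half/half split, the training budget, and the use of flat feature vectors (e.g., pooled image embeddings or averaged token embeddings, which is what would let one metric cover both images and text) are all assumptions.

```python
# Minimal sketch of an RND-style diversity score (illustrative, not the
# paper's exact setup): a frozen random "target" network and a trained
# "predictor"; higher held-out prediction error suggests more diverse data.
import torch
import torch.nn as nn

def rnd_diversity(samples: torch.Tensor, epochs: int = 100) -> float:
    """samples: (n, d) flat feature vectors for the set being scored."""
    dim = samples.shape[1]
    target = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 64))
    predictor = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 64))
    for p in target.parameters():   # the target network stays random and frozen
        p.requires_grad_(False)

    half = samples.shape[0] // 2
    train, held_out = samples[:half], samples[half:]

    opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
    for _ in range(epochs):         # distill the frozen target on one half
        opt.zero_grad()
        loss = ((predictor(train) - target(train)) ** 2).mean()
        loss.backward()
        opt.step()

    with torch.no_grad():           # held-out error acts as the diversity proxy
        return ((predictor(held_out) - target(held_out)) ** 2).mean().item()
```

The intuition: a predictor distilled on redundant samples transfers easily to the held-out half (low error), while genuinely diverse samples leave the held-out error high.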
Related papers
- GRADE: Quantifying Sample Diversity in Text-to-Image Models [66.12068246962762]
We propose GRADE: Granular Attribute Diversity Evaluation, an automatic method for quantifying sample diversity.
We measure the overall diversity of 12 T2I models using 400 concept-attribute pairs, revealing that all models display limited variation.
Our work proposes a modern, semantically-driven approach to measure sample diversity and highlights the stunning homogeneity in outputs by T2I models.
arXiv Detail & Related papers (2024-10-29T23:10:28Z)
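As a rough illustration of the entropy computation at the heart of a granular attribute metric like GRADE, the hypothetical `attribute_diversity` function below scores a single concept-attribute pair from pre-extracted attribute values; the extraction step itself (e.g., a visual question answering model reading each generation) is abstracted away.

```python
# Hypothetical sketch in the spirit of GRADE: normalized entropy of the
# attribute values observed across generations for one concept-attribute pair.
from collections import Counter
from math import log

def attribute_diversity(values: list[str]) -> float:
    """Normalized entropy of attribute values for one concept-attribute pair."""
    counts = Counter(values)
    n = len(values)
    entropy = -sum((c / n) * log(c / n) for c in counts.values())
    max_entropy = log(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy  # 0 = fully homogeneous, 1 = uniform over values

# e.g., shapes reported for generations of "a cookie":
print(attribute_diversity(["round"] * 18 + ["square"] * 2))  # ~0.47, far from uniform
```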
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy for bolstering image classification performance is to augment the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Measuring Diversity in Co-creative Image Generation [1.4963011898406866]
We propose an alternative based on the entropy of neural network encodings for comparing diversity between sets of images.
We also compare two pre-trained networks and show how the choice of encoder relates to the notion of diversity one wants to evaluate.
arXiv Detail & Related papers (2024-03-06T01:55:14Z)
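A minimal sketch of one way to realize entropy over encodings, assuming a Gaussian fit to the embeddings: the log-determinant of their covariance grows with the spread of the set. The estimator and the `encoding_entropy` name are assumptions, not necessarily the paper's exact formulation.

```python
# Sketch of encoding-entropy diversity under a Gaussian assumption on
# pre-trained-network embeddings; a wider spread gives a larger entropy.
import numpy as np

def encoding_entropy(embeddings: np.ndarray, eps: float = 1e-6) -> float:
    """embeddings: (n_images, d) features from a chosen pre-trained encoder."""
    cov = np.cov(embeddings, rowvar=False)
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov + eps * np.eye(d))  # eps for stability
    # differential entropy of a multivariate Gaussian with this covariance
    return 0.5 * (logdet + d * (1.0 + np.log(2.0 * np.pi)))
```

As the entry notes, the choice of pre-trained encoder determines which notion of diversity the entropy reflects.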
- Instruct-Imagen: Image Generation with Multi-modal Instruction [90.04481955523514]
instruct-imagen is a model that tackles heterogeneous image generation tasks and generalizes across unseen tasks.
We introduce *multi-modal instruction* for image generation, a task representation articulating a range of generation intents with precision.
Human evaluation on various image generation datasets reveals that instruct-imagen matches or surpasses prior task-specific models in-domain.
arXiv Detail & Related papers (2024-01-03T19:31:58Z)
- Diverse Diffusion: Enhancing Image Diversity in Text-to-Image Generation [0.0]
We introduce Diverse Diffusion, a method for boosting image diversity beyond gender and ethnicity.
Our approach contributes to the creation of more inclusive and representative AI-generated art.
arXiv Detail & Related papers (2023-10-19T08:48:23Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an effective deep learning model requires large amounts of data with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Effective Data Augmentation With Diffusion Models [65.09758931804478]
We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models.
Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples.
We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
arXiv Detail & Related papers (2023-02-07T20:42:28Z)
- Rarity Score: A New Metric to Evaluate the Uncommonness of Synthesized Images [32.94581354719927]
We propose a new evaluation metric, called 'rarity score', to measure the individual rarity of each image.
Code will be publicly available online for the research community.
arXiv Detail & Related papers (2022-06-17T05:16:16Z)
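For intuition, here is a deliberately simplified stand-in for a per-image rarity signal: the distance from each generated sample's embedding to its nearest real-sample embedding. The paper's actual score is built on k-nearest-neighbor spheres over real features, so treat this purely as a sketch of the idea.

```python
# Simplified sketch of per-image rarity: distance from each generated
# embedding to its nearest real embedding (larger = more uncommon).
import numpy as np

def rarity(fake: np.ndarray, real: np.ndarray) -> np.ndarray:
    """fake: (m, d) generated embeddings; real: (n, d) real embeddings."""
    # pairwise Euclidean distances: one row per generated sample
    dists = np.linalg.norm(fake[:, None, :] - real[None, :, :], axis=-1)
    return dists.min(axis=1)  # larger = more uncommon relative to the real data
```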
- Implicit Data Augmentation Using Feature Interpolation for Diversified Low-Shot Image Generation [11.4559888429977]
Training of generative models can easily diverge in low-data settings.
We propose a novel implicit data augmentation approach that facilitates stable training and synthesizes diverse samples.
arXiv Detail & Related papers (2021-12-04T23:55:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.