Evaluating a Synthetic Image Dataset Generated with Stable Diffusion
- URL: http://arxiv.org/abs/2211.01777v2
- Date: Fri, 4 Nov 2022 09:28:00 GMT
- Title: Evaluating a Synthetic Image Dataset Generated with Stable Diffusion
- Authors: Andreas Stöckl
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We generate synthetic images with the "Stable Diffusion" image generation model using the WordNet taxonomy and the definitions of the concepts it contains. This synthetic image database can be used as training data for data augmentation in machine learning applications, and it is used to investigate the capabilities of the Stable Diffusion model.
Analyses show that Stable Diffusion can produce correct images for a large number of concepts, but it also produces a large variety of different representations. The results show differences depending on the test concepts considered, as well as problems with very specific concepts. These evaluations were performed using a vision transformer model for image classification.
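The generation setup described in the abstract (WordNet synsets plus their definitions as text prompts for Stable Diffusion) can be sketched roughly as follows. This is a minimal illustration, not the paper's code: the helper name `build_prompt` and the two hand-copied sample concepts are assumptions, and in practice the concepts would be drawn from NLTK's WordNet corpus.

```python
# Sketch: turn WordNet concepts and their glosses into text-to-image prompts.
# In practice the synsets would come from NLTK's WordNet, e.g.:
#   from nltk.corpus import wordnet as wn
#   concepts = [(s.lemma_names()[0], s.definition()) for s in wn.all_synsets("n")]

def build_prompt(lemma: str, definition: str) -> str:
    """Combine a concept name with its definition into a single prompt string."""
    return f"{lemma.replace('_', ' ')}, {definition}"

# Small hand-copied sample standing in for the WordNet taxonomy.
concepts = [
    ("dalmatian", "a large breed having a smooth white coat with black or brown spots"),
    ("acoustic_guitar", "sound is not amplified by electrical means"),
]

prompts = [build_prompt(lemma, gloss) for lemma, gloss in concepts]
for p in prompts:
    print(p)

# The actual image generation step would then use a Stable Diffusion
# pipeline (heavy model download, so not executed in this sketch):
#   from diffusers import StableDiffusionPipeline
#   pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
#   image = pipe(prompts[0]).images[0]
```

The evaluation side of the paper would then classify the generated images with a vision transformer and check whether the predicted label matches the source concept.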
Related papers
- Unleashing the Potential of Synthetic Images: A Study on Histopathology Image Classification [0.12499537119440242]
Histopathology image classification is crucial for the accurate identification and diagnosis of various diseases.
We show that synthetic images can effectively augment existing datasets, ultimately improving the performance of the downstream histopathology image classification task.
arXiv Detail & Related papers (2024-09-24T12:02:55Z)
- Comparative Analysis of Generative Models: Enhancing Image Synthesis with VAEs, GANs, and Stable Diffusion [0.0]
This paper examines three major generative modelling frameworks: Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs) and Stable Diffusion models.
arXiv Detail & Related papers (2024-08-16T13:50:50Z)
- Visual Car Brand Classification by Implementing a Synthetic Image Dataset Creation Pipeline [3.524869467682149]
We propose an automatic pipeline for generating synthetic image datasets using Stable Diffusion.
We leverage YOLOv8 for automatic bounding box detection and quality assessment of synthesized images.
arXiv Detail & Related papers (2024-06-03T07:44:08Z) - ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object [78.58860252442045]
We introduce generative model as a data source for hard images that benchmark deep models' robustness.
We are able to generate images with more diversified backgrounds, textures, and materials than any prior work, where we term this benchmark as ImageNet-D.
Our work suggests that diffusion models can be an effective source to test vision models.
arXiv Detail & Related papers (2024-03-27T17:23:39Z) - A Phase Transition in Diffusion Models Reveals the Hierarchical Nature
of Data [55.748186000425996]
Recent advancements show that diffusion models can generate high-quality images.
We study this phenomenon in a hierarchical generative model of data.
Our analysis characterises the relationship between time and scale in diffusion models.
arXiv Detail & Related papers (2024-02-26T19:52:33Z) - Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
arXiv Detail & Related papers (2024-02-16T16:47:21Z) - Scaling Laws of Synthetic Images for Model Training ... for Now [54.43596959598466]
We study the scaling laws of synthetic images generated by state of the art text-to-image models.
We observe that synthetic images demonstrate a scaling trend similar to, but slightly less effective than, real images in CLIP training.
arXiv Detail & Related papers (2023-12-07T18:59:59Z) - Stable Diffusion for Data Augmentation in COCO and Weed Datasets [5.81198182644659]
This study utilized seven common categories and three widespread weed species to evaluate the efficiency of a stable diffusion model.
Three techniques (i.e., Image-to-image translation, Dreambooth, and ControlNet) based on stable diffusion were leveraged for image generation with different focuses.
Then, classification and detection tasks were conducted based on these synthetic images, whose performance was compared to the models trained on original images.
arXiv Detail & Related papers (2023-12-07T02:23:32Z) - Diversify, Don't Fine-Tune: Scaling Up Visual Recognition Training with
Synthetic Images [37.29348016920314]
We present a new framework leveraging off-the-shelf generative models to generate synthetic training images.
We address class name ambiguity, lack of diversity in naive prompts, and domain shifts.
Our framework consistently enhances recognition model performance with more synthetic data.
arXiv Detail & Related papers (2023-12-04T18:35:27Z) - Your Diffusion Model is Secretly a Zero-Shot Classifier [90.40799216880342]
We show that density estimates from large-scale text-to-image diffusion models can be leveraged to perform zero-shot classification.
Our generative approach to classification attains strong results on a variety of benchmarks.
Our results are a step toward using generative over discriminative models for downstream tasks.
arXiv Detail & Related papers (2023-03-28T17:59:56Z) - Semantic Image Synthesis via Diffusion Models [174.24523061460704]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto GAN-based approaches.
We propose a novel framework based on DDPM for semantic image synthesis.
arXiv Detail & Related papers (2022-06-30T18:31:51Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.