Using a Conditional Generative Adversarial Network to Control the
Statistical Characteristics of Generated Images for IACT Data Analysis
- URL: http://arxiv.org/abs/2211.15807v1
- Date: Mon, 28 Nov 2022 22:30:33 GMT
- Title: Using a Conditional Generative Adversarial Network to Control the
Statistical Characteristics of Generated Images for IACT Data Analysis
- Authors: Julia Dubenskaya, Alexander Kryukov, Andrey Demichev, Stanislav
Polyakov, Elizaveta Gres, Anna Vlaskina
- Abstract summary: We divide images into several classes according to the value of some property of the image, and then specify the required class when generating new images.
In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (image size).
We used a cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment.
- Score: 55.41644538483948
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative adversarial networks are a promising tool for image generation in
the astronomy domain. Of particular interest are conditional generative
adversarial networks (cGANs), which allow you to divide images into several
classes according to the value of some property of the image, and then specify
the required class when generating new images. In the case of images from
Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the
total brightness of all image pixels (image size), which is in direct
correlation with the energy of primary particles. We used a cGAN technique to
generate images similar to those obtained in the TAIGA-IACT experiment. As a
training set, we used a set of two-dimensional images generated using the TAIGA
Monte Carlo simulation software. We artificially divided the training set into
10 classes, sorting images by size and defining the boundaries of the classes
so that the same number of images fall into each class. These classes were used
while training our network. The paper shows that for each class, the size
distribution of the generated images is close to normal with the mean value
located approximately in the middle of the corresponding class. We also show
that for the generated images, the total image size distribution obtained by
summing the distributions over all classes is close to the original
distribution of the training set. The results obtained will be useful for more
accurate generation of realistic synthetic images similar to the ones taken by
IACTs.
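The class construction described above (sorting images by size and choosing boundaries so that each of the 10 classes receives the same number of images) amounts to binning on the deciles of the size distribution. A minimal sketch of this step, using randomly generated arrays as a hypothetical stand-in for the TAIGA Monte Carlo images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the TAIGA Monte Carlo training set:
# 1000 two-dimensional images of 30x30 pixels (shapes are assumptions).
images = rng.exponential(scale=1.0, size=(1000, 30, 30))

# "Image size" in the IACT sense: total brightness of all pixels.
sizes = images.sum(axis=(1, 2))

# Class boundaries at the deciles of the size distribution, so that
# the same number of images falls into each of the 10 classes.
n_classes = 10
boundaries = np.quantile(sizes, np.linspace(0.0, 1.0, n_classes + 1))

# np.digitize maps each size to a class index 0..9; the clip handles
# the maximum value, which digitize would otherwise place one bin too high.
labels = np.clip(np.digitize(sizes, boundaries) - 1, 0, n_classes - 1)

counts = np.bincount(labels, minlength=n_classes)
print(counts)  # each class holds roughly the same number of images
```

These integer labels are what a cGAN would consume as the conditioning input during training, so that the required class can be specified when generating new images.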
Related papers
- Is Deep Learning Network Necessary for Image Generation? [9.131712404284876]
We investigate the possibility of image generation without using a deep learning network.
We validate the assumption that images follow a high-dimensional distribution.
Experiments show that our images have a lower FID value compared to those generated by variational auto-encoders.
arXiv Detail & Related papers (2023-08-25T18:14:19Z) - A Robust Approach Towards Distinguishing Natural and Computer Generated
Images using Multi-Colorspace fused and Enriched Vision Transformer [0.0]
This work proposes a robust approach towards distinguishing natural and computer generated images.
The proposed approach achieves high performance gain when compared to a set of baselines.
arXiv Detail & Related papers (2023-08-14T17:11:17Z) - Traditional Classification Neural Networks are Good Generators: They are
Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the model aware of semantic gradients, enabling synthesis of plausible images.
We show that our method is also applicable to text-to-image generation by leveraging image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework, where a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - Semantic Segmentation with Generative Models: Semi-Supervised Learning
and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z) - Synthesize-It-Classifier: Learning a Generative Classifier through
Recurrent Self-analysis [9.029985847202667]
We show the generative capability of an image classifier network by synthesizing high-resolution, photo-realistic, and diverse images at scale.
The overall methodology, called Synthesize-It-Classifier (STIC), does not require an explicit generator network to estimate the density of the data distribution.
We demonstrate an Attentive-STIC network that shows an iterative drawing of synthesized images on the ImageNet dataset.
arXiv Detail & Related papers (2021-03-26T02:00:29Z) - Multi-class Generative Adversarial Nets for Semi-supervised Image
Classification [0.17404865362620794]
We show how similar images cause the GAN to generalize, leading to the poor classification of images.
We propose a modification to the traditional training of GANs that allows for improved multi-class classification in similar classes of images in a semi-supervised learning framework.
arXiv Detail & Related papers (2021-02-13T15:26:17Z) - Locally Masked Convolution for Autoregressive Models [107.4635841204146]
LMConv is a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image.
We learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation.
arXiv Detail & Related papers (2020-06-22T17:59:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.