Using conditional variational autoencoders to generate images from
atmospheric Cherenkov telescopes
- URL: http://arxiv.org/abs/2211.12553v1
- Date: Tue, 22 Nov 2022 20:05:35 GMT
- Authors: Stanislav Polyakov (1), Alexander Kryukov (1), Andrey Demichev (1),
Julia Dubenskaya (1), Elizaveta Gres (2), Anna Vlaskina (3) ((1) Skobeltsyn
Institute of Nuclear Physics, Lomonosov Moscow State University, (2) Applied
Physics Institute of Irkutsk State University, (3) Lomonosov Moscow State
University)
- Abstract summary: High-energy particles hitting the upper atmosphere of the Earth produce extensive air showers that can be detected from the ground level.
Images recorded by Cherenkov telescopes can be analyzed to separate gamma-ray events from the background hadron events.
We use a conditional variational autoencoder to generate images of gamma events from a Cherenkov telescope of the TAIGA experiment.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: High-energy particles hitting the upper atmosphere of the Earth produce
extensive air showers that can be detected from the ground level using imaging
atmospheric Cherenkov telescopes. The images recorded by Cherenkov telescopes
can be analyzed to separate gamma-ray events from the background hadron events.
Many of the methods of analysis require simulation of massive amounts of events
and the corresponding images by the Monte Carlo method. However, Monte Carlo
simulation is computationally expensive. The data simulated by the Monte Carlo
method can be augmented by images generated using faster machine learning
methods such as generative adversarial networks or conditional variational
autoencoders. We use a conditional variational autoencoder to generate images
of gamma events from a Cherenkov telescope of the TAIGA experiment. The
variational autoencoder is trained on a set of Monte Carlo events with the
image size, or the sum of the amplitudes of the pixels, used as the conditional
parameter. We used the trained variational autoencoder to generate new images
with the same distribution of the conditional parameter as the size
distribution of the Monte Carlo-simulated images of gamma events. The generated
images are similar to the Monte Carlo images: a classifier neural network
trained on gamma and proton events assigns them an average gamma score of 0.984,
with less than 3% of the events assigned a gamma score below 0.999. At the
same time, the sizes of the generated images do not match the conditional
parameter used to generate them, with an average error of 0.33.
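The conditioning scheme described in the abstract (a variational autoencoder with the image size, i.e. the sum of pixel amplitudes, as the conditional parameter) can be sketched minimally. The layer sizes, the concatenation-based conditioning, and all names below are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's architecture).
N_PIXELS = 64   # flattened telescope image
LATENT = 8      # latent dimension
HIDDEN = 32

def linear(n_in, n_out):
    """Random weights and bias for a toy linear layer."""
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Encoder and decoder parameters; the condition c (image size) is
# concatenated to the input of both networks.
We, be = linear(N_PIXELS + 1, HIDDEN)
Wmu, bmu = linear(HIDDEN, LATENT)
Wlv, blv = linear(HIDDEN, LATENT)
Wd1, bd1 = linear(LATENT + 1, HIDDEN)
Wd2, bd2 = linear(HIDDEN, N_PIXELS)

def encode(x, c):
    h = np.tanh(np.concatenate([x, [c]]) @ We + be)
    return h @ Wmu + bmu, h @ Wlv + blv          # mean, log-variance

def reparameterize(mu, logvar):
    # z = mu + sigma * eps, the reparameterization trick.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z, c):
    h = np.tanh(np.concatenate([z, [c]]) @ Wd1 + bd1)
    return np.maximum(h @ Wd2 + bd2, 0.0)        # non-negative amplitudes

# A toy "telescope image" and its size (sum of pixel amplitudes).
x = np.maximum(rng.normal(0, 1, N_PIXELS), 0.0)
size = x.sum()

mu, logvar = encode(x, size)
z = reparameterize(mu, logvar)
x_hat = decode(z, size)                          # reconstruction

# At generation time only the decoder is used: sample z ~ N(0, I) and
# decode with the desired image size as the conditional parameter.
x_new = decode(rng.standard_normal(LATENT), size)
```

During training, the encoder and decoder would be optimized jointly on the Monte Carlo events with a reconstruction loss plus a KL term; at generation time, conditional-parameter values are drawn from the Monte Carlo size distribution, as described in the abstract.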
Related papers
- Monte Carlo Path Tracing and Statistical Event Detection for Event Camera Simulation [9.80621423903019]
This paper presents a novel event camera simulation system based on physically based Monte Carlo path tracing with adaptive path sampling.
We are the first to simulate the behavior of event cameras in a physically accurate manner using an adaptive sampling technique in Monte Carlo path tracing.
arXiv Detail & Related papers (2024-08-15T07:46:51Z) - Selection of gamma events from IACT images with deep learning methods [0.0]
Imaging Atmospheric Cherenkov Telescopes (IACTs) of the gamma-ray observatory TAIGA detect Extensive Air Showers (EASs).
The ability to separate gamma-ray images from the hadronic cosmic-ray background is one of the main features of this type of detector.
In actual IACT observations, simultaneous observation of the background and the gamma-ray source is needed.
This observation mode (called wobbling) modifies the images of events, which affects the quality of selection by neural networks.
arXiv Detail & Related papers (2024-01-30T13:07:24Z) - Using a Conditional Generative Adversarial Network to Control the
Statistical Characteristics of Generated Images for IACT Data Analysis [55.41644538483948]
We divide images into several classes according to the value of some property of the image, and then specify the required class when generating new images.
In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (image size).
We used a cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment.
arXiv Detail & Related papers (2022-11-28T22:30:33Z) - Anomaly Detection in Aerial Videos with Transformers [49.011385492802674]
We create a new dataset, named DroneAnomaly, for anomaly detection in aerial videos.
There are 87,488 color video frames (51,635 for training and 35,853 for testing) with a size of $640 \times 640$ at 30 frames per second.
We present a new baseline model, ANomaly Detection with Transformers (ANDT), which treats consecutive video frames as a sequence of tubelets.
arXiv Detail & Related papers (2022-09-25T21:24:18Z) - Processing Images from Multiple IACTs in the TAIGA Experiment with
Convolutional Neural Networks [62.997667081978825]
We use convolutional neural networks (CNNs) to analyze Monte Carlo-simulated images from the TAIGA experiment.
The analysis includes selection of the images corresponding to the showers caused by gamma rays and estimating the energy of the gamma rays.
arXiv Detail & Related papers (2021-12-31T10:49:11Z) - Convolutional Deep Denoising Autoencoders for Radio Astronomical Images [0.0]
We apply a machine learning technique known as a Convolutional Denoising Autoencoder to denoise synthetic images of state-of-the-art radio telescopes.
Our autoencoder can effectively denoise complex images identifying and extracting faint objects at the limits of the instrumental sensitivity.
arXiv Detail & Related papers (2021-10-16T17:08:30Z) - Deep learning with photosensor timing information as a background
rejection method for the Cherenkov Telescope Array [0.0]
New deep learning techniques present promising analysis methods for Imaging Atmospheric Cherenkov Telescopes (IACTs).
CNNs could provide a direct event classification method that uses all of the information contained within the Cherenkov shower image.
arXiv Detail & Related papers (2021-03-10T13:54:43Z) - Swapping Autoencoder for Deep Image Manipulation [94.33114146172606]
We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation.
The key idea is to encode an image with two independent components and enforce that any swapped combination maps to a realistic image.
Experiments on multiple datasets show that our model produces better results and is substantially more efficient compared to recent generative models.
arXiv Detail & Related papers (2020-07-01T17:59:57Z) - Locally Masked Convolution for Autoregressive Models [107.4635841204146]
LMConv is a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image.
We learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation.
arXiv Detail & Related papers (2020-06-22T17:59:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.