Selection of gamma events from IACT images with deep learning methods
- URL: http://arxiv.org/abs/2401.16981v1
- Date: Tue, 30 Jan 2024 13:07:24 GMT
- Title: Selection of gamma events from IACT images with deep learning methods
- Authors: E. O. Gres, A. P. Kryukov, A. P. Demichev, J. J. Dubenskaya, S. P.
Polyakov, A. A. Vlaskina, D. P. Zhurov
- Abstract summary: Imaging Atmospheric Cherenkov Telescopes (IACTs) of the gamma-ray observatory TAIGA detect Extensive Air Showers (EASs).
The ability to separate gamma-ray images from the hadronic cosmic-ray background is one of the main features of this type of detector.
In actual IACT observations, the background and the gamma-ray source must be observed simultaneously.
This observation mode (called wobbling) modifies the images of events, which affects the quality of selection by neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Imaging Atmospheric Cherenkov Telescopes (IACTs) of the gamma-ray
observatory TAIGA detect the Extensive Air Showers (EASs) originating from
cosmic-ray or gamma-ray interactions with the atmosphere. Thereby, the
telescopes obtain images of the EASs. The ability to separate gamma-ray images
from the hadronic cosmic-ray background is one of the main features of this
type of detector. However, in actual IACT observations, the background and the
gamma-ray source must be observed simultaneously. This observation mode (called
wobbling) modifies the images of events, which affects the quality of selection
by neural networks.
Thus, in this work we present the results of applying neural networks (NNs)
to the image classification task on Monte Carlo (MC) images of TAIGA-IACTs.
The wobbling mode is considered together with the image adaptation required
for adequate analysis by NNs. We also explore several neural network
architectures that classify events either directly from images or through
Hillas parameters extracted from the images. In addition, using NNs, MC
simulation data are employed to evaluate the quality of the selection of rare
gamma events, taking into account all necessary image modifications.
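The Hillas parameters mentioned above are moment-based descriptors of the Cherenkov image ellipse. A minimal numpy sketch of the standard moment analysis (following Hillas 1985) is given below; the function name and returned dictionary are illustrative choices, not code from the paper:

```python
import numpy as np

def hillas_parameters(amplitudes, x, y):
    """Compute basic Hillas parameters (size, centre of gravity, length,
    width) of a Cherenkov image from per-pixel amplitudes and pixel
    coordinates. Generic moment analysis, not the authors' implementation."""
    size = amplitudes.sum()
    # first moments: image centre of gravity
    mx = (amplitudes * x).sum() / size
    my = (amplitudes * y).sum() / size
    # second central moments
    sxx = (amplitudes * (x - mx) ** 2).sum() / size
    syy = (amplitudes * (y - my) ** 2).sum() / size
    sxy = (amplitudes * (x - mx) * (y - my)).sum() / size
    # eigenvalues of the covariance matrix give the ellipse axes
    cov = np.array([[sxx, sxy], [sxy, syy]])
    eigvals = np.linalg.eigvalsh(cov)  # ascending order
    width, length = np.sqrt(np.clip(eigvals, 0.0, None))
    return {"size": size, "cog": (mx, my), "length": length, "width": width}
```

For gamma/hadron separation, such parameters (length, width, and related quantities) are then fed to a classifier instead of, or in addition to, the raw image.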
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z) - Image Restoration with Point Spread Function Regularization and Active
Learning [5.575847437953924]
Large-scale astronomical surveys can capture numerous images of celestial objects, including galaxies and nebulae.
However, varying noise levels and point spread functions can hamper the accuracy and efficiency of information extraction from these images.
We propose a novel image restoration algorithm that connects a deep learning-based restoration algorithm with a high-fidelity telescope simulator.
arXiv Detail & Related papers (2023-10-31T23:16:26Z) - Visualization for Multivariate Gaussian Anomaly Detection in Images [0.0]
This paper introduces a simplified variation of the PaDiM (Patch Distribution Modeling) method for anomaly detection in images.
We introduce an intermediate step in this framework by applying a whitening transformation to the feature vectors.
The results show the importance of visual model validation, providing insights into issues that were otherwise invisible.
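The whitening step mentioned above can be illustrated with a short numpy sketch; this is a generic ZCA whitening of feature vectors, assuming a well-conditioned empirical covariance, and is not the paper's actual code:

```python
import numpy as np

def whiten(features, eps=1e-6):
    """ZCA-whiten a set of feature vectors (rows) so that their empirical
    covariance becomes approximately the identity matrix. Illustrative
    sketch; `eps` regularizes small eigenvalues."""
    mu = features.mean(axis=0)
    xc = features - mu
    cov = xc.T @ xc / (len(features) - 1)
    # eigendecomposition of the symmetric covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    # W = V diag(1/sqrt(lambda)) V^T maps the data to unit covariance
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return xc @ w
```

After whitening, the Mahalanobis distance used in PaDiM-style methods reduces to a plain Euclidean distance in the transformed feature space, which simplifies both computation and visualization.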
arXiv Detail & Related papers (2023-07-12T10:12:57Z) - Using conditional variational autoencoders to generate images from
atmospheric Cherenkov telescopes [48.7576911714538]
High-energy particles hitting the upper atmosphere of the Earth produce extensive air showers that can be detected from the ground level.
Images recorded by Cherenkov telescopes can be analyzed to separate gamma-ray events from the background hadron events.
We use a conditional variational autoencoder to generate images of gamma events from a Cherenkov telescope of the TAIGA experiment.
arXiv Detail & Related papers (2022-11-22T20:05:35Z) - Supervised classification methods applied to airborne hyperspectral
images: Comparative study using mutual information [0.0]
This paper investigates the performance of four supervised learning algorithms: Support Vector Machines (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), and Linear Discriminant Analysis (LDA).
The experiments have been performed on three real hyperspectral datasets acquired by NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the Reflective Optics System Imaging Spectrometer (ROSIS) sensors.
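As an illustration, the simplest of the four compared methods can be sketched in a few lines of numpy; the function below is a minimal k-nearest-neighbours classifier and is hypothetical, not the authors' implementation:

```python
import numpy as np

def knn_predict(train_x, train_y, query_x, k=3):
    """Classify each query point by majority vote among the labels of its
    k nearest training points (Euclidean distance). Illustrative sketch."""
    # pairwise distances: (n_query, n_train)
    d = np.linalg.norm(query_x[:, None, :] - train_x[None, :, :], axis=-1)
    # indices of the k closest training samples for each query point
    nearest = np.argsort(d, axis=1)[:, :k]
    # majority vote among the k neighbours' integer labels
    votes = train_y[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])
```

In practice, comparisons like the one in the paper also weigh training cost and sensitivity to the high dimensionality of hyperspectral data, not just raw accuracy.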
arXiv Detail & Related papers (2022-10-27T13:39:08Z) - Processing Images from Multiple IACTs in the TAIGA Experiment with
Convolutional Neural Networks [62.997667081978825]
We use convolutional neural networks (CNNs) to analyze Monte Carlo-simulated images from the TAIGA experiment.
The analysis includes selection of the images corresponding to the showers caused by gamma rays and estimating the energy of the gamma rays.
arXiv Detail & Related papers (2021-12-31T10:49:11Z) - Analysis of the HiSCORE Simulated Events in TAIGA Experiment Using
Convolutional Neural Networks [77.34726150561087]
We propose the use of convolutional neural networks for the task of determining air shower characteristics.
We use CNN to analyze HiSCORE events, treating them like images.
In addition, we present some preliminary results on the determination of the parameters of air showers.
arXiv Detail & Related papers (2021-12-19T15:18:56Z) - The Preliminary Results on Analysis of TAIGA-IACT Images Using
Convolutional Neural Networks [68.8204255655161]
The aim of the work is to study the possibility of applying machine learning to solve the tasks set for TAIGA-IACT.
The method of Convolutional Neural Networks (CNN) was applied to process and analyze Monte-Carlo events simulated with CORSIKA.
arXiv Detail & Related papers (2021-12-19T15:17:20Z) - Deep learning with photosensor timing information as a background
rejection method for the Cherenkov Telescope Array [0.0]
New deep learning techniques present promising new analysis methods for Imaging Atmospheric Cherenkov Telescopes (IACTs).
CNNs could provide a direct event classification method that uses the entire information contained within the Cherenkov shower image.
arXiv Detail & Related papers (2021-03-10T13:54:43Z) - Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise present in Radar measurements is one of the main reasons that prevents the existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.