Deep learning with photosensor timing information as a background
rejection method for the Cherenkov Telescope Array
- URL: http://arxiv.org/abs/2103.06054v1
- Date: Wed, 10 Mar 2021 13:54:43 GMT
- Title: Deep learning with photosensor timing information as a background
rejection method for the Cherenkov Telescope Array
- Authors: Samuel Spencer, Thomas Armstrong, Jason Watson, Salvatore Mangano,
Yves Renier, Garret Cotter
- Abstract summary: New deep learning techniques present promising new analysis methods for Imaging Atmospheric Cherenkov Telescopes (IACTs).
CNNs could provide a direct event classification method that uses the entire information contained within the Cherenkov shower image.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: New deep learning techniques present promising new analysis methods for
Imaging Atmospheric Cherenkov Telescopes (IACTs) such as the upcoming Cherenkov
Telescope Array (CTA). In particular, the use of Convolutional Neural Networks
(CNNs) could provide a direct event classification method that uses the entire
information contained within the Cherenkov shower image, bypassing the need to
Hillas parameterise the image and allowing fast processing of the data.
Existing work in this field has utilised images of the integrated charge from
IACT camera photomultipliers; however, the majority of current- and
upcoming-generation IACT cameras have the capacity to read out the entire photosensor
waveform following a trigger. As the arrival times of Cherenkov photons from
Extensive Air Showers (EAS) at the camera plane are dependent upon the altitude
of their emission and the impact distance from the telescope, these waveforms
contain information potentially useful for IACT event classification.
In this proof-of-concept simulation study, we investigate the potential for
using these camera pixel waveforms with new deep learning techniques as a
background rejection method, against both proton and electron induced EAS. We
find that an effective means of utilising this information is to create a set of
seven additional 2-dimensional pixel maps of waveform parameters, to be fed into the
machine learning algorithm along with the integrated charge image. Whilst we
ultimately find that the only classification power against electrons is based
upon event direction, methods based upon timing information appear to
outperform similar charge-based methods for gamma/hadron separation. We also
review existing methods of event classification using a combination of deep
learning and timing information in other astroparticle physics experiments.
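As a hedged illustration of the method the abstract describes, the sketch below derives per-pixel waveform-parameter maps and stacks them with the integrated charge image as extra input channels for a small CNN classifier. The specific seven parameters, the square 48x48 camera grid, and the network are illustrative assumptions; the abstract does not specify them.

```python
# A minimal sketch, not the paper's exact pipeline: the seven waveform
# parameters, the square camera grid, and the network below are assumptions.
import numpy as np
import torch
import torch.nn as nn

def waveform_parameter_maps(wf: np.ndarray, dt: float = 1.0) -> np.ndarray:
    """wf: (H, W, n_samples) calibrated pixel waveforms.
    Returns (7, H, W): seven per-pixel timing/pulse-shape summary maps."""
    t = np.arange(wf.shape[-1]) * dt
    w = np.clip(wf, 0.0, None) + 1e-12                    # non-negative weights
    peak = wf.max(axis=-1)                                # 1. peak amplitude
    t_peak = wf.argmax(axis=-1) * dt                      # 2. time of maximum
    mean = wf.mean(axis=-1)                               # 3. mean sample value
    std = wf.std(axis=-1)                                 # 4. sample spread
    t_mean = (w * t).sum(-1) / w.sum(-1)                  # 5. weighted mean arrival time
    t_rms = np.sqrt((w * (t - t_mean[..., None]) ** 2).sum(-1) / w.sum(-1))  # 6. pulse width
    tot = (wf >= 0.5 * peak[..., None]).sum(-1) * dt      # 7. time over half-maximum
    return np.stack([peak, t_peak, mean, std, t_mean, t_rms, tot])

wf = np.random.rand(48, 48, 128)                          # toy camera: 128 samples/pixel
charge = wf.sum(axis=-1)[None]                            # integrated-charge image (1, H, W)
x = torch.from_numpy(np.concatenate([charge, waveform_parameter_maps(wf)])).float()

# Small CNN classifying the 8-channel image directly (gamma vs. background),
# with no intermediate Hillas parameterisation step.
net = nn.Sequential(
    nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 2),
)
logits = net(x.unsqueeze(0))                              # (1, 2) class logits
```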
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original spatiotemporal resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
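A minimal sketch of the self-supervised photon-prediction idea summarised above, assuming a random hold-out of binary photon detections and a Bernoulli (BCE) objective; the paper's actual loss, masking scheme, and architecture are not given in this summary.

```python
# Hedged sketch: predict held-out photons from the visible ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(                                  # toy denoiser over binary frames
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
frames = (torch.rand(4, 1, 64, 64) < 0.05).float()    # sparse binary photon data
mask = torch.rand_like(frames) < 0.5                  # photons to hold out
inputs = frames * (~mask).float()                     # network sees visible photons only
loss = F.binary_cross_entropy_with_logits(
    net(inputs)[mask], frames[mask]                   # self-supervision: predict held-out photons
)
loss.backward()
```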
arXiv Detail & Related papers (2024-10-30T17:30:35Z) - Enhancing Events in Neutrino Telescopes through Deep Learning-Driven Super-Resolution [0.0]
We propose a novel technique that learns photon transport through the detector medium through the use of deep learning-driven super-resolution of data events.
Our strategy arranges additional "virtual" optical modules within an existing detector geometry and trains a convolutional neural network to predict the hits on these virtual optical modules.
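A hedged sketch of the super-resolution idea above: a CNN predicts hits on a denser grid of "virtual" modules from hits on real ones. The 2-D grid geometry and the upsampling network are assumptions for illustration, not the paper's detector layout.

```python
# Minimal sketch under an assumed 2-D module grid.
import torch
import torch.nn as nn

upsampler = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 1, kernel_size=2, stride=2),   # 2x denser module grid
)
real_hits = torch.relu(torch.randn(4, 1, 10, 10))         # toy charges on real modules
virtual_hits = upsampler(real_hits)                       # (4, 1, 20, 20) predicted hits
```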
arXiv Detail & Related papers (2024-08-16T01:20:27Z) - Deep(er) Reconstruction of Imaging Cherenkov Detectors with Swin Transformers and Normalizing Flow Models [0.0]
Imaging Cherenkov detectors are crucial for particle identification (PID) in nuclear and particle physics experiments.
This paper focuses on the DIRC detector, which presents complex hit patterns and is also used for PID of pions and kaons in the GlueX experiment at JLab.
We present Deep(er)RICH, an extension of the seminal DeepRICH work, offering improved and faster PID compared to traditional methods.
arXiv Detail & Related papers (2024-07-10T05:37:02Z) - A Data-Driven Approach for Mitigating Dark Current Noise and Bad Pixels in Complementary Metal Oxide Semiconductor Cameras for Space-based Telescopes [2.4489471766462625]
We introduce a data-driven framework for mitigating dark current noise and bad pixels for CMOS cameras.
Our approach involves two key steps: pixel clustering and function fitting.
Results show a considerable improvement in the detection efficiency of space-based telescopes.
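A minimal sketch of the stated two steps, assuming pixels are clustered by their dark-current response across calibration temperatures and a quadratic function is fitted per cluster; both choices are illustrative, not the paper's exact features or model.

```python
# Hedged sketch: cluster pixels, then fit a per-cluster dark-current model.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
temps = np.linspace(-20.0, 20.0, 9)                   # calibration temperatures (deg C)
dark = rng.gamma(2.0, 1.0, (4096, temps.size))        # toy per-pixel dark levels vs. temperature

# Step 1: pixel clustering -- group pixels with similar dark-current curves.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(dark)

# Step 2: function fitting -- fit dark current vs. temperature per cluster.
fits = {k: np.polyfit(temps, dark[labels == k].mean(axis=0), deg=2)
        for k in range(8)}
dark_at_5c = {k: np.polyval(c, 5.0) for k, c in fits.items()}  # predicted correction
```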
arXiv Detail & Related papers (2024-03-15T11:15:06Z) - Image Restoration with Point Spread Function Regularization and Active
Learning [5.575847437953924]
Large-scale astronomical surveys can capture numerous images of celestial objects, including galaxies and nebulae.
However, varying noise levels and point spread functions can hamper the accuracy and efficiency of information extraction from these images.
We propose a novel image restoration algorithm that connects a deep learning-based restoration algorithm with a high-fidelity telescope simulator.
arXiv Detail & Related papers (2023-10-31T23:16:26Z) - On the Generation of a Synthetic Event-Based Vision Dataset for
Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
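A hedged sketch of the general mechanism behind such pipelines, using a standard event-camera model: an event fires whenever a pixel's log-intensity changes by more than a contrast threshold. The threshold and the one-event-per-frame simplification are assumptions; the paper's actual tooling is not detailed in this summary.

```python
# Minimal sketch: convert a rendered image sequence into events.
import numpy as np

def images_to_events(frames: np.ndarray, threshold: float = 0.2):
    """frames: (T, H, W) intensity images -> list of (t, y, x, polarity)."""
    ref = np.log(frames[0] + 1e-6)                     # per-pixel reference level
    events = []
    for t in range(1, frames.shape[0]):
        cur = np.log(frames[t] + 1e-6)
        diff = cur - ref
        for pol in (1, -1):
            ys, xs = np.where(pol * diff >= threshold)
            events += [(t, y, x, pol) for y, x in zip(ys, xs)]
        fired = np.abs(diff) >= threshold
        ref[fired] = cur[fired]                        # reset reference where events fired
    return events

events = images_to_events(np.random.rand(5, 32, 32))   # toy rendered sequence
```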
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - Energy Reconstruction in Analysis of Cherenkov Telescopes Images in
TAIGA Experiment Using Deep Learning Methods [0.0]
This paper presents the analysis of simulated Monte Carlo images by several Deep Learning methods for a single telescope (mono-mode) and multiple IACT telescopes (stereo-mode).
The quality of the energy reconstruction was estimated and the energy spectra were analyzed using several types of neural networks.
arXiv Detail & Related papers (2022-11-16T15:24:32Z) - Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
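A minimal sketch of a recurrent depth predictor for event data, standing in for the paper's architecture (which this summary does not specify): per-time-step event frames update a convolutional hidden state that is decoded into a dense depth map.

```python
# Hedged sketch: convolutional recurrence over event frames.
import torch
import torch.nn as nn

class RecurrentDepth(nn.Module):
    def __init__(self, ch: int = 16):
        super().__init__()
        self.ch = ch
        self.encode = nn.Conv2d(2 + ch, ch, 3, padding=1)  # events + hidden state
        self.decode = nn.Conv2d(ch, 1, 3, padding=1)       # per-pixel depth map

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (T, B, 2, H, W); channels = positive / negative event counts
        T, B, _, H, W = events.shape
        h = events.new_zeros(B, self.ch, H, W)
        for t in range(T):
            h = torch.tanh(self.encode(torch.cat([events[t], h], dim=1)))
        return self.decode(h)

depth = RecurrentDepth()(torch.randn(8, 2, 2, 48, 48))     # -> (2, 1, 48, 48)
```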
arXiv Detail & Related papers (2020-10-16T12:36:23Z) - Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise present in Radar measurements is one of the key reasons that prevents the existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z) - Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion [51.19260542887099]
We show that self-supervision can be used to learn accurate depth and ego-motion estimation without prior knowledge of the camera model.
Inspired by the geometric model of Grossberg and Nayar, we introduce Neural Ray Surfaces (NRS), convolutional networks that represent pixel-wise projection rays.
We demonstrate the use of NRS for self-supervised learning of visual odometry and depth estimation from raw videos obtained using a wide variety of camera systems.
arXiv Detail & Related papers (2020-08-15T02:29:13Z) - End-to-end Learning for Inter-Vehicle Distance and Relative Velocity
Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.