First Full-Event Reconstruction from Imaging Atmospheric Cherenkov
Telescope Real Data with Deep Learning
- URL: http://arxiv.org/abs/2105.14927v1
- Date: Mon, 31 May 2021 12:51:42 GMT
- Title: First Full-Event Reconstruction from Imaging Atmospheric Cherenkov
Telescope Real Data with Deep Learning
- Authors: Mikaël Jacquemont (LAPP), Thomas Vuillaume (LAPP), Alexandre Benoit
(LISTIC), Gilles Maurin (LAPP), Patrick Lambert (LISTIC), Giovanni Lamanna
(LAPP)
- Abstract summary: The Cherenkov Telescope Array is the future of ground-based gamma-ray astronomy.
Its first prototype telescope built on-site, the Large Size Telescope 1, is currently under commissioning and taking its first scientific data.
We present for the first time the development of a full-event reconstruction based on deep convolutional neural networks and its application to real data.
- Score: 55.41644538483948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Cherenkov Telescope Array is the future of ground-based gamma-ray
astronomy. Its first prototype telescope built on-site, the Large Size
Telescope 1, is currently under commissioning and taking its first scientific
data. In this paper, we present for the first time the development of a
full-event reconstruction based on deep convolutional neural networks and its
application to real data. We show that it outperforms the standard analysis,
both on simulated and on real data, thus validating the deep approach for the
CTA data analysis. This work also illustrates the difficulty of moving from
simulated data to actual data.
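To make concrete what a full-event reconstruction network consumes and produces, the sketch below shows a minimal multi-task convolutional network that maps a calibrated Cherenkov camera image to an energy estimate, an arrival direction, and a gamma/hadron score. This is a simplified illustration in PyTorch under assumed input shapes and layer widths; it is not the architecture developed in the paper.

```python
# Minimal sketch (not the paper's architecture): a multi-task CNN that takes a
# calibrated Cherenkov camera image and jointly predicts the primary's energy,
# arrival direction and a gamma/hadron score. Input shape, channel meaning and
# layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class FullEventCNN(nn.Module):
    def __init__(self, in_channels: int = 2):  # e.g. pixel charge + peak time
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.energy_head = nn.Linear(64, 1)     # log10(E) regression
        self.direction_head = nn.Linear(64, 2)  # (alt, az) offsets
        self.class_head = nn.Linear(64, 1)      # gamma vs. hadron logit

    def forward(self, x):
        feats = self.backbone(x)
        return {
            "energy": self.energy_head(feats),
            "direction": self.direction_head(feats),
            "gammaness": torch.sigmoid(self.class_head(feats)),
        }

# Usage on a batch of 8 images resampled to a 55x55 grid (size is an assumption).
model = FullEventCNN()
outputs = model(torch.randn(8, 2, 55, 55))
```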
Related papers
- TelescopeML -- I. An End-to-End Python Package for Interpreting Telescope Datasets through Training Machine Learning Models, Generating Statistical Reports, and Visualizing Results [1.3372051498158442]
TelescopeML is a Python package developed to perform three main tasks.
It processes synthetic astronomical datasets for training a CNN model and prepares the observational dataset for later use in prediction.
arXiv Detail & Related papers (2024-07-24T00:44:52Z)
- Spherinator and HiPSter: Representation Learning for Unbiased Knowledge Discovery from Simulations [0.0]
We describe a new, unbiased, and machine learning based approach to obtain useful scientific insights from a broad range of simulations.
Our concept is based on applying nonlinear dimensionality reduction to learn compact representations of the data in a low-dimensional space.
We present a prototype using a rotationally invariant hyperspherical variational convolutional autoencoder, utilizing a power distribution in the latent space, and trained on galaxies from the IllustrisTNG simulation.
arXiv Detail & Related papers (2024-06-06T07:34:58Z)
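The Spherinator entry above describes a rotationally invariant hyperspherical variational convolutional autoencoder. The sketch below conveys only the core idea of a convolutional autoencoder whose latent code is constrained to the unit hypersphere; plain L2 normalization stands in for the variational latent distribution, and all image and layer sizes are assumptions.

```python
# Simplified sketch of the idea behind the Spherinator entry above: a convolutional
# autoencoder whose latent code lives on the unit hypersphere. The real model is a
# variational autoencoder with a power distribution in the latent space; here plain
# L2 normalization stands in for that, and all image/layer sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphericalAE(nn.Module):
    def __init__(self, latent_dim: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(), nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = F.normalize(self.encoder(x), dim=1)  # project latent onto the unit sphere
        return self.decoder(z), z

# Usage on 64x64 RGB galaxy cutouts (size is an assumption).
model = HypersphericalAE()
images = torch.rand(4, 3, 64, 64)
recon, z = model(images)
loss = F.mse_loss(recon, images)
```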
- ARFA: An Asymmetric Receptive Field Autoencoder Model for Spatiotemporal Prediction [55.30913411696375]
We propose an Asymmetric Receptive Field Autoencoder (ARFA) model, which introduces receptive field modules of corresponding sizes in the encoder and decoder.
In the encoder, we present a large kernel module for global spatiotemporal feature extraction. In the decoder, we develop a small kernel module for local spatiotemporal reconstruction.
We construct the RainBench, a large-scale radar echo dataset for precipitation prediction, to address the scarcity of meteorological data in the domain.
arXiv Detail & Related papers (2023-09-01T07:55:53Z)
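The ARFA entry above pairs a large-kernel encoder with a small-kernel decoder. The sketch below illustrates that asymmetric receptive field idea in its simplest 2D form; the kernel sizes, channel counts, and the absence of any temporal modelling are assumptions, not the published model.

```python
# Rough sketch of the asymmetric receptive field idea from the ARFA entry above:
# the encoder uses large convolution kernels (wide receptive field, global context)
# while the decoder uses small kernels (local reconstruction). Kernel sizes, channel
# counts and the plain 2D setting are assumptions, not the published model.
import torch
import torch.nn as nn

class AsymmetricRFAutoencoder(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        # Large-kernel encoder: each layer sees a wide spatial neighborhood.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
        )
        # Small-kernel decoder: local refinement of the reconstruction.
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Usage on a batch of single-channel radar-echo-like frames (size assumed).
model = AsymmetricRFAutoencoder()
pred = model(torch.randn(2, 1, 128, 128))
```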
- F2SD: A dataset for end-to-end group detection algorithms [3.3117512968892355]
We develop a large-scale dataset of simulated images for F-formation detection, called the F-formation Simulation dataset (F2SD).
F2SD contains nearly 60,000 images simulated from GTA-5, with bounding boxes and orientation information on images.
It is challenging to construct such a large-scale simulated dataset while keeping it realistic.
arXiv Detail & Related papers (2022-11-20T15:42:22Z)
- PCGen: Point Cloud Generator for LiDAR Simulation [10.692184635629792]
Existing methods generate data that are noisier and more complete than real point clouds.
We propose FPA raycasting and surrogate model raydrop.
With minimal training data, the surrogate model can generalize to different geographies and scenes.
Results show that object detection models trained on simulated data can achieve results similar to models trained on real data.
arXiv Detail & Related papers (2022-10-17T04:13:21Z)
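The PCGen entry above mentions a surrogate raydrop model. The sketch below shows the general idea of learned raydrop: a small network scores each simulated LiDAR return, and low-probability returns are removed so the synthetic scan loses points the way a real sensor does. The per-point features, network size, and threshold are illustrative assumptions, not the paper's surrogate model.

```python
# General idea of learned raydrop (see the PCGen entry above): a small network
# predicts, per simulated return, the probability that a real sensor would have
# recorded it, and low-probability points are removed. Features, network size and
# threshold are illustrative assumptions, not the published surrogate model.
import torch
import torch.nn as nn

class RaydropSurrogate(nn.Module):
    def __init__(self, n_features: int = 4):  # e.g. range, incidence angle, intensity, height
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, point_features):
        # Returns a keep-probability per point, shape (N,).
        return torch.sigmoid(self.mlp(point_features)).squeeze(-1)

# Usage: drop simulated returns the surrogate deems unlikely to survive.
model = RaydropSurrogate()
features = torch.randn(1000, 4)          # per-point features of a simulated scan
keep_prob = model(features)
kept = features[keep_prob > 0.5]         # surviving returns (threshold is illustrative)
```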
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- Towards Scale Consistent Monocular Visual Odometry by Learning from the Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
arXiv Detail & Related papers (2022-03-11T01:51:54Z)
- Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather [92.84066576636914]
This work addresses the challenging task of LiDAR-based 3D object detection in foggy weather.
We tackle this problem by simulating physically accurate fog into clear-weather scenes.
We are the first to provide strong 3D object detection baselines on the Seeing Through Fog dataset.
arXiv Detail & Related papers (2021-08-11T14:37:54Z)
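The fog-simulation entry above injects physically accurate fog into clear-weather LiDAR scans. The toy sketch below captures only the simplest ingredient of such a model, Beer-Lambert attenuation of each return followed by a detection threshold; the attenuation coefficient and threshold are assumptions, and the paper's optical model is considerably more complete.

```python
# Toy version of the idea in the fog-simulation entry above: attenuate each
# clear-weather LiDAR return with a Beer-Lambert factor for a chosen fog density
# and drop returns whose attenuated intensity falls below a detection threshold.
# The attenuation coefficient and threshold are illustrative assumptions; the
# paper uses a physically accurate optical model, not this simplification.
import numpy as np

def simulate_fog(points_xyz, intensity, alpha=0.06, threshold=0.05):
    """points_xyz: (N, 3) clear-weather points; intensity: (N,) in [0, 1];
    alpha: assumed fog attenuation coefficient in 1/m."""
    ranges = np.linalg.norm(points_xyz, axis=1)
    # Two-way Beer-Lambert attenuation over the path to the target and back.
    attenuated = intensity * np.exp(-2.0 * alpha * ranges)
    keep = attenuated > threshold
    return points_xyz[keep], attenuated[keep]

# Usage on a random toy scan.
pts = np.random.uniform(-50, 50, size=(1000, 3))
inten = np.random.uniform(0.1, 1.0, size=1000)
foggy_pts, foggy_inten = simulate_fog(pts, inten)
```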
- DeepSatData: Building large scale datasets of satellite images for training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
- DeepMerge II: Building Robust Deep Learning Algorithms for Merging Galaxy Identification Across Domains [0.0]
In astronomy, neural networks are often trained on simulation data with the prospect of being used on telescope observations.
We show that the addition of each domain adaptation technique improves the performance of a classifier when compared to conventional deep learning algorithms.
We demonstrate this on two examples: between two Illustris-1 simulated datasets of distant merging galaxies, and between Illustris-1 simulated data of nearby merging galaxies and observed data from the Sloan Digital Sky Survey.
arXiv Detail & Related papers (2021-03-02T00:24:10Z)
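The DeepMerge II entry above evaluates domain adaptation techniques for bridging simulated and observed galaxies. The sketch below shows one generic ingredient of that family, a Maximum Mean Discrepancy penalty that pulls simulated-domain and real-domain features together while a classifier is trained on labeled simulations; the kernel, bandwidth, loss weight, and network shapes are assumptions, not the specific methods tested in that paper.

```python
# Generic illustration of one domain adaptation ingredient (see the DeepMerge II
# entry above): a Maximum Mean Discrepancy (MMD) penalty that pulls features of
# simulated and observed galaxies together while a classifier is trained on the
# labeled simulated domain. Kernel choice, bandwidth, loss weight and network
# shapes are assumptions, not the specific methods evaluated in the paper.
import torch
import torch.nn as nn

def rbf_mmd(x, y, bandwidth=1.0):
    """Squared MMD between two feature batches with an RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

feature_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # shared feature extractor
classifier = nn.Linear(32, 2)                              # merger vs. non-merger

sim_inputs = torch.randn(16, 64)    # stand-ins for simulated-galaxy inputs (labeled)
sim_labels = torch.randint(0, 2, (16,))
obs_inputs = torch.randn(16, 64)    # stand-ins for unlabeled observed galaxies

sim_feat, obs_feat = feature_net(sim_inputs), feature_net(obs_inputs)
loss = nn.functional.cross_entropy(classifier(sim_feat), sim_labels) \
       + 0.1 * rbf_mmd(sim_feat, obs_feat)  # 0.1 weight is an arbitrary choice
loss.backward()
```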