A Convolutional Neural Network based Cascade Reconstruction for the
IceCube Neutrino Observatory
- URL: http://arxiv.org/abs/2101.11589v1
- Date: Wed, 27 Jan 2021 18:34:58 GMT
- Title: A Convolutional Neural Network based Cascade Reconstruction for the
IceCube Neutrino Observatory
- Authors: R. Abbasi, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M.
Ahrens, C. Alispach, A. A. Alves Jr., N. M. Amin, R. An, K. Andeen, T.
Anderson, I. Ansseau, G. Anton, C. Argüelles, S. Axani, X. Bai, A.
Balagopal V., A. Barbano, S. W. Barwick, B. Bastian, V. Basu, V. Baum, S.
Baur, R. Bay, J. J. Beatty, K.-H. Becker, J. Becker Tjus, C. Bellenghi, S.
BenZvi, D. Berley, E. Bernardini, D. Z. Besson, G. Binder, D. Bindig, E.
Blaufuss, S. Blot, S. Böser, O. Botner, J. Böttcher, E. Bourbeau, J.
Bourbeau, F. Bradascio, J. Braun, S. Bron, J. Brostean-Kaiser, A. Burgman, R.
S. Busse, M. A. Campana, C. Chen, D. Chirkin, S. Choi, B. A. Clark, K. Clark,
L. Classen, A. Coleman, G. H. Collin, J. M. Conrad, P. Coppin, P. Correa, D.
F. Cowen, R. Cross, P. Dave, C. De Clercq, J. J. DeLaunay, H. Dembinski, K.
Deoskar, S. De Ridder, A. Desai, P. Desiati, K. D. de Vries, G. de Wasseige,
M. de With, T. DeYoung, S. Dharani, A. Diaz, J. C. Díaz-Vélez, H.
Dujmovic, M. Dunkman, M. A. DuVernois, E. Dvorak, T. Ehrhardt, P. Eller, R.
Engel, J. Evans, P. A. Evenson, S. Fahey, A. R. Fazely, S. Fiedlschuster,
A.T. Fienberg, K. Filimonov, C. Finley, L. Fischer, D. Fox, A. Franckowiak,
E. Friedman, A. Fritz, P. F\"urst, T. K. Gaisser, J. Gallagher, E. Ganster,
S. Garrappa, L. Gerhardt, A. Ghadimi, C. Glaser, T. Glauch, T. Glüsenkamp,
A. Goldschmidt, J. G. Gonzalez, S. Goswami, D. Grant, T. Grégoire, Z.
Griffith, S. Griswold, M. Gündüz, C. Haack, A. Hallgren, R. Halliday, L.
Halve, F. Halzen, M. Ha Minh, K. Hanson, J. Hardin, A. A. Harnisch, A.
Haungs, S. Hauser, D. Hebecker, K. Helbing, F. Henningsen, E. C. Hettinger,
S. Hickford, J. Hignight, C. Hill, G. C. Hill, K. D. Hoffman, R. Hoffmann, T.
Hoinka, B. Hokanson-Fasig, K. Hoshina, F. Huang, M. Huber, T. Huber, K.
Hultqvist, M. Hünnefeld, R. Hussain, S. In, N. Iovine, A. Ishihara, M.
Jansson, G. S. Japaridze, M. Jeong, B. J. P. Jones, R. Joppe, D. Kang, W.
Kang, X. Kang, A. Kappes, D. Kappesser, T. Karg, M. Karl, A. Karle, U. Katz,
M. Kauer, M. Kellermann, J. L. Kelley, A. Kheirandish, J. Kim, K. Kin, T.
Kintscher, J. Kiryluk, S. R. Klein, R. Koirala, H. Kolanoski, L. Köpke, C.
Kopper, S. Kopper, D. J. Koskinen, P. Koundal, M. Kovacevich, M. Kowalski, K.
Krings, G. Krückl, N. Kurahashi, A. Kyriacou, C. Lagunas Gualda, J. L.
Lanfranchi, M. J. Larson, F. Lauber, J. P. Lazar, K. Leonard, A.
Leszczyńska, Y. Li, Q. R. Liu, E. Lohfink, C. J. Lozano Mariscal, L. Lu, F.
Lucarelli, A. Ludwig, W. Luszczak, Y. Lyu, W. Y. Ma, J. Madsen, K. B. M.
Mahn, Y. Makino, P. Mallik, S. Mancina, I. C. Mariş, R. Maruyama, K.
Mase, F. McNally, K. Meagher, A. Medina, M. Meier, S. Meighen-Berger, J.
Merz, J. Micallef, D. Mockler, G. Momenté, T. Montaruli, R. W. Moore, K.
Morik, R. Morse, M. Moulai, R. Naab, R. Nagai, U. Naumann, J. Necker, L. V.
Nguyễn, H. Niederhausen, M. U. Nisa, S. C. Nowicki, D. R. Nygren,
A. Obertacke Pollmann, M. Oehler, A. Olivas, E. O'Sullivan, H. Pandya, D. V.
Pankova, N. Park, G. K. Parker, E. N. Paudel, P. Peiffer, C. Pérez de los
Heros, S. Philippen, D. Pieloth, S. Pieper, A. Pizzuto, M. Plum, Y. Popovych,
A. Porcelli, M. Prado Rodriguez, P. B. Price, B. Pries, G. T. Przybylski, C.
Raab, A. Raissi, M. Rameez, K. Rawlins, I. C. Rea, A. Rehman, R. Reimann, M.
Renschler, G. Renzi, E. Resconi, S. Reusch, W. Rhode, M. Richman, B. Riedel,
S. Robertson, G. Roellinghoff, M. Rongen, C. Rott, T. Ruhe, D. Ryckbosch, D.
Rysewyk Cantu, I. Safa, S. E. Sanchez Herrera, A. Sandrock, J. Sandroos, M.
Santander, S. Sarkar, S. Sarkar, K. Satalecka, M. Scharf, M. Schaufel, H.
Schieler, P. Schlunder, T. Schmidt, A. Schneider, J. Schneider, F. G.
Schröder, L. Schumacher, S. Sclafani, D. Seckel, S. Seunarine, A. Sharma,
S. Shefali, M. Silva, B. Skrzypek, B. Smithers, R. Snihur, J. Soedingrekso,
D. Soldin, G. M. Spiczak, C. Spiering, J. Stachurska, M. Stamatikos, T.
Stanev, R. Stein, J. Stettner, A. Steuer, T. Stezelberger, R. G. Stokstad, T.
Stürwald, T. Stuttard, G. W. Sullivan, I. Taboada, F. Tenholt, S.
Ter-Antonyan, S. Tilav, F. Tischbein, K. Tollefson, L. Tomankova, C.
Tönnis, S. Toscano, D. Tosi, A. Trettin, M. Tselengidou, C. F. Tung, A.
Turcati, R. Turcotte, C. F. Turley, J. P. Twagirayezu, B. Ty, M. A. Unland
Elorrieta, N. Valtonen-Mattila, J. Vandenbroucke, D. van Eijk, N. van
Eijndhoven, D. Vannerom, J. van Santen, S. Verpoest, M. Vraeghe, C. Walck, A.
Wallace, T. B. Watson, C. Weaver, A. Weindl, M. J. Weiss, J. Weldert, C.
Wendt, J. Werthebach, M. Weyrauch, B. J. Whelan, N. Whitehorn, K. Wiebe, C.
H. Wiebusch, D. R. Williams, M. Wolf, K. Woschnagg, G. Wrede, J. Wulff, X. W.
Xu, Y. Xu, J. P. Yanez, S. Yoshida, T. Yuan, Z. Zhang
- Abstract summary: Deep neural networks can be extremely powerful, and their usage is computationally inexpensive once the networks are trained.
A reconstruction method based on convolutional architectures and hexagonally shaped kernels is presented.
It can improve upon the reconstruction accuracy, while reducing the time necessary to run the reconstruction by two to three orders of magnitude.
- Score: 0.4282223735043171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continued improvements on existing reconstruction methods are vital to the
success of high-energy physics experiments, such as the IceCube Neutrino
Observatory. In IceCube, further challenges arise as the detector is situated
at the geographic South Pole where computational resources are limited.
However, to perform real-time analyses and to issue alerts to telescopes around
the world, powerful and fast reconstruction methods are desired. Deep neural
networks can be extremely powerful, and their usage is computationally
inexpensive once the networks are trained. These characteristics make a deep
learning-based approach an excellent candidate for the application in IceCube.
A reconstruction method based on convolutional architectures and hexagonally
shaped kernels is presented. The presented method is robust towards systematic
uncertainties in the simulation and has been tested on experimental data. In
comparison to standard reconstruction methods in IceCube, it can improve upon
the reconstruction accuracy, while reducing the time necessary to run the
reconstruction by two to three orders of magnitude.
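The abstract describes the architecture only at a high level. As an illustration of the hexagonal-kernel idea, the sketch below masks a standard square convolution kernel down to a hexagonal neighborhood; the axial-coordinate mask, layer names, and grid shapes are assumptions for illustration, not the IceCube implementation.

```python
import torch
import torch.nn as nn

class HexConv2d(nn.Module):
    """Square convolution masked to a hexagonal neighborhood.

    Illustrative sketch only: IceCube's strings form a hexagonal grid,
    so the square kernel is masked so that only cells within the
    hexagonal (axial) radius contribute.
    """

    def __init__(self, in_ch, out_ch, radius=1):
        super().__init__()
        k = 2 * radius + 1
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=radius)
        # Axial hex distance for offsets (q, r) is max(|q|, |r|, |q + r|).
        mask = torch.zeros(k, k)
        for i in range(k):
            for j in range(k):
                q, r = i - radius, j - radius
                if max(abs(q), abs(r), abs(q + r)) <= radius:
                    mask[i, j] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Apply the hexagonal mask to the kernel on every forward pass.
        return nn.functional.conv2d(
            x, self.conv.weight * self.mask, self.conv.bias,
            padding=self.conv.padding)

# Hypothetical input: per-DOM summary features on a padded 10x10 hex grid.
x = torch.randn(1, 3, 10, 10)
y = HexConv2d(3, 16)(x)
```

Masking the weights keeps the layer compatible with highly optimized square convolutions while restricting the receptive field to the hexagonal string layout.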
Related papers
- Neural Poisson Surface Reconstruction: Resolution-Agnostic Shape
Reconstruction from Point Clouds [53.02191521770926]
We introduce Neural Poisson Surface Reconstruction (nPSR), an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points.
A key advantage of nPSR is that it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation.
Overall, the neural Poisson surface reconstruction not only improves upon the limitations of classical deep neural networks in shape reconstruction but also achieves superior results in terms of reconstruction quality, running time, and resolution agnosticism.
arXiv Detail & Related papers (2023-08-03T13:56:07Z)
- 2D Convolutional Neural Network for Event Reconstruction in IceCube DeepCore [0.0]
IceCube DeepCore is an extension of the IceCube Neutrino Observatory designed to measure GeV scale atmospheric neutrino interactions.
Distinguishing muon neutrinos from other flavors and reconstructing inelasticity are especially difficult tasks at GeV scale energies.
We present a new CNN model that exploits time and depth translational symmetry in IceCube DeepCore data.
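As a hedged sketch of how such a model could look, the snippet below bins pulses into a (depth, time) image per string, with strings as channels, so 2D convolutions share weights along both axes. All shapes and names are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Strings as channels; convolving over (depth, time) encodes the
# translational symmetry in both axes. Shapes are illustrative.
model = nn.Sequential(
    nn.Conv2d(8, 32, kernel_size=3, padding=1),   # 8 strings -> 32 maps
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                      # pool over depth and time
    nn.Flatten(),
    nn.Linear(64, 2),                             # e.g. flavor logits
)

x = torch.randn(16, 8, 60, 64)  # (batch, strings, depth bins, time bins)
logits = model(x)
```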
arXiv Detail & Related papers (2023-07-31T02:37:36Z)
- Understanding Reconstruction Attacks with the Neural Tangent Kernel and Dataset Distillation [110.61853418925219]
We build a stronger version of the dataset reconstruction attack and show how it can provably recover the entire training set in the infinite width regime.
We show, both theoretically and empirically, that reconstructed images tend to be "outliers" in the dataset.
These reconstruction attacks can be used for dataset distillation; that is, we can retrain on reconstructed images and obtain high predictive accuracy.
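The summary does not spell out the attack itself. One common formalization, given here as a simplification of the parameter-matching idea with all names hypothetical, optimizes candidate inputs so that a weighted sum of their parameter gradients reproduces the displacement between initial and trained weights:

```python
import torch

# Schematic only: for networks trained by gradient flow, theta_T - theta_0
# lies in the span of per-example gradients, so we fit candidate inputs x
# and coefficients c to reproduce that parameter displacement.
def attack(model, theta_diff, n_candidates=8, steps=1000, lr=0.1):
    x = torch.randn(n_candidates, 1, 28, 28, requires_grad=True)
    c = torch.randn(n_candidates, requires_grad=True)
    opt = torch.optim.Adam([x, c], lr=lr)
    params = list(model.parameters())
    for _ in range(steps):
        opt.zero_grad()
        # Sum of c_i * grad_theta f(x_i), flattened across all parameters.
        recon = torch.zeros_like(theta_diff)
        for i in range(n_candidates):
            out = model(x[i:i + 1]).sum()
            grads = torch.autograd.grad(out, params, create_graph=True)
            g = torch.cat([gr.reshape(-1) for gr in grads])
            recon = recon + c[i] * g
        loss = ((recon - theta_diff) ** 2).sum()
        loss.backward()
        opt.step()
    return x.detach()
```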
arXiv Detail & Related papers (2023-02-02T21:41:59Z)
- Physics-informed neural networks for gravity currents reconstruction from limited data [0.0]
The present work investigates the use of physics-informed neural networks (PINNs) for the 3D reconstruction of unsteady gravity currents from limited data.
In the PINN context, the flow fields are reconstructed by training a neural network whose objective function penalizes the mismatch between the network predictions and the observed data.
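A minimal sketch of such an objective follows, with a 1D diffusion equation u_t = nu * u_xx standing in for the gravity-current equations actually used in the paper; the data term penalizes the mismatch at observation points and the physics term penalizes the PDE residual at collocation points.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64),
                    nn.Tanh(), nn.Linear(64, 1))
nu = 0.01

def pinn_loss(t_obs, x_obs, u_obs, t_col, x_col):
    # Data term: match the sparse observations.
    u_pred = net(torch.stack([t_obs, x_obs], dim=1))
    data = ((u_pred.squeeze(1) - u_obs) ** 2).mean()
    # Physics term: PDE residual at collocation points via autograd.
    t = t_col.clone().requires_grad_(True)
    x = x_col.clone().requires_grad_(True)
    u = net(torch.stack([t, x], dim=1)).squeeze(1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    physics = ((u_t - nu * u_xx) ** 2).mean()
    return data + physics
```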
arXiv Detail & Related papers (2022-11-03T11:27:29Z)
- A Dataset-Dispersion Perspective on Reconstruction Versus Recognition in Single-View 3D Reconstruction Networks [16.348294592961327]
We introduce the dispersion score, a new data-driven metric that quantifies how dispersed a training set is, and study its effect on neural networks.
We show that the proposed metric is a principal way to analyze reconstruction quality and provides novel information in addition to the conventional reconstruction score.
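The summary does not give the metric's definition, so the snippet below is only a loose stand-in: it approximates dataset dispersion as normalized within-cluster spread from k-means. The dispersion score in the paper may be defined differently.

```python
import numpy as np
from sklearn.cluster import KMeans

def dispersion_score(features: np.ndarray, k: int = 10) -> float:
    """Illustrative stand-in, not the paper's definition."""
    km = KMeans(n_clusters=k, n_init=10).fit(features)
    within = km.inertia_ / len(features)   # mean within-cluster spread
    total = features.var(axis=0).sum()     # total spread of the features
    return within / total                  # near 0: clustered; near 1: dispersed

rng = np.random.default_rng(0)
score = dispersion_score(rng.normal(size=(500, 32)))
```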
arXiv Detail & Related papers (2021-11-30T06:33:35Z)
- Localized Persistent Homologies for more Effective Deep Learning [60.78456721890412]
We introduce an approach that relies on a new filtration function to account for location during network training.
We demonstrate experimentally on 2D images of roads and 3D image stacks of neuronal processes that networks trained in this manner are better at recovering the topology of the curvilinear structures they extract.
arXiv Detail & Related papers (2021-10-12T19:28:39Z)
- Backward Gradient Normalization in Deep Neural Networks [68.8204255655161]
We introduce a new technique for gradient normalization during neural network training.
The gradients are rescaled during the backward pass using normalization layers introduced at certain points within the network architecture.
Results on tests with very deep neural networks show that the new technique effectively controls the gradient norm.
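As a hedged sketch of the idea (the paper's exact rescaling rule may differ), such a layer can be the identity in the forward pass and renormalize the gradient in the backward pass:

```python
import torch
import torch.nn as nn

class GradNormFn(torch.autograd.Function):
    """Identity forward; rescales the incoming gradient to unit L2 norm
    in the backward pass (illustrative choice of normalization)."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out / (grad_out.norm() + 1e-12)

class GradNorm(nn.Module):
    def forward(self, x):
        return GradNormFn.apply(x)

# Inserted at chosen points within the architecture, e.g.:
net = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), GradNorm(),
                    nn.Linear(128, 10))
```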
arXiv Detail & Related papers (2021-06-17T13:24:43Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
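A simplified sketch of the decoupling follows; in the paper the critics additionally learn to track the true downstream loss, and the names and sizes here are assumptions:

```python
import torch
import torch.nn as nn

# Each layer group is paired with a small critic that maps its hidden
# activations to class logits, so the group can update without waiting
# for the true gradient to propagate back from the output layer.
group1 = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
group2 = nn.Sequential(nn.Linear(256, 10))
critic1 = nn.Linear(256, 10)  # local critic for group1

opt1 = torch.optim.SGD(list(group1.parameters()) +
                       list(critic1.parameters()), lr=0.1)
opt2 = torch.optim.SGD(group2.parameters(), lr=0.1)
ce = nn.CrossEntropyLoss()

def step(x, y):
    h = group1(x)
    # Group 1 updates immediately via its critic's estimate of the loss.
    loss1 = ce(critic1(h), y)
    opt1.zero_grad(); loss1.backward(); opt1.step()
    # Group 2 trains on the detached activations with the true loss.
    loss2 = ce(group2(h.detach()), y)
    opt2.zero_grad(); loss2.backward(); opt2.step()
```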
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Multi-fidelity Neural Architecture Search with Knowledge Distillation [69.09782590880367]
We propose a Bayesian multi-fidelity method for neural architecture search: MF-KD.
Knowledge distillation adds to a loss function a term forcing a network to mimic some teacher network.
We show that training for a few epochs with such a modified loss function leads to a better selection of neural architectures than training for a few epochs with a logistic loss.
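The modified loss is standard knowledge distillation; a minimal sketch, with temperature and weighting as assumed hyperparameters rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Hard term: ordinary cross-entropy against the labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft term: push the student's softened logits toward the teacher's.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - alpha) * hard + alpha * soft
```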
arXiv Detail & Related papers (2020-06-15T12:32:38Z)
- Baryon acoustic oscillations reconstruction using convolutional neural networks [1.9262162668141078]
We propose a new scheme to reconstruct the baryon acoustic oscillations (BAO) signal, which contains key cosmological information, based on deep convolutional neural networks (CNNs).
We find that a network trained in one cosmology is able to reconstruct BAO peaks in the others, i.e., recovering information lost to non-linearity, independently of cosmology.
arXiv Detail & Related papers (2020-02-24T13:18:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.