Reduced Precision Strategies for Deep Learning: A High Energy Physics
Generative Adversarial Network Use Case
- URL: http://arxiv.org/abs/2103.10142v1
- Date: Thu, 18 Mar 2021 10:20:23 GMT
- Title: Reduced Precision Strategies for Deep Learning: A High Energy Physics
Generative Adversarial Network Use Case
- Authors: Florian Rehm, Sofia Vallecorsa, Vikram Saletore, Hans Pabst, Adel
Chaibi, Valeriu Codreanu, Kerstin Borras, Dirk Krücker
- Abstract summary: A promising approach to make deep learning more efficient is to quantize the parameters of the neural networks to reduced precision.
In this paper we analyse the effects of low precision inference on a complex deep generative adversarial network model.
- Score: 0.19788841311033123
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning is finding its way into high energy physics by replacing
traditional Monte Carlo simulations. However, deep learning still requires an
excessive amount of computational resources. A promising approach to make deep
learning more efficient is to quantize the parameters of the neural networks to
reduced precision. Reduced precision computing is extensively used in modern
deep learning and results in lower inference execution time, a smaller memory
footprint and reduced memory bandwidth. In this paper we analyse the effects of
low precision inference on a complex deep generative adversarial network model.
The use case which we are addressing is calorimeter detector simulations of
subatomic particle interactions in accelerator based high energy physics. We
employ the novel Intel low precision optimization tool (iLoT) for quantization
and compare the results to the quantized model from TensorFlow Lite. In the
performance benchmark we gain a speed-up of 1.73x on Intel hardware for the
quantized iLoT model compared to the initial, unquantized model. Using several
self-developed, physics-inspired metrics, we validate that the quantized iLoT
model shows a smaller loss of physical accuracy than the TensorFlow Lite model.
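The quantization the abstract refers to maps float32 weights to 8-bit integers. As a minimal pure-Python sketch of generic affine int8 post-training quantization (an illustration of the general technique, not the actual iLoT or TensorFlow Lite implementation):

```python
def quantize_int8(weights):
    """Affine (asymmetric) quantization of float weights to int8.

    Returns the int8 values plus the (scale, zero_point) pair
    needed to map them back to approximate floats.
    """
    w_min, w_max = min(weights), max(weights)
    # int8 spans 256 levels; guard against a constant tensor (zero range).
    scale = (w_max - w_min) / 255.0 or 1.0
    zero_point = round(-128 - w_min / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate float weights."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-0.51, -0.02, 0.0, 0.33, 1.27]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
# The per-weight quantization error stays within about one step size.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale
```

Each weight is stored in 8 bits instead of 32, which is where the memory-footprint and bandwidth savings come from; the accuracy question the paper studies is how much this rounding error degrades physics observables in the GAN output.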
Related papers
- Optimizing Dense Feed-Forward Neural Networks [0.0]
We propose a novel feed-forward neural network constructing method based on pruning and transfer learning.
Our approach can reduce the number of parameters by more than 70%.
We also evaluate the level of transfer learning by comparing the refined model with the same network trained from scratch.
arXiv Detail & Related papers (2023-12-16T23:23:16Z)
- Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the required additional data.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z)
- Towards provably efficient quantum algorithms for large-scale machine-learning models [11.440134080370811]
We show that fault-tolerant quantum computing could possibly provide provably efficient resolutions for generic (stochastic) gradient descent algorithms.
We benchmark instances of large machine learning models from 7 million to 103 million parameters.
arXiv Detail & Related papers (2023-03-06T19:00:27Z)
- Human Trajectory Prediction via Neural Social Physics [63.62824628085961]
Trajectory prediction has been widely pursued in many fields, and many model-based and model-free methods have been explored.
We propose a new method combining both methodologies based on a new Neural Differential Equation model.
Our new model (Neural Social Physics or NSP) is a deep neural network within which we use an explicit physics model with learnable parameters.
arXiv Detail & Related papers (2022-07-21T12:11:18Z)
- LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time [57.52251547365967]
We propose a method for training a "compressible subspace" of neural networks that contains a fine-grained spectrum of models.
We present results for achieving arbitrarily fine-grained accuracy-efficiency trade-offs at inference time for structured and unstructured sparsity.
Our algorithm extends to quantization at variable bit widths, achieving accuracy on par with individually trained networks.
arXiv Detail & Related papers (2021-10-08T17:03:34Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- ForceNet: A Graph Neural Network for Large-Scale Quantum Calculations [86.41674945012369]
We develop a scalable and expressive Graph Neural Network model, ForceNet, to approximate atomic forces.
Our proposed ForceNet is able to predict atomic forces more accurately than state-of-the-art physics-based GNNs.
arXiv Detail & Related papers (2021-03-02T03:09:06Z)
- Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
arXiv Detail & Related papers (2021-02-22T19:00:05Z)
- Training Deep Neural Networks with Constrained Learning Parameters [4.917317902787792]
A significant portion of deep learning tasks would run on edge computing systems.
We propose the Combinatorial Neural Network Training Algorithm (CoNNTrA)
CoNNTrA trains deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet data sets.
Our results indicate that CoNNTrA models use 32x less memory and have error rates on par with the Backpropagation models.
arXiv Detail & Related papers (2020-09-01T16:20:11Z)
- Automatic heterogeneous quantization of deep neural networks for low-latency inference on the edge for particle detectors [5.609098985493794]
We introduce a method for designing optimally heterogeneously quantized versions of deep neural network models for minimum-energy, high-accuracy, nanosecond inference and fully automated deployment on chip.
This is crucial for the event selection procedure in proton-proton collisions at the CERN Large Hadron Collider, where resources are strictly limited and a latency of $\mathcal{O}(1)\,\mu$s is required.
arXiv Detail & Related papers (2020-06-15T15:07:49Z)
- Fast Modeling and Understanding Fluid Dynamics Systems with Encoder-Decoder Networks [0.0]
We show that an accurate deep-learning-based proxy model can be taught efficiently by a finite-volume-based simulator.
Compared to traditional simulation, the proposed deep learning approach enables much faster forward computation.
We quantify the sensitivity of the deep learning model to key physical parameters and hence demonstrate that the inversion problems can be solved with great acceleration.
arXiv Detail & Related papers (2020-06-09T17:14:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.