High-precision regressors for particle physics
- URL: http://arxiv.org/abs/2302.00753v1
- Date: Thu, 2 Feb 2023 16:55:12 GMT
- Title: High-precision regressors for particle physics
- Authors: Fady Bishara, Ayan Paul, and Jennifer Dy
- Abstract summary: Monte Carlo simulations of physics processes at particle colliders like the Large Hadron Collider at CERN take up a major fraction of the computational budget.
Since the necessary number of data points per simulation is on the order of $10^9$ - $10^{12}$, machine learning regressors can be used in place of physics simulators.
We show that these regressors can speed up simulations by a factor of $10^3$ - $10^6$ over the first-principles computations currently used in Monte Carlo simulations.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monte Carlo simulations of physics processes at particle colliders like the
Large Hadron Collider at CERN take up a major fraction of the computational
budget. For some simulations, a single data point takes seconds, minutes, or
even hours to compute from first principles. Since the necessary number of data
points per simulation is on the order of $10^9$ - $10^{12}$, machine learning
regressors can be used in place of physics simulators to significantly reduce
this computational burden. However, this task requires high-precision
regressors that can deliver data with relative errors of less than $1\%$ or
even $0.1\%$ over the entire domain of the function. In this paper, we develop
optimal training strategies and tune various machine learning regressors to
satisfy the high-precision requirement. We leverage symmetry arguments from
particle physics to optimize the performance of the regressors. Inspired by
ResNets, we design a Deep Neural Network with skip connections that outperforms
fully connected Deep Neural Networks. We find that at lower dimensions, boosted
decision trees far outperform neural networks while at higher dimensions neural
networks perform significantly better. We show that these regressors can speed
up simulations by a factor of $10^3$ - $10^6$ over the first-principles
computations currently used in Monte Carlo simulations. Additionally, using
symmetry arguments derived from particle physics, we reduce the number of
regressors necessary for each simulation by an order of magnitude. Our work can
significantly reduce the training and storage burden of Monte Carlo simulations
at current and future collider experiments.
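
The abstract highlights two concrete ingredients: a ResNet-inspired deep neural network whose skip connections help it beat plain fully connected networks, and a precision target of relative errors below $1\%$ or even $0.1\%$ over the entire domain. The following is a minimal sketch of those ingredients in PyTorch; the class names, layer widths, activation choice, and the `max_relative_error` helper are illustrative assumptions, not the authors' published architecture or code.

```python
# Illustrative sketch only; not the architecture or code released with the paper.
import torch
import torch.nn as nn


class SkipBlock(nn.Module):
    """Two dense layers whose output is added back onto the block input."""

    def __init__(self, width: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(width, width), nn.SiLU(),
            nn.Linear(width, width),
        )
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(x + self.net(x))  # the skip connection


class SkipRegressor(nn.Module):
    """Maps a low-dimensional phase-space point to a single scalar target."""

    def __init__(self, in_dim: int, width: int = 128, n_blocks: int = 4):
        super().__init__()
        self.inp = nn.Sequential(nn.Linear(in_dim, width), nn.SiLU())
        self.blocks = nn.Sequential(*[SkipBlock(width) for _ in range(n_blocks)])
        self.out = nn.Linear(width, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(self.blocks(self.inp(x)))


def max_relative_error(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Worst-case relative error; the paper targets values below 1e-2 or 1e-3."""
    return ((pred - target).abs() / target.abs().clamp_min(1e-30)).max().item()


if __name__ == "__main__":
    model = SkipRegressor(in_dim=2)        # e.g. a two-dimensional kinematic input
    x = torch.rand(1024, 2)
    with torch.no_grad():
        y_pred = model(x).squeeze(-1)
    # In practice the targets come from the slow first-principles computation.
    y_true = torch.rand(1024) + 0.1
    print(f"max relative error: {max_relative_error(y_pred, y_true):.3e}")
```

The additive skip connections let gradients flow around each pair of dense layers, which typically makes deep regressors easier to optimize toward the sub-percent accuracy quoted above; the untrained model here is only meant to show how the precision criterion would be evaluated.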
Related papers
- Cosmological Analysis with Calibrated Neural Quantile Estimation and Approximate Simulators [0.0]
We introduce a new Simulation-Based Inference (SBI) method that leverages a large number of approximate simulations for training and a small number of high-fidelity simulations for calibration.
As a proof of concept, we demonstrate that cosmological parameters can be inferred at field level from projected 2-dim dark matter density maps up to $k_{\rm max}\sim 1.5\,h$/Mpc at $z=0$.
The calibrated posteriors closely match those obtained by directly training on $\sim 10^4$ expensive Particle-Particle (PP) simulations, but at a fraction of the computational cost.
arXiv Detail & Related papers (2024-11-22T05:53:46Z) - SparseProp: Efficient Event-Based Simulation and Training of Sparse
Recurrent Spiking Neural Networks [4.532517021515834]
Spiking Neural Networks (SNNs) are biologically-inspired models that are capable of processing information in streams of action potentials.
We introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs.
arXiv Detail & Related papers (2023-12-28T18:48:10Z) - Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
The Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the amount of additional data required.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z) - Machine Learning methods for simulating particle response in the Zero
Degree Calorimeter at the ALICE experiment, CERN [8.980453507536017]
Currently, over half of the computing power of the CERN GRID is used to run High Energy Physics simulations.
The recent updates at the Large Hadron Collider (LHC) create the need for developing more efficient simulation methods.
We propose an alternative approach to the problem that leverages machine learning.
arXiv Detail & Related papers (2023-06-23T16:45:46Z) - Fast emulation of cosmological density fields based on dimensionality
reduction and supervised machine-learning [0.0]
We show that it is possible to perform fast dark matter density field emulations with competitive accuracy using simple machine-learning approaches.
New density cubes for different cosmological parameters can be estimated without relying directly on new N-body simulations.
arXiv Detail & Related papers (2023-04-12T18:29:26Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Continual learning autoencoder training for a particle-in-cell
simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of physics simulations with high resolution.
This high resolution will impact the training of machine learning models, since storing such a large amount of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently with a running simulation, without storing data on disk.
arXiv Detail & Related papers (2022-11-09T09:55:14Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Reduced Precision Strategies for Deep Learning: A High Energy Physics
Generative Adversarial Network Use Case [0.19788841311033123]
A promising approach to making deep learning more efficient is to quantize the parameters of the neural networks to reduced precision.
In this paper we analyse the effects of low precision inference on a complex deep generative adversarial network model (a generic parameter-quantization sketch appears after this list).
arXiv Detail & Related papers (2021-03-18T10:20:23Z) - Data-Efficient Learning for Complex and Real-Time Physical Problem
Solving using Augmented Simulation [49.631034790080406]
We present a task for navigating a marble to the center of a circular maze.
We present a model that learns to move a marble in the complex environment within minutes of interacting with the real system.
arXiv Detail & Related papers (2020-11-14T02:03:08Z) - Quantum Algorithms for Simulating the Lattice Schwinger Model [63.18141027763459]
We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings.
In lattice units, we find a Schwinger model on $N/2$ physical sites with coupling constant $x^{-1/2}$ and electric field cutoff $x^{-1/2}\Lambda$.
We estimate observables which we cost in both the NISQ and fault-tolerant settings by assuming a simple target observable: the mean pair density.
arXiv Detail & Related papers (2020-02-25T19:18:36Z)
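
As referenced in the "Reduced Precision Strategies for Deep Learning" entry above, a common way to probe reduced-precision inference is to round-trip a trained model's parameters through a lower-precision dtype and compare its outputs against the full-precision reference. The sketch below simulates that check on a toy stand-in network in PyTorch; the model, layer sizes, and dtype are illustrative assumptions and do not reproduce the cited paper's GAN or its quantization pipeline.

```python
# Illustrative sketch only; a toy stand-in, not the GAN studied in the cited paper.
import copy

import torch
import torch.nn as nn


def make_toy_generator(latent_dim: int = 16, out_dim: int = 64) -> nn.Module:
    """A small fully connected network standing in for a much larger generator."""
    return nn.Sequential(
        nn.Linear(latent_dim, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, out_dim),
    )


def quantize_roundtrip(model: nn.Module, dtype: torch.dtype = torch.float16) -> nn.Module:
    """Copy the model and round-trip its parameters through a lower-precision dtype.

    Compute still happens in float32, so this isolates the error introduced by
    storing the parameters at reduced precision.
    """
    quantized = copy.deepcopy(model)
    with torch.no_grad():
        for p in quantized.parameters():
            p.copy_(p.to(dtype).float())
    return quantized


if __name__ == "__main__":
    torch.manual_seed(0)
    reference = make_toy_generator().eval()
    reduced = quantize_roundtrip(reference, torch.float16).eval()

    z = torch.randn(256, 16)
    with torch.no_grad():
        y_ref = reference(z)
        y_red = reduced(z)

    rel = (y_ref - y_red).abs() / y_ref.abs().clamp_min(1e-6)
    print(f"mean relative deviation from float32: {rel.mean().item():.2e}")
```

Running the forward pass natively in float16 (e.g. on a GPU) would be the next step; the round-trip above only quantifies how much information the parameters lose when stored at reduced precision.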