Function Approximation for High-Energy Physics: Comparing Machine
Learning and Interpolation Methods
- URL: http://arxiv.org/abs/2111.14788v1
- Date: Mon, 29 Nov 2021 18:43:57 GMT
- Title: Function Approximation for High-Energy Physics: Comparing Machine
Learning and Interpolation Methods
- Authors: Ibrahim Chahrour and James D. Wells
- Abstract summary: In high-energy physics, the precise computation of the scattering cross-section of a process requires the evaluation of computationally intensive integrals.
A variety of methods in machine learning have been used to tackle this problem, but the motivation for using one method over another is often lacking.
We consider four interpolation and three machine learning techniques and compare their performance on three toy functions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The need to approximate functions is ubiquitous in science, either due to
empirical constraints or high computational cost of accessing the function. In
high-energy physics, the precise computation of the scattering cross-section of
a process requires the evaluation of computationally intensive integrals. A
wide variety of methods in machine learning have been used to tackle this
problem, but the motivation for using one method over another is often lacking.
Comparing these methods is typically highly dependent on the problem at hand,
so we specialize to the case where we can evaluate the function a large number
of times, after which quick and accurate evaluation can take place. We consider
four interpolation and three machine learning techniques and compare their
performance on three toy functions, the four-point scalar Passarino-Veltman
$D_0$ function, and the two-loop self-energy master integral $M$. We find that
in low dimensions ($d = 3$), traditional interpolation techniques like the
Radial Basis Function perform very well, but in higher dimensions ($d=5, 6, 9$)
we find that multi-layer perceptrons (a.k.a. neural networks) do not suffer as
much from the curse of dimensionality and provide the fastest and most accurate
predictions.
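
To make the comparison concrete, here is a minimal sketch (not the authors' code; the toy target, sample sizes, and hyperparameters are assumptions) that pits an RBF interpolant against a small multi-layer perceptron on a $d=3$ toy function, using SciPy and scikit-learn:

```python
# Hedged sketch, not the authors' code: compares an RBF interpolant with a
# small MLP on an assumed d = 3 toy target.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
d = 3

def toy(x):
    # Stand-in target; the paper's toy functions and HEP integrals differ.
    return np.sin(np.pi * x).prod(axis=-1)

X_train = rng.uniform(-1.0, 1.0, size=(2000, d))
y_train = toy(X_train)
X_test = rng.uniform(-1.0, 1.0, size=(500, d))

rbf = RBFInterpolator(X_train, y_train)            # exact at the training nodes
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X_train, y_train)

for name, pred in [("RBF", rbf(X_test)), ("MLP", mlp.predict(X_test))]:
    print(f"{name} max abs error: {np.abs(pred - toy(X_test)).max():.2e}")
```

In this low-dimensional regime the interpolant is typically the stronger baseline, mirroring the paper's $d=3$ finding; the balance shifts toward the MLP as $d$ grows.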
Related papers
- An Approximation Theory Perspective on Machine Learning [1.2289361708127877]
We will discuss emerging trends in machine learning, including the role of shallow/deep networks.
Despite function approximation being a fundamental problem in machine learning, approximation theory does not play a central role in the theoretical foundations of the field.
arXiv Detail & Related papers (2025-06-02T18:50:18Z)
- Fast and close Shannon entropy approximation [0.0]
A non-singular rational approximation of Shannon entropy and its gradient achieves a mean absolute error of $10^{-3}$.
FEA allows around $50\%$ faster computation, requiring only $5$ to $6$ elementary computational operations.
On a set of common benchmarks for the feature selection problem in machine learning, we show that the combined effect of fewer elementary operations, low approximation error, and a non-singular gradient allows significantly better model quality.
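
The paper's exact rational form and coefficients are not reproduced in this summary, so the following sketch only illustrates the underlying idea: fit a generic low-order rational function to the entropy term $h(p) = -p\log p$ by linearized least squares and check the error.

```python
# Hedged sketch: the paper's actual FEA coefficients are not reproduced here.
# We fit a generic rational function to h(p) = -p*log(p) on (0, 1] by
# linearized least squares, illustrating how a non-singular rational form
# can track the entropy term closely.
import numpy as np

p = np.linspace(1e-6, 1.0, 20001)
h = -p * np.log(p)

# Linearize h ~ N(p)/D(p) as h*D(p) = N(p), with D(p) = 1 + d1*p + d2*p^2
# and N(p) = n1*p + n2*p^2 + n3*p^3 (N(0) = 0, so the fit vanishes at p = 0).
A = np.column_stack([p, p**2, p**3, -h * p, -h * p**2])
n1, n2, n3, d1, d2 = np.linalg.lstsq(A, h, rcond=None)[0]

approx = (n1 * p + n2 * p**2 + n3 * p**3) / (1 + d1 * p + d2 * p**2)
# In practice one should also verify that D(p) has no roots on [0, 1].
print("mean abs error:", np.abs(approx - h).mean())
```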
arXiv Detail & Related papers (2025-05-20T11:41:26Z)
- Fast Causal Discovery by Approximate Kernel-based Generalized Score Functions with Linear Computational Complexity [29.444911198185206]
We propose an approximate kernel-based generalized score function with $\mathcal{O}(n)$ time and space complexities.
Our method can not only significantly reduce computational costs, but also achieve comparable accuracy, especially for large datasets.
arXiv Detail & Related papers (2024-12-23T16:51:45Z)
- Approximation Theory, Computing, and Deep Learning on the Wasserstein Space [0.5735035463793009]
We address the challenge of approximating functions in infinite-dimensional spaces from finite samples.
Our focus is on the Wasserstein distance function, which serves as a relevant example.
We adopt three machine learning-based approaches to define functional approximants.
arXiv Detail & Related papers (2023-10-30T13:59:47Z)
- Solving multiscale elliptic problems by sparse radial basis function neural networks [3.5297361401370044]
We propose a sparse radial basis function neural network method to solve elliptic partial differential equations (PDEs) with multiscale coefficients.
Inspired by the deep mixed residual method, we rewrite the second-order problem into a first-order system and employ multiple radial basis function neural networks (RBFNNs) to approximate unknown functions in the system.
The accuracy and effectiveness of the proposed method are demonstrated through a collection of multiscale problems with scale separation, discontinuity and multiple scales from one to three dimensions.
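For intuition, a standard mixed first-order rewrite of this kind (an assumed model form; the paper's exact system may differ) turns $-\nabla\cdot(a(x)\,\nabla u) = f$ into the pair $q = a(x)\,\nabla u$ and $-\nabla\cdot q = f$, with one RBFNN approximating $u$ and another the flux $q$, and the residuals of both equations entering the training loss.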
arXiv Detail & Related papers (2023-09-01T15:11:34Z)
- Integrated Variational Fourier Features for Fast Spatial Modelling with Gaussian Processes [7.5991638205413325]
For $N$ training points, exact inference has $O(N^3)$ cost; with $M \ll N$ features, state-of-the-art sparse variational methods have $O(NM^2)$ cost.
Recently, methods have been proposed using more sophisticated features; these promise $O(M^3)$ cost, with good performance in low-dimensional tasks such as spatial modelling, but they only work with a very limited class of kernels, excluding some of the most commonly used.
In this work, we propose integrated Fourier features, which extend these performance benefits to a very broad class of stationary covariance functions.
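
As background, the simplest Fourier-feature construction is the random Fourier features of Rahimi and Recht; the sketch below is an illustration of that simpler relative, not of the paper's integrated variational features, and approximates an RBF kernel with $M$ cosine features:

```python
# Hedged sketch: random Fourier features, a related but simpler construction
# than the paper's integrated variational Fourier features. Approximates the
# unit-lengthscale RBF kernel k(x, y) = exp(-||x - y||^2 / 2).
import numpy as np

rng = np.random.default_rng(0)
d, M = 2, 500
W = rng.standard_normal((d, M))      # frequencies ~ the kernel's spectral density
b = rng.uniform(0.0, 2.0 * np.pi, M) # random phases

def phi(X):
    return np.sqrt(2.0 / M) * np.cos(X @ W + b)

X = rng.standard_normal((5, d))
K_exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
K_approx = phi(X) @ phi(X).T
print("max abs error:", np.abs(K_exact - K_approx).max())
```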
arXiv Detail & Related papers (2023-08-27T15:44:28Z)
- Comparison of Neural FEM and Neural Operator Methods for applications in Solid Mechanics [0.0]
The current work investigates two classes, Neural FEM and Neural Operator Methods, for use in elastostatics by means of numerical experiments.
The main differences between the two classes lie in computational effort and accuracy.
arXiv Detail & Related papers (2023-07-04T06:16:43Z)
- Neural Operator: Is data all you need to model the world? An insight into the impact of Physics Informed Machine Learning [13.050410285352605]
We provide an insight into how data-driven approaches can complement conventional techniques to solve engineering and physics problems.
We highlight a novel and fast machine learning-based approach to learning the solution operator of a PDE, known as operator learning.
arXiv Detail & Related papers (2023-01-30T23:29:33Z)
- Batch-efficient EigenDecomposition for Small and Medium Matrices [65.67315418971688]
EigenDecomposition (ED) is at the heart of many computer vision algorithms and applications.
We propose a QR-based ED method dedicated to the application scenarios of computer vision.
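
A minimal illustration of the batched small-matrix setting the paper targets, using NumPy's stacked eigendecomposition; the paper's QR-based method itself is not reproduced here:

```python
# Hedged sketch: shows the batched small-matrix scenario, not the paper's
# QR-based algorithm. One vectorized eigh call handles the whole batch.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4096, 8, 8))   # batch of 4096 small matrices
A = A + np.transpose(A, (0, 2, 1))      # symmetrize so eigh applies

eigvals, eigvecs = np.linalg.eigh(A)    # stacked decomposition over the batch
print(eigvals.shape, eigvecs.shape)     # (4096, 8), (4096, 8, 8)
```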
arXiv Detail & Related papers (2022-07-09T09:14:12Z)
- Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on the other tasks.
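
A minimal sketch of this kind of coupling (an assumed form in which ridge-style penalties pull each task's weights toward their common mean; the paper's formulation may differ):

```python
# Hedged sketch: alternating minimization of
#   sum_t ||X_t w_t - y_t||^2 + lam * sum_t ||w_t - v||^2
# over the task weights w_t and the shared center v (optimal v = mean of w_t).
import numpy as np

rng = np.random.default_rng(0)
T, n, d, lam = 3, 50, 4, 1.0
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
w_true = rng.standard_normal(d)
ys = [X @ (w_true + 0.1 * rng.standard_normal(d)) for X in Xs]

W = np.zeros((T, d))
for _ in range(100):
    v = W.mean(axis=0)                   # optimal shared center
    for t in range(T):
        X, y = Xs[t], ys[t]
        # Closed-form minimizer of ||X w - y||^2 + lam ||w - v||^2.
        W[t] = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * v)
print(np.round(W, 3))                    # task weights stay near each other
```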
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
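
The heart of such an operator is a spectral convolution: transform to Fourier space, reweight a truncated set of low modes with learned complex coefficients, and transform back. A minimal single-channel NumPy sketch (illustrative only; real FNO layers are multi-channel and trained end to end in a deep learning framework):

```python
# Hedged sketch of the spectral-convolution step in a Fourier-type operator.
import numpy as np

def spectral_conv_1d(u, weights):
    """Reweight the lowest Fourier modes of u by learned complex weights."""
    modes = weights.shape[0]
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:modes] = weights * u_hat[:modes]   # truncate and reweight modes
    return np.fft.irfft(out_hat, n=u.shape[0])

rng = np.random.default_rng(0)
u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
weights = rng.standard_normal(12) + 1j * rng.standard_normal(12)
print(spectral_conv_1d(u, weights).shape)       # (64,)
```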
arXiv Detail & Related papers (2020-10-18T00:34:21Z)
- Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A \leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z)