Computing Anti-Derivatives using Deep Neural Networks
- URL: http://arxiv.org/abs/2209.09084v1
- Date: Mon, 19 Sep 2022 15:16:47 GMT
- Title: Computing Anti-Derivatives using Deep Neural Networks
- Authors: D. Chakraborty and S. Gopalakrishnan
- Abstract summary: This paper presents a novel algorithm to obtain the closed-form anti-derivative of a function using a Deep Neural Network architecture.
We claim that our algorithm, using a single method for all integrals, can approximate anti-derivatives to any required accuracy.
This paper also shows applications of our method to obtaining closed-form expressions of elliptic integrals, Fermi-Dirac integrals, and cumulative distribution functions.
- Score: 3.42658286826597
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a novel algorithm to obtain the closed-form
anti-derivative of a function using a Deep Neural Network architecture. In the
past, mathematicians have developed several numerical techniques to approximate
the values of definite integrals, but primitives or indefinite integrals are
often non-elementary. Anti-derivatives are required when an integrand contains
several parameters and the resulting integral is a function of those
parameters; no theoretical method exists that can do this for any given
function. Existing workarounds are primarily based on either
curve fitting or infinite series approximation of the integrand, which is then
integrated theoretically. Curve fitting approximations are inaccurate for
highly non-linear functions and require a different approach for every problem.
On the other hand, the infinite series approach does not give a closed-form
solution, and its truncated forms are often inaccurate. We claim that our
algorithm, using a single method for all integrals, can approximate
anti-derivatives to any required accuracy. We have used this algorithm to obtain the
anti-derivatives of several functions, including non-elementary and oscillatory
integrals. This paper also shows applications of our method to obtaining
closed-form expressions of elliptic integrals, Fermi-Dirac integrals, and
cumulative distribution functions, and to decreasing the computation time of
the Galerkin method for differential equations.
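To make the idea concrete, here is a minimal sketch of the training loop the abstract implies, assuming (as in physics-informed approaches) that a network F is fitted so that its automatic derivative matches the integrand; the architecture, sampling interval, and optimizer below are illustrative assumptions, not the authors' exact setup.

```python
import torch

# Integrand whose anti-derivative we want; exp(-x^2) has a
# non-elementary primitive (proportional to erf).
def f(x):
    return torch.exp(-x ** 2)

# Small MLP F_theta; once trained, F_theta itself is a closed-form
# expression (a composition of affine maps and tanh).
F = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

opt = torch.optim.Adam(F.parameters(), lr=1e-3)
for step in range(5000):
    x = torch.empty(256, 1).uniform_(-3.0, 3.0).requires_grad_(True)
    # dF/dx via automatic differentiation.
    dFdx = torch.autograd.grad(F(x).sum(), x, create_graph=True)[0]
    # Match the derivative to the integrand; pin F(0) = 0 to fix
    # the constant of integration.
    loss = ((dFdx - f(x)) ** 2).mean() + F(torch.zeros(1, 1)).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# F(b) - F(a) then approximates the definite integral of f over [a, b].
```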
Related papers
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to constructing learnable parametric control variate functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand.
We apply our method to solve partial differential equations using the Walk-on-Spheres algorithm.
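A hedged sketch of the core trick: if a network F is the anti-derivative of a learned control variate g = F', then the integral of g is known exactly by the fundamental theorem of calculus, and Monte Carlo only has to estimate the lower-variance residual f - g. The 1-D setting and function names are assumptions for illustration.

```python
import torch

def cv_estimate(f, F, a=0.0, b=1.0, n=1024):
    """Estimate the integral of f over [a, b] with g = F' as a
    control variate; the integral of g is exact: F(b) - F(a)."""
    x = torch.empty(n, 1).uniform_(a, b).requires_grad_(True)
    g = torch.autograd.grad(F(x).sum(), x)[0]            # g(x) = F'(x)
    exact = (F(torch.tensor([[b]])) - F(torch.tensor([[a]]))).item()
    residual = (b - a) * (f(x) - g).mean().item()        # plain MC on f - g
    return exact + residual
```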
arXiv Detail & Related papers (2024-09-23T06:04:28Z)
- PINNIES: An Efficient Physics-Informed Neural Network Framework to Integral Operator Problems [0.0]
This paper introduces an efficient tensor-vector product technique for the approximation of integral operators within physics-informed deep learning frameworks.
We demonstrate the applicability of this method to both Fredholm and Volterra integral operators.
We also propose a fast matrix-vector product algorithm for efficiently computing the fractional Caputo derivative.
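One way to read the tensor-vector product idea, as a hedged sketch: discretize a Fredholm operator (Ku)(x) = ∫ k(x,t) u(t) dt with a fixed quadrature rule, so that applying the operator to a network's outputs at the nodes becomes a single matrix-vector product inside the training loop. The rule and kernel below are illustrative, not the paper's benchmarks.

```python
import numpy as np

# Gauss-Legendre nodes/weights, mapped from [-1, 1] to [0, 1].
t, w = np.polynomial.legendre.leggauss(32)
t = 0.5 * (t + 1.0)
w = 0.5 * w

x = t                                         # collocate at the nodes
kernel = lambda x, t: np.exp(-np.abs(x - t))  # illustrative kernel
# Assembled once: K[i, j] = k(x_i, t_j) * w_j.
K = kernel(x[:, None], t[None, :]) * w[None, :]

# Applying the integral operator to any vector of function values
# u(t_j) is now one matrix-vector product:
u = np.sin(np.pi * t)
Ku = K @ u      # Ku[i] ≈ ∫₀¹ k(x_i, t) u(t) dt
```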
arXiv Detail & Related papers (2024-09-03T13:43:58Z)
- Fixed Integral Neural Networks [2.2118683064997273]
We present a method for representing the analytical integral of a learned function $f$.
This allows the exact integral of a neural network to be computed, and enables constrained neural networks to be parametrised.
We also introduce a method to constrain $f$ to be positive, a necessary condition for many applications.
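A minimal 1-D sketch of the representation, under the assumption that it reduces to the fundamental theorem of calculus: learn an anti-derivative network F, define f = F' by automatic differentiation, and read off exact integrals of f as differences of F. The positivity construction noted in the comment is one possibility, not necessarily the paper's.

```python
import torch

F = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def f(x):
    # The learned function is defined as f = dF/dx, so its integral
    # over [a, b] is exactly F(b) - F(a).
    x = x.requires_grad_(True)
    return torch.autograd.grad(F(x).sum(), x, create_graph=True)[0]

def exact_integral(a, b):
    return (F(torch.tensor([[b]])) - F(torch.tensor([[a]]))).item()

# To constrain f >= 0 it suffices to make F monotone increasing,
# e.g. non-negative weights with a monotone activation (one possible
# construction).
```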
arXiv Detail & Related papers (2023-07-26T18:16:43Z)
- A novel way of calculating scattering integrals [0.0]
The technique, coined NDIM (Negative Dimensional Integration Method) by its discoverers, relies on a three-pronged basis: Gaussian integration, series expansion, and analytic continuation.
We show how this technique can be applied to tackle certain improper integrals and give an example of a particular improper integral that appears in a quantum mechanical scattering process.
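For orientation, a hedged sketch of the Gaussian seed on which NDIM rests (drawn from the standard description of the method): the $D$-dimensional Gaussian integral is known in closed form, and expanding the exponential as a series generates moment integrals that are then continued analytically in $D$.

```latex
\int \mathrm{d}^{D}q \, e^{-\alpha q^{2}} = \left(\frac{\pi}{\alpha}\right)^{D/2},
\qquad
e^{-\alpha q^{2}} = \sum_{n=0}^{\infty} \frac{(-\alpha)^{n}}{n!}\,(q^{2})^{n}.
```

Matching powers of $\alpha$ on both sides assigns values to the moment integrals $\int \mathrm{d}^{D}q\,(q^{2})^{n}$, which NDIM then interprets through analytic continuation in $D$, including negative $D$.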
arXiv Detail & Related papers (2023-01-10T17:14:34Z)
- Chaotic Hedging with Iterated Integrals and Neural Networks [3.3379026542599934]
We show that every $p$-integrable functional of the semimartingale, for $p \in [1,\infty)$, can be represented as a sum of iterated integrals thereof.
We also show that every financial derivative can be approximated arbitrarily well in the $L^p$-sense.
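For the special case of Brownian motion, the representation reduces to the classical Wiener-Itô chaos decomposition, sketched here for orientation; the paper's semimartingale setting is more general, so treat this as background rather than the paper's statement.

```latex
F = \mathbb{E}[F] + \sum_{n=1}^{\infty} I_{n}(f_{n}),
\qquad
I_{n}(f_{n}) = n! \int_{0}^{T}\!\int_{0}^{t_{n}}\!\cdots\int_{0}^{t_{2}}
  f_{n}(t_{1},\dots,t_{n})\, \mathrm{d}W_{t_{1}}\cdots\, \mathrm{d}W_{t_{n}},
```

with deterministic symmetric kernels $f_{n} \in L^{2}([0,T]^{n})$.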
arXiv Detail & Related papers (2022-09-21T07:57:07Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- Automatic differentiation for Riemannian optimization on low-rank matrix and tensor-train manifolds [71.94111815357064]
In scientific computing and machine learning applications, matrices and more general multidimensional arrays (tensors) can often be approximated with the help of low-rank decompositions.
One of the popular tools for finding low-rank approximations is Riemannian optimization.
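A hedged numpy sketch of one Riemannian gradient step on the fixed-rank matrix manifold, the kind of operation this line of work automates: project the Euclidean gradient onto the tangent space at X = U diag(S) Vᵀ and retract by truncated SVD. Step size and rank handling are illustrative.

```python
import numpy as np

def riemannian_step(U, S, Vt, euclid_grad, lr=0.1):
    """One Riemannian gradient-descent step on rank-r matrices,
    with X = U @ diag(S) @ Vt and U, V orthonormal."""
    r = S.shape[0]
    X = (U * S) @ Vt
    G = euclid_grad(X)
    # Tangent-space projection: P(G) = U Uᵀ G + G V Vᵀ - U Uᵀ G V Vᵀ.
    UtG = U.T @ G
    GV = G @ Vt.T
    PG = U @ UtG + GV @ Vt - U @ (UtG @ Vt.T) @ Vt
    # Retraction: truncate the SVD of the Euclidean step back to rank r.
    U2, S2, Vt2 = np.linalg.svd(X - lr * PG, full_matrices=False)
    return U2[:, :r], S2[:r], Vt2[:r, :]
```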
arXiv Detail & Related papers (2021-03-27T19:56:00Z)
- Matérn Gaussian processes on Riemannian manifolds [81.15349473870816]
We show how to generalize the widely-used Matérn class of Gaussian processes to Riemannian manifolds.
We also extend the generalization from the Matérn to the widely-used squared exponential process.
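One common construction in this line of work, sketched here with the normalisation deliberately left loose (treat it as an assumption): replace the Euclidean spectral density with the eigenvalues $\lambda_n$ and eigenfunctions $f_n$ of the Laplace-Beltrami operator on the manifold.

```latex
k_{\nu}(x, x') \;\propto\; \sum_{n=0}^{\infty}
  \Big(\tfrac{2\nu}{\kappa^{2}} + \lambda_{n}\Big)^{-\nu - d/2}
  f_{n}(x)\, f_{n}(x'),
```

with a squared-exponential-type kernel arising in the $\nu \to \infty$ limit as a heat-kernel sum $\sum_n e^{-\kappa^{2}\lambda_{n}/2} f_{n}(x) f_{n}(x')$ (again up to normalisation).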
arXiv Detail & Related papers (2020-06-17T21:05:42Z)
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
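For a concrete instance of the kind of discretization discussed, a hedged sketch: the dissipative Hamiltonian system dq/dt = p, dp/dt = -∇U(q) - γp underlies momentum methods, and a conformal splitting step (exact momentum decay followed by a symplectic Euler substep) yields a heavy-ball-style update. Constants and splitting order are illustrative assumptions.

```python
import numpy as np

def conformal_symplectic_step(q, p, grad_U, h=0.1, gamma=1.0):
    """One step for dq/dt = p, dp/dt = -grad U(q) - gamma * p."""
    p = np.exp(-gamma * h) * p        # dissipative part, solved exactly
    p = p - h * grad_U(q)             # kick (symplectic Euler)
    q = q + h * p                     # drift
    return q, p

# With mu = exp(-gamma * h) this matches the momentum update
# p_{k+1} = mu * p_k - h * grad_U(q_k), q_{k+1} = q_k + h * p_{k+1}.
```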
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
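A hedged 1-D sketch of quadrature Fourier features, the flavour of deterministic expansion this approach builds on: Gauss-Hermite nodes replace the random frequencies of random Fourier features, so the kernel approximation error can decay far faster than with Monte Carlo sampling. The exact feature construction in the paper may differ.

```python
import numpy as np

def qff_features(x, m=16):
    """Deterministic features for k(x, y) = exp(-(x - y)^2 / 2).
    Since k(x, y) = E_{w ~ N(0,1)}[cos(w (x - y))], Gauss-Hermite
    quadrature gives frequencies w_i = sqrt(2) t_i with weights a_i."""
    t, a = np.polynomial.hermite.hermgauss(m)
    w = np.sqrt(2.0) * t
    a = a / np.sqrt(np.pi)            # normalise the Gaussian weight
    # cos(w(x - y)) = cos(wx)cos(wy) + sin(wx)sin(wy)  ->  feature map
    return np.concatenate([
        np.sqrt(a) * np.cos(np.outer(x, w)),
        np.sqrt(a) * np.sin(np.outer(x, w)),
    ], axis=1)

x = np.linspace(-2.0, 2.0, 5)
K_approx = qff_features(x) @ qff_features(x).T
K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # nearly equal
```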
arXiv Detail & Related papers (2020-03-05T14:33:20Z)