Automatic Differentiation in ROOT
- URL: http://arxiv.org/abs/2004.04435v1
- Date: Thu, 9 Apr 2020 09:18:50 GMT
- Title: Automatic Differentiation in ROOT
- Authors: Vassil Vassilev (1), Aleksandr Efremov (1), and Oksana Shadura (2) ((1) Princeton University, (2) University of Nebraska Lincoln)
- Abstract summary: In mathematics and computer algebra, automatic differentiation (AD) is a set of techniques to evaluate the derivative of a function specified by a computer program.
This paper presents AD techniques available in ROOT, supported by Cling, to produce derivatives of arbitrary C/C++ functions.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In mathematics and computer algebra, automatic differentiation (AD) is a set
of techniques to evaluate the derivative of a function specified by a computer
program. AD exploits the fact that every computer program, no matter how
complicated, executes a sequence of elementary arithmetic operations (addition,
subtraction, multiplication, division, etc.), elementary functions (exp, log,
sin, cos, etc.) and control flow statements. AD takes source code of a function
as input and produces source code of the derived function. By applying the
chain rule repeatedly to these operations, derivatives of arbitrary order can
be computed automatically, accurately to working precision, and using at most a
small constant factor more arithmetic operations than the original program.
This paper presents AD techniques available in ROOT, supported by Cling, to
produce derivatives of arbitrary C/C++ functions through implementing source
code transformation and employing the chain rule of differential calculus in
both forward mode and reverse mode. We explain its current integration for
gradient computation in TFormula. We demonstrate the correctness and
performance improvements in ROOT's fitting algorithms.
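The AD engine the paper describes is the clad plugin, which runs inside Cling and transforms the source of a C/C++ function into the source of its derivative. As a rough illustration of the two modes, the hedged sketch below uses clad's entry points clad::differentiate (forward mode, derivative with respect to one chosen input) and clad::gradient (reverse mode, all inputs in a single pass); the exact execute() signature of the generated gradient has varied across clad releases, so treat that detail as an assumption.

```cpp
// Hedged sketch: forward- and reverse-mode AD with clad inside Cling/ROOT.
// Assumes the clad plugin is loaded; the gradient execute() signature
// (single result array vs. one adjoint pointer per parameter) has
// changed across clad releases.
#include "clad/Differentiator/Differentiator.h"
#include <cmath>
#include <cstdio>

double f(double x, double y) { return x * x + y * std::sin(x); }

int main() {
  // Forward mode: derivative of f with respect to x only.
  auto df_dx = clad::differentiate(f, "x");   // df/dx = 2x + y*cos(x)

  // Reverse mode: one backward pass yields the full gradient (df/dx, df/dy).
  auto grad_f = clad::gradient(f);

  double dx = 0.0, dy = 0.0;                  // adjoint accumulators
  grad_f.execute(3.0, 4.0, &dx, &dy);
  std::printf("forward df/dx = %f\n", df_dx.execute(3.0, 4.0));
  std::printf("reverse grad  = (%f, %f)\n", dx, dy);
  return 0;
}
```

Within ROOT itself, this machinery backs parameter gradients for TFormula (e.g. TFormula::GradientPar), which the fitting code can use in place of numerical differentiation.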
Related papers
- CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra [62.37017125812101]
We propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA.
By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory and runtime efficient numerical algorithms.
We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning. (A minimal sketch of the operator-composition idea follows this entry.)
arXiv Detail & Related papers (2023-09-06T14:59:38Z)
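CoLA itself is a Python library; purely to illustrate the dispatch idea in the summary above (a hedged sketch, with class names and design that are assumptions rather than CoLA's API), a composite operator can route its matrix-vector product to the products of its parts instead of materializing a dense matrix:

```cpp
// Hedged C++ sketch of compositional dispatch over a linear operator
// abstraction; illustrative only, not CoLA's actual API.
#include <cstdio>
#include <utility>
#include <vector>

struct LinOp {  // abstract operator: algorithms only need matvec
  virtual ~LinOp() = default;
  virtual std::vector<double> matvec(const std::vector<double>& x) const = 0;
};

struct Diagonal : LinOp {  // structured operator: O(n) storage and matvec
  std::vector<double> d;
  explicit Diagonal(std::vector<double> diag) : d(std::move(diag)) {}
  std::vector<double> matvec(const std::vector<double>& x) const override {
    std::vector<double> y(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) y[i] = d[i] * x[i];
    return y;
  }
};

struct Sum : LinOp {  // dispatch rule: (A + B)x = Ax + Bx, no dense matrix
  const LinOp& a;
  const LinOp& b;
  Sum(const LinOp& a_, const LinOp& b_) : a(a_), b(b_) {}
  std::vector<double> matvec(const std::vector<double>& x) const override {
    std::vector<double> ya = a.matvec(x), yb = b.matvec(x);
    for (std::size_t i = 0; i < ya.size(); ++i) ya[i] += yb[i];
    return ya;
  }
};

int main() {
  Diagonal A({1.0, 2.0}), B({3.0, 4.0});
  Sum S(A, B);                         // composite stays memory-efficient
  std::vector<double> y = S.matvec({1.0, 1.0});
  std::printf("(A+B)x = (%f, %f)\n", y[0], y[1]);
}
```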
- A Constrained BA Algorithm for Rate-Distortion and Distortion-Rate Functions [13.570794979535934]
A modification of the Blahut-Arimoto (BA) algorithm for computing rate-distortion functions is proposed.
The modified algorithm directly computes the RD function for a given target distortion.
arXiv Detail & Related papers (2023-05-04T08:41:03Z)
- Efficient and Sound Differentiable Programming in a Functional Array-Processing Language [4.1779847272994495]
Automatic differentiation (AD) is a technique for computing the derivative of a function represented by a program.
We present an AD system for a higher-order functional array-processing language.
With these techniques combined, forward-mode AD can be as efficient as reverse mode.
arXiv Detail & Related papers (2022-12-20T14:54:47Z)
- Neural Network Verification as Piecewise Linear Optimization: Formulations for the Composition of Staircase Functions [2.088583843514496]
We present a technique for neural network verification using mixed-integer programming (MIP) formulations.
We derive a strong formulation for each neuron in a network using piecewise linear activation functions.
We also derive a separation procedure that runs in super-linear time in the input dimension.
arXiv Detail & Related papers (2022-11-27T03:25:48Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Bias-Scalable Near-Memory CMOS Analog Processor for Machine Learning [6.548257506132353]
Bias-scalable approximate analog computing is attractive for implementing machine learning (ML) processors with distinct power-performance specifications.
We demonstrate the implementation of bias-scalable approximate analog computing circuits using the generalization of the margin-propagation principle.
arXiv Detail & Related papers (2022-02-10T13:26:00Z)
- Automatic differentiation for Riemannian optimization on low-rank matrix and tensor-train manifolds [71.94111815357064]
In scientific computing and machine learning applications, matrices and more general multidimensional arrays (tensors) can often be approximated with the help of low-rank decompositions.
One popular tool for finding such low-rank approximations is Riemannian optimization.
arXiv Detail & Related papers (2021-03-27T19:56:00Z)
- Efficient Learning of Generative Models via Finite-Difference Score Matching [111.55998083406134]
We present a generic strategy to efficiently approximate any-order directional derivatives with finite differences.
Our approximation involves only function evaluations, which can be executed in parallel, and requires no gradient computations (a first-order sketch follows this entry).
arXiv Detail & Related papers (2020-07-07T10:05:01Z)
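The core trick in the summary above, derivatives from function evaluations alone, can be illustrated with an ordinary central difference. This is a hedged first-order sketch; the names and step size are illustrative assumptions, and the paper's estimator generalizes to any-order directional derivatives:

```cpp
// Hedged sketch of a first-order directional derivative from function
// evaluations only, via a central difference:
//   D_v f(x) ~ (f(x + h v) - f(x - h v)) / (2 h)
#include <cmath>
#include <cstdio>
#include <vector>

double f(const std::vector<double>& x) {  // toy test function
  return x[0] * x[0] + std::sin(x[1]);
}

double directional_derivative(double (*fn)(const std::vector<double>&),
                              const std::vector<double>& x,
                              const std::vector<double>& v,
                              double h = 1e-5) {
  std::vector<double> xp = x, xm = x;
  for (std::size_t i = 0; i < x.size(); ++i) {
    xp[i] += h * v[i];
    xm[i] -= h * v[i];
  }
  // Two function evaluations, independently parallelizable; no gradients.
  return (fn(xp) - fn(xm)) / (2.0 * h);
}

int main() {
  std::vector<double> x{3.0, 0.5}, v{1.0, 0.0};
  // Exact value along e1 is 2 * x[0] = 6.
  std::printf("approx D_v f = %f\n", directional_derivative(f, x, v));
}
```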
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.