Q-NET: A Network for Low-Dimensional Integrals of Neural Proxies
- URL: http://arxiv.org/abs/2006.14396v2
- Date: Tue, 30 Mar 2021 10:53:40 GMT
- Title: Q-NET: A Network for Low-Dimensional Integrals of Neural Proxies
- Authors: Kartic Subr
- Abstract summary: We propose a versatile yet simple class of artificial neural networks -- sigmoidal universal approximators -- as a proxy for functions whose integrals need to be estimated.
We design a family of fixed networks, which we call Q-NETs, that operate on parameters of a trained proxy to calculate exact integrals.
We highlight the benefits of this scheme for a few applications such as inverse rendering, generation of procedural noise, visualization and simulation.
- Score: 1.63460693863947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many applications require the calculation of integrals of multidimensional
functions. A general and popular procedure is to estimate integrals by
averaging multiple evaluations of the function. Often, each evaluation of the
function entails costly computations. The use of a \emph{proxy} or surrogate
for the true function is useful if repeated evaluations are necessary. The
proxy is even more useful if its integral is known analytically and can be
calculated practically. We propose the use of a versatile yet simple class of
artificial neural networks -- sigmoidal universal approximators -- as a proxy
for functions whose integrals need to be estimated. We design a family of fixed
networks, which we call Q-NETs, that operate on parameters of a trained proxy
to calculate exact integrals over \emph{any subset of dimensions} of the input
domain. We identify transformations to the input space for which integrals may
be recalculated without resampling the integrand or retraining the proxy. We
highlight the benefits of this scheme for a few applications such as inverse
rendering, generation of procedural noise, visualization and simulation. The
proposed proxy is appealing in the following contexts: the dimensionality is
low ($<10$D); the estimation of integrals needs to be decoupled from the
sampling strategy; sparse, adaptive sampling is used; marginal functions need
to be known in functional form; or when powerful Single Instruction Multiple
Data/Thread (SIMD/SIMT) pipelines are available for computation.
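To make the abstract's central idea concrete, here is a minimal sketch (an illustration written for this summary, not the Q-NET construction from the paper): a one-hidden-layer logistic-sigmoid proxy can be integrated exactly over the unit square, because repeated antiderivatives of the sigmoid are polylogarithms (softplus, then the dilogarithm), so the integral reduces to an inclusion-exclusion over the corners of the domain. The toy network, its size, and the helper names below are all hypothetical.
```python
# Hypothetical sketch (not the paper's Q-NET construction): closed-form integration of a
# one-hidden-layer logistic-sigmoid network over [0,1]^2. Assumes every hidden weight
# component is nonzero (otherwise the 1D formula applies along that axis).
import numpy as np
from scipy.special import spence  # spence(z) = Li_2(1 - z), so Li_2(-e^t) = spence(1 + e^t)

def S2(t):
    """Second antiderivative of the sigmoid: -Li_2(-e^t). May overflow for very large t."""
    return -spence(1.0 + np.exp(t))

def integral_unit_square(W, b, v, c):
    """Integral of f(x, y) = sum_j v[j]*sigmoid(W[j,0]*x + W[j,1]*y + b[j]) + c over [0,1]^2."""
    total = c
    for (w1, w2), bj, vj in zip(W, b, v):
        # Inclusion-exclusion over the four corners of the unit square.
        corners = S2(w1 + w2 + bj) - S2(w1 + bj) - S2(w2 + bj) + S2(bj)
        total += vj * corners / (w1 * w2)
    return total

# Sanity check against a dense midpoint rule on a random toy network.
rng = np.random.default_rng(0)
W, b, v, c = rng.normal(size=(4, 2)), rng.normal(size=4), rng.normal(size=4), 0.3

def f(x, y):
    return sum(vj / (1.0 + np.exp(-(w1 * x + w2 * y + bj)))
               for (w1, w2), bj, vj in zip(W, b, v)) + c

xs = (np.arange(200) + 0.5) / 200.0
X, Y = np.meshgrid(xs, xs)
print(integral_unit_square(W, b, v, c), f(X, Y).mean())  # the two values should agree closely
```
The paper's Q-NETs go further, operating on the trained proxy's parameters to produce exact integrals and marginals over any subset of input dimensions; the snippet is only meant to make the two-dimensional case concrete.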
Related papers
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to construct learnable parametric control variate functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand (this shared anti-derivative idea is sketched after this list).
We apply our method to solve partial differential equations using the walk-on-spheres algorithm.
arXiv Detail & Related papers (2024-09-23T06:04:28Z)
- Fixed Integral Neural Networks [2.2118683064997273]
We present a method for representing the analytical integral of a learned function $f$.
This allows the exact integral of a neural network to be computed, and enables constrained neural networks to be parametrised.
We also introduce a method to constrain $f$ to be positive, a necessary condition for many applications.
arXiv Detail & Related papers (2023-07-26T18:16:43Z)
- Efficient Parametric Approximations of Neural Network Function Space Distance [6.117371161379209]
It is often useful to compactly summarize important properties of model parameters and training data so that they can be used later without storing and/or iterating over the entire dataset.
We consider estimating the Function Space Distance (FSD) over a training set, i.e. the average discrepancy between the outputs of two neural networks.
We propose the Linearized Activation Function TRick (LAFTR) and derive an efficient approximation to FSD for ReLU neural networks.
arXiv Detail & Related papers (2023-02-07T15:09:23Z)
- Computing Anti-Derivatives using Deep Neural Networks [3.42658286826597]
This paper presents a novel algorithm to obtain the closed-form anti-derivative of a function using Deep Neural Network architecture.
We claim that, by using a single method for all integrals, our algorithm can approximate anti-derivatives to any required accuracy.
This paper also shows the applications of our method to get the closed-form expressions of elliptic integrals, Fermi-Dirac integrals, and cumulative distribution functions.
arXiv Detail & Related papers (2022-09-19T15:16:47Z)
- Provable General Function Class Representation Learning in Multitask Bandits and MDPs [58.624124220900306]
Multitask representation learning is a popular approach in reinforcement learning to boost sample efficiency.
In this work, we extend the analysis to general function class representations.
We theoretically validate the benefit of multitask representation learning within a general function class for bandits and linear MDPs.
arXiv Detail & Related papers (2022-05-31T11:36:42Z)
- Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers [50.85524803885483]
This work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same fast semidefinite solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- AutoInt: Automatic Integration for Fast Neural Volume Rendering [51.46232518888791]
We propose a new framework for learning efficient, closed-form solutions to integrals using implicit neural representation networks.
We demonstrate a greater than 10x reduction in computation, enabling fast neural volume rendering.
arXiv Detail & Related papers (2020-12-03T05:46:10Z)
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions at each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
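Several of the related papers above (the control-variates entry, the anti-derivatives entry, and AutoInt) rely on the same basic trick: fit a network G so that its automatic derivative matches the integrand f, then read the definite integral off as G(1) - G(0). The sketch below is a hedged, self-contained illustration of that idea, not a reproduction of any of those papers' implementations; the integrand, network size, and training loop are placeholder choices.
```python
# Hypothetical sketch of the "anti-derivative network" trick: train G so that dG/dt matches f,
# then the definite integral of the fitted proxy is simply G(1) - G(0).
import jax
import jax.numpy as jnp

def f(t):
    # Toy integrand; its true integral over [0, 1] is 1 - 1/e, roughly 0.632.
    return jnp.exp(-t)

def G(params, t):
    # Scalar "anti-derivative" network: one tanh hidden layer plus a linear term.
    w, b, v, c = params
    return jnp.tanh(w * t + b) @ v + c * t

dG = jax.grad(G, argnums=1)  # automatic differentiation gives the model of the integrand

def loss(params, ts):
    preds = jax.vmap(lambda t: dG(params, t))(ts)
    return jnp.mean((preds - f(ts)) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (jax.random.normal(k1, (16,)),        # hidden weights w
          jax.random.normal(k2, (16,)),        # hidden biases b
          0.1 * jax.random.normal(k3, (16,)),  # output weights v
          jnp.array(0.0))                      # linear term c

ts = jnp.linspace(0.0, 1.0, 128)
grad_loss = jax.jit(jax.grad(loss))
for _ in range(2000):  # plain gradient descent; enough for a rough toy fit
    g = grad_loss(params, ts)
    params = tuple(p - 1e-2 * gp for p, gp in zip(params, g))

estimate = G(params, 1.0) - G(params, 0.0)  # exact integral of the fitted proxy over [0, 1]
print(estimate)  # compare against the true value 1 - 1/e, roughly 0.632
```
Once G is fitted, the same parameters also give the integral over any sub-interval [a, b] as G(b) - G(a), which is what makes the trick attractive for volume rendering and control variates.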