Fixed Integral Neural Networks
- URL: http://arxiv.org/abs/2307.14439v4
- Date: Sun, 24 Dec 2023 02:49:37 GMT
- Title: Fixed Integral Neural Networks
- Authors: Ryan Kortvelesy
- Abstract summary: We present a method for representing the analytical integral of a learned function $f$.
This allows the exact integral of a neural network to be computed, and enables constrained neural networks to be parametrised.
We also introduce a method to constrain $f$ to be positive, a necessary condition for many applications.
- Score: 2.2118683064997273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is often useful to perform integration over learned functions represented
by neural networks. However, this integration is usually performed numerically,
as analytical integration over learned functions (especially neural networks)
is generally viewed as intractable. In this work, we present a method for
representing the analytical integral of a learned function $f$. This allows the
exact integral of a neural network to be computed, and enables constrained
neural networks to be parametrised by applying constraints directly to the
integral. Crucially, we also introduce a method to constrain $f$ to be
positive, a necessary condition for many applications (e.g. probability
distributions, distance metrics, etc.). Finally, we introduce several
applications where our fixed-integral neural network (FINN) can be utilised.
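One way to picture the core idea (a minimal sketch under assumptions, not necessarily FINN's actual construction): parametrise the antiderivative $F$ with a network, recover $f = dF/dx$ by automatic differentiation, and enforce $f \ge 0$ by making $F$ monotonically increasing, e.g. via non-negative weights and a monotone activation.

```python
# Hedged sketch: parametrise an antiderivative F(x) with a small network and
# recover f(x) = dF/dx by autodiff; constraining f >= 0 then amounts to making
# F monotone (non-negative weights + increasing activation). Illustrative only;
# FINN's actual construction may differ.
import torch
import torch.nn as nn

class MonotoneIntegralNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(hidden, 1) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(1, hidden) * 0.1)

    def forward(self, x):                          # F(x), non-decreasing in x
        h = torch.nn.functional.softplus(x @ self.w1.abs().T + self.b1)
        return h @ self.w2.abs().T                 # non-negative weights => F' >= 0

    def integrand(self, x):                        # f(x) = dF/dx >= 0
        x = x.requires_grad_(True)
        F = self.forward(x).sum()
        return torch.autograd.grad(F, x, create_graph=True)[0]

net = MonotoneIntegralNet()
a, b = torch.zeros(1, 1), torch.ones(1, 1)
exact_integral = net(b) - net(a)                   # exact integral of f over [0, 1]
```

Because $f$ is defined as the derivative of $F$, the definite integral $F(b) - F(a)$ is exact by construction, with no quadrature error.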
Related papers
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to construct learnable parametric control variate functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand.
We apply our method to solve partial differential equations using the walk-on-spheres algorithm.
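A hedged 1D sketch of the antiderivative idea (function names and the interval setting are illustrative, not the paper's implementation): if a network $G$ approximates the antiderivative of the integrand $h$, then $g = dG/dx$ is a control variate whose integral $G(b) - G(a)$ is known in closed form, and Monte Carlo only has to estimate the residual $h - g$.

```python
# Hedged 1D sketch of a control variate built from an antiderivative network G:
# g = dG/dx has the known integral G(b) - G(a), so Monte Carlo only needs to
# estimate the residual h - g. Illustrative only; the paper targets general
# architectures and PDE settings such as walk-on-spheres.
import torch

def integral_with_control_variate(h, G, a, b, n_samples=1024):
    x = torch.rand(n_samples, 1) * (b - a) + a            # uniform samples on [a, b]
    x.requires_grad_(True)
    g = torch.autograd.grad(G(x).sum(), x)[0]             # control variate g = dG/dx
    residual = (h(x) - g).mean() * (b - a)                # MC estimate of int (h - g)
    known = G(torch.tensor([[float(b)]])) - G(torch.tensor([[float(a)]]))
    return (known + residual).item()

G = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
estimate = integral_with_control_variate(torch.sin, G, 0.0, 3.14159)  # unbiased estimate of ~2
```

The estimate is unbiased for any $G$; training $G$ so that $dG/dx \approx h$ is what drives the variance down.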
arXiv Detail & Related papers (2024-09-23T06:04:28Z) - Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows for both easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
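A minimal sketch of a sinusoidal layer with an explicit frequency scale (a SIREN-style layer; the name omega_0 and the initialisation are assumptions, not necessarily the paper's simplified parametrisation), showing how one scalar plays the role of an adjustable kernel bandwidth.

```python
# Hedged sketch: a sinusoidal layer whose frequency scale omega_0 behaves like a
# tunable bandwidth -- larger omega_0 lets the network represent higher frequencies.
# The parameter name and initialisation are illustrative assumptions.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_dim, out_dim, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0                      # acts like the kernel bandwidth
        self.linear = nn.Linear(in_dim, out_dim)
        nn.init.uniform_(self.linear.weight, -1.0 / in_dim, 1.0 / in_dim)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

model = nn.Sequential(SineLayer(1, 64, omega_0=30.0), nn.Linear(64, 1))
```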
arXiv Detail & Related papers (2022-11-26T07:41:48Z) - NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalise to both supervised and unsupervised learning problems.
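For contrast with the parametric approach, a brief sketch of the classical Nyström eigenfunction approximation mentioned above; the RBF kernel and sample sizes are illustrative, and the cubic cost in the number of samples is the scalability issue the neural approach avoids.

```python
# Hedged sketch of the classical Nystrom out-of-sample eigenfunction estimate;
# the O(n^3) eigendecomposition over n samples is the scalability bottleneck that
# parametric (neural) eigenfunction approximation avoids. Kernel is illustrative.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_eigenfunctions(X, k=3, gamma=1.0):
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    lam, U = np.linalg.eigh(K)                    # ascending eigenvalues
    lam, U = lam[::-1][:k], U[:, ::-1][:, :k]     # keep the top-k pairs
    def phi(x_new):                               # eigenfunction values at new points
        return rbf_kernel(x_new, X, gamma) @ U * np.sqrt(n) / lam
    return lam / n, phi

X = np.random.randn(200, 2)
eigvals, phi = nystrom_eigenfunctions(X)
values = phi(np.random.randn(10, 2))              # shape (10, 3)
```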
arXiv Detail & Related papers (2022-04-30T05:31:07Z) - Reachability analysis of neural networks using mixed monotonicity [0.0]
We present a new reachability analysis tool to compute an interval over-approximation of the output set of a feedforward neural network under given input uncertainty.
The proposed approach adapts to neural networks an existing mixed-monotonicity method for the reachability analysis of dynamical systems.
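For intuition, a hedged sketch of naive interval bound propagation through affine-plus-ReLU layers, which produces an interval over-approximation of the output set; it is not the paper's mixed-monotonicity method, which is generally tighter and also covers dynamical systems.

```python
# Hedged sketch: naive interval propagation through a ReLU network gives an
# interval over-approximation of the output set under box input uncertainty.
# This is NOT the paper's mixed-monotonicity method, only a simpler baseline.
import numpy as np

def interval_forward(layers, x_lo, x_hi):
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo = Wp @ x_lo + Wn @ x_hi + b            # lower bound on pre-activation
        hi = Wp @ x_hi + Wn @ x_lo + b            # upper bound on pre-activation
        if i < len(layers) - 1:                   # ReLU (monotone) on hidden layers
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
        x_lo, x_hi = lo, hi
    return x_lo, x_hi

layers = [(np.random.randn(8, 2), np.zeros(8)), (np.random.randn(1, 8), np.zeros(1))]
out_lo, out_hi = interval_forward(layers, np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
```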
arXiv Detail & Related papers (2021-11-15T11:35:18Z) - Optimal Approximation with Sparse Neural Networks and Applications [0.0]
We use deep sparsely connected neural networks to measure the complexity of a function class in $L^2(\mathbb{R}^d)$.
We also introduce a representation system, a countable collection of functions used to guide neural networks.
We then analyse the complexity of a class called $\beta$ cartoon-like functions using rate-distortion theory and the wedgelets construction.
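For reference, a hedged statement of the standard optimal approximation-rate notion this line of work builds on; the notation is generic and may not match the paper's exactly.

```latex
% Hedged, generic definition of the optimal approximation rate of a function
% class C in L^2(R^d) by networks with at most M nonzero weights; notation is
% illustrative and may differ from the paper's.
\[
  \gamma^*(\mathcal{C}) \;=\; \sup\Bigl\{\gamma > 0 \;:\;
    \sup_{f \in \mathcal{C}}\,
    \inf_{\substack{\Phi \text{ a network with}\\ \text{at most } M \text{ nonzero weights}}}
    \|f - \Phi\|_{L^2(\mathbb{R}^d)} = \mathcal{O}\bigl(M^{-\gamma}\bigr),\ M \to \infty\Bigr\}.
\]
```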
arXiv Detail & Related papers (2021-08-14T05:14:13Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can make two classes of data linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
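A small empirical sketch of the statement (the data, width, and separability check are illustrative choices): random ReLU features with standard Gaussian weights and uniform biases tend to make two non-linearly-separable classes linearly separable once the hidden layer is wide enough.

```python
# Hedged empirical sketch: random ReLU features (Gaussian weights, uniform biases)
# applied to two concentric rings; an exact sign fit on the features certifies
# linear separability. Sizes and the check are illustrative, not the paper's proof.
import numpy as np

rng = np.random.default_rng(0)
n = 200
r = np.concatenate([rng.uniform(0.0, 0.5, n), rng.uniform(1.0, 1.5, n)])
theta = rng.uniform(0, 2 * np.pi, 2 * n)
X = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)   # two rings, not linearly separable
y = np.concatenate([-np.ones(n), np.ones(n)])

width = 2000
W = rng.normal(size=(2, width))                # standard Gaussian weights
b = rng.uniform(-2, 2, size=width)             # uniformly distributed biases
H = np.maximum(X @ W + b, 0)                   # random two-layer ReLU features

w, *_ = np.linalg.lstsq(H, y, rcond=None)      # linear readout on the features
print("linearly separable in feature space:", bool(np.all(np.sign(H @ w) == y)))
```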
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - AutoInt: Automatic Integration for Fast Neural Volume Rendering [51.46232518888791]
We propose a new framework for learning efficient, closed-form solutions to integrals using implicit neural representation networks.
We demonstrate a greater than 10x improvement in computation requirements, enabling fast neural volume rendering.
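A hedged toy 1D version of the train-the-derivative pattern (the integrand and sizes are made up, and this is not the paper's volume-rendering pipeline): fit the derivative of an "integral network" to the integrand, then evaluate definite integrals as a difference of two network calls.

```python
# Hedged toy sketch: train dPhi/dt to match an integrand, then read off any
# definite integral as Phi(b) - Phi(a). The integrand and sizes are illustrative;
# the paper applies the same pattern to volume-rendering integrals along rays.
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))     # integral network
opt = torch.optim.Adam(phi.parameters(), lr=1e-3)
target = lambda t: torch.exp(-t) * torch.sin(4 * t)                    # toy integrand

for step in range(2000):
    t = torch.rand(256, 1, requires_grad=True)
    dphi = torch.autograd.grad(phi(t).sum(), t, create_graph=True)[0]  # derivative network
    loss = ((dphi - target(t)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

a, b = torch.zeros(1, 1), torch.ones(1, 1)
print("closed-form evaluation of the integral over [0, 1]:", (phi(b) - phi(a)).item())
```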
arXiv Detail & Related papers (2020-12-03T05:46:10Z) - UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
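A hedged miniature of the parameterisation (the GRU cell, softplus basis, sizes, and names are illustrative assumptions): after each event, a recurrent state emits the parameters of a sum of basis functions that defines the intensity until the next event.

```python
# Hedged miniature: a recurrent state emits parameters (a_k, b_k) of a sum of
# softplus basis functions that defines the intensity after the latest event.
# The cell type, basis, sizes, and names are illustrative, not the paper's exact model.
import torch
import torch.nn as nn

class TinyUNIPoint(nn.Module):
    def __init__(self, hidden=32, n_basis=8):
        super().__init__()
        self.rnn = nn.GRUCell(1, hidden)               # consumes inter-event times
        self.to_params = nn.Linear(hidden, 2 * n_basis)

    def step(self, h, inter_event_time):               # update the state at an event
        return self.rnn(inter_event_time, h)

    def intensity(self, h, dt):                        # lambda(t_last + dt) given state h
        a, b = self.to_params(h).chunk(2, dim=-1)
        basis = torch.nn.functional.softplus(a * dt + b)
        return basis.sum(-1, keepdim=True)             # sum of basis functions, positive

model = TinyUNIPoint()
h = torch.zeros(1, 32)
lam = model.intensity(h, torch.tensor([[0.3]]))        # intensity 0.3 after the last event
```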
arXiv Detail & Related papers (2020-07-28T09:31:56Z) - Q-NET: A Network for Low-Dimensional Integrals of Neural Proxies [1.63460693863947]
We propose a versatile yet simple class of artificial neural networks -- sigmoidal universal approximators -- as a proxy for functions whose integrals need to be estimated.
We design a family of fixed networks, which we call Q-NETs, that operate on parameters of a trained proxy to calculate exact integrals.
We highlight the benefits of this scheme for a few applications such as inverse rendering, generation of procedural noise, visualization and simulation.
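A hedged one-dimensional illustration of reading an exact integral off a trained sigmoidal proxy's parameters (the real Q-NET construction handles multi-dimensional integrals and is more involved): the integral of a one-hidden-layer sigmoid network over an interval has a closed form in terms of softplus.

```python
# Hedged 1D illustration: the integral of a one-hidden-layer sigmoid proxy over
# [l, u] has a closed form via softplus, so it can be computed exactly from the
# trained parameters. The real Q-NET construction covers higher dimensions.
import numpy as np

def softplus(z):
    return np.logaddexp(0.0, z)                        # log(1 + exp(z)), stable

def sigmoid_proxy(x, a, b, w, c):                      # f(x) = sum_i w_i s(a_i x + b_i) + c
    return w @ (1.0 / (1.0 + np.exp(-(a * x + b)))) + c

def exact_integral(l, u, a, b, w, c):
    # int_l^u s(a_i x + b_i) dx = [softplus(a_i x + b_i) / a_i] from l to u  (a_i != 0)
    per_unit = (softplus(a * u + b) - softplus(a * l + b)) / a
    return w @ per_unit + c * (u - l)

rng = np.random.default_rng(1)
a, b, w, c = rng.normal(size=8), rng.normal(size=8), rng.normal(size=8), 0.2
analytic = exact_integral(0.0, 1.0, a, b, w, c)
numeric = np.mean([sigmoid_proxy(x, a, b, w, c) for x in np.linspace(0.0, 1.0, 10001)])
print(analytic, numeric)                               # the two should agree closely
```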
arXiv Detail & Related papers (2020-06-25T13:36:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.