AutoInt: Automatic Integration for Fast Neural Volume Rendering
- URL: http://arxiv.org/abs/2012.01714v1
- Date: Thu, 3 Dec 2020 05:46:10 GMT
- Title: AutoInt: Automatic Integration for Fast Neural Volume Rendering
- Authors: David B. Lindell, Julien N. P. Martel, Gordon Wetzstein
- Abstract summary: We propose a new framework for learning efficient, closed-form solutions to integrals using implicit neural representation networks.
We demonstrate a greater than 10x improvement in computation requirements, enabling fast neural volume rendering.
- Score: 51.46232518888791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerical integration is a foundational technique in scientific computing and
is at the core of many computer vision applications. Among these applications,
implicit neural volume rendering has recently been proposed as a new paradigm
for view synthesis, achieving photorealistic image quality. However, a
fundamental obstacle to making these methods practical is the extreme
computational and memory requirements caused by the required volume
integrations along the rendered rays during training and inference. Millions of
rays, each requiring hundreds of forward passes through a neural network, are
needed to approximate those integrations with Monte Carlo sampling. Here, we
propose automatic integration, a new framework for learning efficient,
closed-form solutions to integrals using implicit neural representation
networks. For training, we instantiate the computational graph corresponding to
the derivative of the implicit neural representation. The graph is fitted to
the signal to integrate. After optimization, we reassemble the graph to obtain
a network that represents the antiderivative. By the fundamental theorem of
calculus, this enables the calculation of any definite integral in two
evaluations of the network. Using this approach, we demonstrate a greater than
10x improvement in computation requirements, enabling fast neural volume
rendering.
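To make the train-then-reassemble idea concrete, below is a minimal sketch of the same principle in JAX. This is an illustrative simplification, not the authors' implementation: a small network F is treated as the antiderivative, its derivative dF/dx is fitted to the target signal via automatic differentiation, and any definite integral is then evaluated as F(b) - F(a). The architecture, optimizer, and target signal are assumptions made for the demo; AutoInt instead instantiates the computational graph of the derivative explicitly and reassembles it after training.
```python
# Minimal sketch of automatic integration (a simplified variant of AutoInt):
# fit the derivative of a small MLP F to a target signal f, then evaluate any
# definite integral by the fundamental theorem of calculus as F(b) - F(a).
# Architecture, optimizer, and target signal are illustrative assumptions.
import jax
import jax.numpy as jnp

def init_params(key, widths=(1, 64, 64, 1)):
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in)
        params.append((w, jnp.zeros(d_out)))
    return params

def F(params, x):
    # Antiderivative network: scalar input, scalar output.
    h = jnp.atleast_1d(x)
    for w, b in params[:-1]:
        h = jnp.sin(h @ w + b)   # sinusoidal activations, in the spirit of SIREN-style nets
    w, b = params[-1]
    return (h @ w + b).squeeze()

dF = jax.grad(F, argnums=1)      # dF/dx, obtained by automatic differentiation

def f_target(x):
    # Example signal to integrate (an assumption for this demo).
    return jnp.exp(-x ** 2) * jnp.cos(4.0 * x)

def loss(params, xs):
    preds = jax.vmap(lambda x: dF(params, x))(xs)
    return jnp.mean((preds - f_target(xs)) ** 2)

grad_loss = jax.jit(jax.grad(loss))
params = init_params(jax.random.PRNGKey(0))
xs = jnp.linspace(-2.0, 2.0, 512)
lr = 1e-3
for step in range(2000):         # plain gradient descent, kept simple for the sketch
    grads = grad_loss(params, xs)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# Any definite integral of the fitted signal now costs two network evaluations.
a, b = -1.0, 1.0
print("estimated integral:", F(params, b) - F(params, a))
```
In the volume-rendering setting, this two-evaluation rule is what replaces the hundreds of network queries per ray otherwise needed for Monte Carlo integration.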
Related papers
- Neural Control Variates with Automatic Integration [49.91408797261987]
This paper proposes a novel approach to construct learnable parametric control variate functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand (a minimal control-variate sketch appears after this list).
We apply our method to solve partial differential equations using the walk-on-spheres algorithm.
arXiv Detail & Related papers (2024-09-23T06:04:28Z)
- From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems [20.006163951844357]
We propose a simulation-free framework for training neural ordinary differential equations (NODEs).
We employ Fourier analysis to estimate temporal and potentially high-order spatial gradients from noisy observational data.
Our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness.
arXiv Detail & Related papers (2024-05-19T13:15:23Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Fixed Integral Neural Networks [2.2118683064997273]
We present a method for representing the analytical integral of a learned function $f$.
This allows the exact integral of a neural network to be computed, and enables constrained neural networks to be parametrised.
We also introduce a method to constrain $f$ to be positive, a necessary condition for many applications.
arXiv Detail & Related papers (2023-07-26T18:16:43Z)
- Neural Network Representation of Time Integrators [0.0]
The network weights and biases are given, i.e., no training is needed.
The architecture required for the integration of a simple mass-damper-stiffness case is included as an example.
arXiv Detail & Related papers (2022-11-30T14:38:59Z)
- Interactive Volume Visualization via Multi-Resolution Hash Encoding based Neural Representation [29.797933404619606]
We show that we can interactively ray trace volumetric neural representations (10-60fps) using modern GPU cores and a well-designed rendering algorithm.
Our neural representations are also high-fidelity (PSNR > 30dB) and compact (10-1000x smaller).
To support extreme-scale volume data, we also develop an efficient out-of-core training strategy, which allows our neural representation training to potentially scale up to terascale.
arXiv Detail & Related papers (2022-07-23T23:04:19Z)
- Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through gradient descent.
We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds.
arXiv Detail & Related papers (2022-01-16T07:22:47Z)
- Continuous-in-Depth Neural Networks [107.47887213490134]
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of predicting the price of a house in Boston and of training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
- Integration of Leaky-Integrate-and-Fire-Neurons in Deep Learning Architectures [0.0]
We show that biologically inspired neuron models provide novel and efficient ways of information encoding.
We derive simple update rules for the LIF units from their differential equations, which are easy to integrate numerically.
We apply our method to the IRIS blossoms image data set and show that the training technique can be used to train LIF neurons on image classification tasks.
arXiv Detail & Related papers (2020-04-28T13:57:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
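As a companion to the AutoInt sketch above, the following is a hedged illustration of how a learned antiderivative can serve as a control variate, in the spirit of the "Neural Control Variates with Automatic Integration" entry. The cited paper's actual construction and training procedure differ in detail; the helper names and the toy integrand are assumptions for illustration.
```python
# Hedged sketch: a learned antiderivative G as the basis of a control variate.
# g = dG/dx is integrated exactly via G(b) - G(a); Monte Carlo only handles f - g.
# Helper names and the toy integrand are assumptions, not the paper's API.
import jax
import jax.numpy as jnp

def estimate_integral(f, G, params, a, b, key, n_samples=128):
    g = jax.grad(G, argnums=1)                 # control variate: derivative of G
    exact_part = G(params, b) - G(params, a)   # closed-form integral of g
    xs = jax.random.uniform(key, (n_samples,), minval=a, maxval=b)
    residual = jax.vmap(lambda x: f(x) - g(params, x))(xs)
    mc_part = (b - a) * jnp.mean(residual)     # Monte Carlo estimate of the residual integral
    return exact_part + mc_part                # unbiased; low variance when g is close to f

# Toy usage: G(params, x) = params * x^2 / 2, so the control variate is g(x) = params * x.
G_toy = lambda params, x: params * x ** 2 / 2.0
f_toy = lambda x: x + 0.1 * jnp.sin(20.0 * x)  # integrand to estimate on [0, 1]
estimate = estimate_integral(f_toy, G_toy, 1.0, 0.0, 1.0, jax.random.PRNGKey(0))
print(estimate)  # near 0.5 + 0.1 * (1 - cos(20)) / 20
```
The estimator stays unbiased because the Monte Carlo term only corrects the residual f - g, and its variance shrinks as the learned control variate approaches the integrand.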