Neural Splines: Fitting 3D Surfaces with Infinitely-Wide Neural Networks
- URL: http://arxiv.org/abs/2006.13782v3
- Date: Thu, 27 May 2021 13:56:24 GMT
- Title: Neural Splines: Fitting 3D Surfaces with Infinitely-Wide Neural Networks
- Authors: Francis Williams, Matthew Trager, Joan Bruna, Denis Zorin
- Abstract summary: We present Neural Splines, a technique for 3D surface reconstruction that is based on random feature kernels arising from infinitely-wide shallow ReLU networks.
Our method achieves state-of-the-art results, outperforming recent neural network-based techniques and widely used Poisson Surface Reconstruction.
- Score: 61.07202852469595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Neural Splines, a technique for 3D surface reconstruction that is based on random feature kernels arising from infinitely-wide shallow ReLU networks. Our method achieves state-of-the-art results, outperforming recent neural network-based techniques and widely used Poisson Surface Reconstruction (which, as we demonstrate, can also be viewed as a type of kernel method). Because our approach is based on a simple kernel formulation, it is easy to analyze and can be accelerated by general techniques designed for kernel-based learning. We provide explicit analytical expressions for our kernel and argue that our formulation can be seen as a generalization of cubic spline interpolation to higher dimensions. In particular, the RKHS norm associated with Neural Splines biases toward smooth interpolants.
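To make the kernel formulation concrete, the following sketch fits an implicit function by kernel ridge regression with random ReLU features, a finite Monte Carlo stand-in for the kind of infinite-width kernel the paper derives in closed form. It is a minimal illustration rather than the authors' implementation: the sphere point cloud, the off-surface supervision at offset eps, the feature count, and the ridge parameter are all illustrative choices.

```python
# Minimal sketch (not the authors' code): fit an implicit function f that is ~0 on
# the surface via kernel ridge regression, using random ReLU features whose inner
# products Monte Carlo-approximate the kernel of an infinitely wide one-hidden-layer
# ReLU network. All sizes and constants below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy input: oriented points (samples of the unit sphere with outward normals).
n_pts = 200
pts = rng.normal(size=(n_pts, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
normals = pts.copy()

# Implicit-fitting supervision: f = 0 on the surface, f = +/-eps at points nudged
# along the normals (a common simplification, not necessarily the paper's scheme).
eps = 0.05
X = np.vstack([pts, pts + eps * normals, pts - eps * normals])
y = np.concatenate([np.zeros(n_pts), eps * np.ones(n_pts), -eps * np.ones(n_pts)])

# Random ReLU features phi(x) = relu(W [x; 1]) / sqrt(m).
m = 2048
W = rng.normal(size=(m, 4))  # last column acts on the appended bias coordinate

def features(x):
    x1 = np.hstack([x, np.ones((len(x), 1))])
    return np.maximum(x1 @ W.T, 0.0) / np.sqrt(m)

# Kernel ridge regression on the (approximate) Gram matrix.
lam = 1e-6
Phi = features(X)
K = Phi @ Phi.T
coeffs = np.linalg.solve(K + lam * np.eye(len(X)), y)

def implicit(x):
    """Approximate implicit function; its zero level set is the reconstructed surface."""
    return features(np.atleast_2d(x)) @ Phi.T @ coeffs

probe = np.array([[1.0, 0.0, 0.0], [1.05, 0.0, 0.0], [0.95, 0.0, 0.0]])
print(implicit(probe))  # roughly [0, +eps, -eps] for this toy example
```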
Related papers
- Wide Neural Networks as Gaussian Processes: Lessons from Deep
Equilibrium Models [16.07760622196666]
We study the deep equilibrium model (DEQ), an infinite-depth neural network with shared weight matrices across layers.
Our analysis reveals that as the width of DEQ layers approaches infinity, it converges to a Gaussian process.
Remarkably, this convergence holds even when the limits of depth and width are interchanged.
arXiv Detail & Related papers (2023-10-16T19:00:43Z) - An Exact Kernel Equivalence for Finite Classification Models [1.4777718769290527]
We compare our exact representation to the well-known Neural Tangent Kernel (NTK) and discuss approximation error relative to the NTK.
We use this exact kernel to show that our theoretical contribution can provide useful insights into the predictions made by neural networks.
arXiv Detail & Related papers (2023-08-01T20:22:53Z) - Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z) - Neural Networks with Sparse Activation Induced by Large Bias: Tighter Analysis with Bias-Generalized NTK [86.45209429863858]
We study training one-hidden-layer ReLU networks in the neural tangent kernel (NTK) regime.
We show that the neural networks possess a different limiting kernel, which we call the bias-generalized NTK.
We also study various properties of the neural networks with this new kernel.
arXiv Detail & Related papers (2023-01-01T02:11:39Z) - Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a
Polynomial Net Study [55.12108376616355]
Work on the NTK has so far been devoted to typical neural network architectures and is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of the NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z) - Incorporating Prior Knowledge into Neural Networks through an Implicit
Composite Kernel [1.6383321867266318]
The Implicit Composite Kernel (ICK) combines a kernel implicitly defined by a neural network with a second kernel function chosen to model known properties.
We demonstrate ICK's superior performance and flexibility on both synthetic and real-world data sets; a rough illustration of this kind of kernel composition is sketched after this list.
arXiv Detail & Related papers (2022-05-15T21:32:44Z) - Neural Fields as Learnable Kernels for 3D Reconstruction [101.54431372685018]
We present a novel method for reconstructing implicit 3D shapes based on a learned kernel ridge regression.
Our technique achieves state-of-the-art results when reconstructing 3D objects and large scenes from sparse oriented points.
arXiv Detail & Related papers (2021-11-26T18:59:04Z) - Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature-map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature-map constructions achieving comparable error bounds, both in theory and in practice; a generic random-feature approximation of a one-hidden-layer ReLU NTK is sketched after this list.
arXiv Detail & Related papers (2021-04-03T09:08:12Z) - Iso-Points: Optimizing Neural Implicit Surfaces with Hybrid
Representations [21.64457003420851]
We develop a hybrid neural surface representation that allows us to impose geometry-aware sampling and regularization.
We demonstrate that our method can be adopted to improve techniques for reconstructing neural implicit surfaces from multi-view images or point clouds.
arXiv Detail & Related papers (2020-12-11T15:51:04Z)
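The Implicit Composite Kernel entry above describes combining a kernel implicitly defined by a neural network with a hand-chosen kernel. The sketch below illustrates that idea in the simplest possible way, summing an (untrained) network feature kernel with an RBF kernel; it is a hypothetical illustration, not the ICK paper's actual construction or API, and all names and sizes are made up.

```python
# Hypothetical sketch of a composite kernel: an NN-induced kernel (inner products
# of network features) added to a hand-chosen RBF kernel. This only illustrates the
# general idea; it is not the ICK paper's construction or API.
import numpy as np

rng = np.random.default_rng(0)

# A tiny fixed (untrained) MLP acts as the feature map; in practice its weights
# would be learned jointly with the downstream regression objective.
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 8))

def nn_features(X):
    return np.tanh(np.tanh(X @ W1) @ W2)

def k_nn(X, Y):
    # Kernel implicitly defined by the network: phi(x) . phi(y).
    return nn_features(X) @ nn_features(Y).T

def k_rbf(X, Y, lengthscale=0.5):
    # Second kernel chosen to encode a known property (here, local smoothness).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def k_composite(X, Y):
    # One simple composition (a sum); the paper studies principled combinations.
    return k_nn(X, Y) + k_rbf(X, Y)

X = rng.normal(size=(5, 3))
print(k_composite(X, X).shape)  # (5, 5) Gram matrix
```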
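For the random-features NTK entry above, the sketch below gives a generic Monte Carlo feature map whose inner products approximate the NTK of a one-hidden-layer ReLU network with both layers trained. This is a standard construction shown only for illustration (it is not necessarily the mapping proposed in that paper), and the closed-form NTK it is compared against assumes Gaussian first-layer weights and no bias.

```python
# Generic Monte Carlo feature map for the NTK of a one-hidden-layer ReLU network
# (both layers trained). Inner products of these features approximate the NTK as the
# number of sampled weight vectors m grows. Illustrative construction and sizes only.
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 20000
W = rng.normal(size=(m, d))  # sampled first-layer weights w_i ~ N(0, I)

def ntk_features(x):
    pre = W @ x                      # (m,)
    act = np.maximum(pre, 0.0)       # relu(w_i . x): gradient w.r.t. output weights
    gate = (pre > 0).astype(float)   # relu'(w_i . x): gates the hidden-weight gradients
    grad_w = (gate[:, None] * x[None, :]).ravel()   # m*d block of hidden-weight gradients
    return np.concatenate([act, grad_w]) / np.sqrt(m)

def ntk_exact(x, y):
    # Closed-form infinite-width NTK for this architecture (arc-cosine kernels).
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos)
    k1 = nx * ny * (np.sin(theta) + (np.pi - theta) * cos) / (2 * np.pi)  # E[relu relu]
    k0 = (np.pi - theta) / (2 * np.pi)                                    # E[step step]
    return k1 + (x @ y) * k0

x, y = rng.normal(size=d), rng.normal(size=d)
approx = ntk_features(x) @ ntk_features(y)
print(approx, ntk_exact(x, y))  # the two values should agree closely for large m
```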