Spelunking the Deep: Guaranteed Queries for General Neural Implicit Surfaces
- URL: http://arxiv.org/abs/2202.02444v1
- Date: Sat, 5 Feb 2022 00:37:08 GMT
- Title: Spelunking the Deep: Guaranteed Queries for General Neural Implicit Surfaces
- Authors: Nicholas Sharp, Alec Jacobson
- Abstract summary: This work presents a new approach to perform queries directly on general neural implicit functions for a wide range of existing architectures.
Our key tool is the application of range analysis to neural networks, using automatic arithmetic rules to bound the output of a network over a region.
We use the resulting bounds to develop geometric queries including ray casting, intersection testing, spatial hierarchy construction, fast mesh extraction, and closest-point evaluation.
- Score: 35.438964954948574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural implicit representations, which encode a surface as the level set of a
neural network applied to spatial coordinates, have proven to be remarkably
effective for optimizing, compressing, and generating 3D geometry. Although
these representations are easy to fit, it is not clear how to best evaluate
geometric queries on the shape, such as intersecting against a ray or finding a
closest point. The predominant approach is to encourage the network to have a
signed distance property. However, this property typically holds only
approximately, leading to robustness issues, and holds only at the conclusion
of training, inhibiting the use of queries in loss functions. Instead, this
work presents a new approach to perform queries directly on general neural
implicit functions for a wide range of existing architectures. Our key tool is
the application of range analysis to neural networks, using automatic
arithmetic rules to bound the output of a network over a region; we conduct a
study of range analysis on neural networks, and identify variants of affine
arithmetic which are highly effective. We use the resulting bounds to develop
geometric queries including ray casting, intersection testing, constructing
spatial hierarchies, fast mesh extraction, closest-point evaluation, evaluating
bulk properties, and more. Our queries can be efficiently evaluated on GPUs,
and offer concrete accuracy guarantees even on randomly-initialized networks,
enabling their use in training objectives and beyond. We also show a
preliminary application to inverse rendering.
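To make the bounding machinery concrete, here is a minimal sketch, not the authors' released code, of range analysis on a ReLU MLP using plain interval arithmetic, together with a toy certified ray cast built on the resulting bounds. All names (`bound_mlp`, `box_excludes_surface`, `ray_cast`) are illustrative assumptions; the paper itself identifies affine-arithmetic variants that are substantially tighter than the intervals shown here.

```python
# A minimal sketch of range analysis for a neural implicit function, assuming
# a plain ReLU MLP stored as a list of (W, b) pairs. Plain interval arithmetic
# is used for brevity; the paper advocates tighter affine-arithmetic variants.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Exact per-output interval bound of W @ x + b over the box [lo, hi]."""
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    c2 = W @ c + b
    r2 = np.abs(W) @ r          # |W| transports the box half-widths
    return c2 - r2, c2 + r2

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def bound_mlp(params, lo, hi):
    """Guaranteed output range of the network over an input box."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    for i, (W, b) in enumerate(params):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(params) - 1:  # no activation after the output layer
            lo, hi = interval_relu(lo, hi)
    return lo, hi

def box_excludes_surface(params, lo, hi):
    """True if the zero level set provably does not cross the box."""
    out_lo, out_hi = bound_mlp(params, lo, hi)
    return out_hi[0] < 0.0 or out_lo[0] > 0.0

def ray_cast(params, origin, direction, t0, t1, eps=1e-3):
    """Earliest candidate hit of origin + t * direction in [t0, t1], or None.
    Bisects the interval, discarding spans certified empty by the bounds."""
    a, b = origin + t0 * direction, origin + t1 * direction
    if box_excludes_surface(params, np.minimum(a, b), np.maximum(a, b)):
        return None              # guaranteed: no crossing anywhere in [t0, t1]
    if t1 - t0 < eps:
        return 0.5 * (t0 + t1)   # ambiguous sliver; report it as the hit
    tm = 0.5 * (t0 + t1)
    hit = ray_cast(params, origin, direction, t0, tm, eps)
    return hit if hit is not None else ray_cast(params, origin, direction, tm, t1, eps)

# Toy usage on a randomly initialized implicit function f: R^3 -> R. The
# guarantees hold even for untrained weights, as the abstract emphasizes.
rng = np.random.default_rng(0)
params = [(rng.standard_normal((32, 3)), rng.standard_normal(32)),
          (rng.standard_normal((1, 32)), rng.standard_normal(1))]
print(bound_mlp(params, [-0.1] * 3, [0.1] * 3))
print(ray_cast(params, np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), 0.0, 2.0))
```

The same discard-or-refine pattern underlies the other queries listed in the abstract: any region whose certified output range excludes zero can be skipped outright, so computation concentrates on the ambiguous regions near the surface.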
Related papers
- Convexity in ReLU Neural Networks: beyond ICNNs? [17.01649106055384]
We show that every convex function implemented by a 1-hidden-layer ReLU network can be expressed by an ICNN with the same architecture.
We also provide a numerical procedure that allows an exact check of convexity for ReLU neural networks with a large number of affine regions.
arXiv Detail & Related papers (2025-01-06T13:53:59Z)
- A simple algorithm for output range analysis for deep neural networks [0.0]
This paper presents a novel approach for the output range estimation problem in Deep Neural Networks (DNNs) by integrating a Simulated Annealing (SA) algorithm.
The method effectively addresses the challenges posed by the lack of geometric information and the non-linearity inherent in ResNets.
arXiv Detail & Related papers (2024-07-02T22:47:40Z)
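As a companion to the guaranteed bounds sketched above, here is a toy sketch of the empirical alternative described in the entry above: estimating a network's output range over a box with simulated annealing. The cooling schedule, step size, and function names are my own assumptions, not the cited paper's algorithm; SA yields inner (empirical) estimates rather than certified outer bounds.

```python
# A toy simulated-annealing estimate of a network's output range over a box.
# Hyperparameters and structure are illustrative assumptions only.
import numpy as np

def sa_extremum(f, lo, hi, sign=1.0, steps=2000, t0=1.0, step=0.1, seed=0):
    """Estimate the maximum of sign * f over the box [lo, hi] via SA."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi)
    cur = best = sign * f(x)
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9        # linear cooling schedule
        cand = np.clip(x + step * rng.standard_normal(x.shape), lo, hi)
        val = sign * f(cand)
        if val > cur or rng.random() < np.exp((val - cur) / t):
            x, cur = cand, val                   # Metropolis acceptance
            best = max(best, cur)
    return sign * best

def sa_output_range(f, lo, hi):
    """(min, max) estimates; the true range can only be wider, never narrower."""
    return sa_extremum(f, lo, hi, sign=-1.0), sa_extremum(f, lo, hi, sign=1.0)

# Toy usage on a small random ReLU network f: R^2 -> R.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((16, 2)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)
f = lambda x: (W2 @ np.maximum(W1 @ x + b1, 0.0) + b2)[0]
print(sa_output_range(f, np.full(2, -1.0), np.full(2, 1.0)))
```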
- Does a sparse ReLU network training problem always admit an optimum? [0.0]
We show that the existence of an optimal solution is not always guaranteed, especially in the context of sparse ReLU neural networks.
In particular, we first show that optimization problems involving deep networks with certain sparsity patterns do not always have optimal parameters.
arXiv Detail & Related papers (2023-06-05T08:01:50Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of a neural network's feature space may jointly serve as discriminants of the network's performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Critical Initialization of Wide and Deep Neural Networks through Partial Jacobians: General Theory and Applications [6.579523168465526]
We introduce partial Jacobians of a network, defined as derivatives of preactivations in layer $l$ with respect to preactivations in layer $l_0 \leq l$.
We derive recurrence relations for the norms of partial Jacobians and utilize these relations to analyze criticality of deep fully connected neural networks with LayerNorm and/or residual connections.
arXiv Detail & Related papers (2021-11-23T20:31:42Z)
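The definition in the entry above is straightforward to state in code. Below is a hypothetical sketch, with tanh activations, no biases, and all names being my assumptions rather than the cited paper's code, that computes the Frobenius norm of a partial Jacobian via autograd.

```python
# A hypothetical sketch of the partial Jacobian d h_l / d h_{l0} between
# preactivations, computed with autograd. Tanh activations and the absence
# of biases are simplifying assumptions, not taken from the cited paper.
import torch

def partial_jacobian_norm(weights, h0, l0, l):
    """Frobenius norm of the Jacobian of layer-l preactivations w.r.t. h_{l0}."""
    def forward_from_l0(h):
        for W in weights[l0:l]:
            h = W @ torch.tanh(h)   # activation, then the next linear layer
        return h
    J = torch.autograd.functional.jacobian(forward_from_l0, h0)
    return J.norm()                 # the quantity whose recurrences are studied

# Toy usage: a width-64, depth-6 stack at a 1/sqrt(width) initialization scale.
torch.manual_seed(0)
width = 64
weights = [torch.randn(width, width) / width ** 0.5 for _ in range(6)]
h0 = torch.randn(width)
print(partial_jacobian_norm(weights, h0, l0=2, l=5))
```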
- Sign-Agnostic CONet: Learning Implicit Surface Reconstructions by Sign-Agnostic Optimization of Convolutional Occupancy Networks [39.65056638604885]
We learn implicit surface reconstruction by sign-agnostic optimization of convolutional occupancy networks.
We show that this goal can be achieved by a simple yet effective design.
arXiv Detail & Related papers (2021-05-08T03:35:32Z)
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
- Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh, our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
arXiv Detail & Related papers (2020-04-17T19:12:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.