StEik: Stabilizing the Optimization of Neural Signed Distance Functions
and Finer Shape Representation
- URL: http://arxiv.org/abs/2305.18414v3
- Date: Sat, 11 Nov 2023 15:42:04 GMT
- Title: StEik: Stabilizing the Optimization of Neural Signed Distance Functions
and Finer Shape Representation
- Authors: Huizong Yang, Yuxin Sun, Ganesh Sundaramoorthi, Anthony Yezzi
- Abstract summary: We show analytically that as the representation power of the network increases, the optimization approaches a partial differential equation (PDE) in the continuum limit that is unstable.
We show that this instability can manifest in existing network optimization, leading to irregularities in the reconstructed surface and/or convergence to sub-optimal local minima.
We introduce a new regularization term that still counteracts the eikonal instability but without over-regularizing.
- Score: 12.564019188842861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present new insights and a novel paradigm (StEik) for learning implicit
neural representations (INR) of shapes. In particular, we shed light on the
popular eikonal loss used for imposing a signed distance function constraint in
INR. We show analytically that as the representation power of the network
increases, the optimization approaches a partial differential equation (PDE) in
the continuum limit that is unstable. We show that this instability can
manifest in existing network optimization, leading to irregularities in the
reconstructed surface and/or convergence to sub-optimal local minima, and thus
failing to capture fine geometric and topological structure. We show analytically
how other terms added to the loss, currently used in the literature for other
purposes, can actually eliminate these instabilities. However, such terms can
over-regularize the surface, preventing the representation of fine shape
detail. Based on a similar PDE theory for the continuum limit, we introduce a
new regularization term that still counteracts the eikonal instability but
without over-regularizing. Furthermore, since stability is now guaranteed in
the continuum limit, this stabilization also allows for considering new network
structures that are able to represent finer shape detail. We introduce such a
structure based on quadratic layers. Experiments on multiple benchmark data
sets show that our new regularization and network are able to capture more
precise shape details and more accurate topology than existing
state-of-the-art methods.
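To make the loss terms above concrete, here is a minimal PyTorch sketch (not the authors' released code) of a neural-SDF training objective combining a surface data term, the eikonal term, and a second-order penalty taken along the gradient direction, which is the general kind of stabilizer the abstract argues for instead of an over-regularizing full divergence term. The function and hyperparameter names (`sdf_loss`, `lambda_eik`, `lambda_reg`) are illustrative assumptions; the exact StEik regularizer and its weighting are defined in the paper.

```python
# Illustrative sketch only: neural-SDF loss with surface, eikonal, and
# second-order stabilizing terms.
import torch


def gradient(y, x):
    """First derivatives of y (N, 1) with respect to x (N, 3) via autograd."""
    return torch.autograd.grad(
        y, x, grad_outputs=torch.ones_like(y), create_graph=True
    )[0]


def sdf_loss(net, x_surf, x_space, lambda_eik=0.1, lambda_reg=1e-4):
    """Surface fit + eikonal + directional second-order penalty (a sketch of
    the kind of stabilizer discussed in the abstract, not the exact term)."""
    x_surf = x_surf.clone().requires_grad_(True)
    x_space = x_space.clone().requires_grad_(True)

    # Data term: the learned SDF should vanish on observed surface samples.
    loss_surf = net(x_surf).abs().mean()

    # Eikonal term: |grad u| should equal 1 at points sampled in space.
    u = net(x_space)
    g = gradient(u, x_space)                                   # (N, 3)
    loss_eik = ((g.norm(dim=-1) - 1.0) ** 2).mean()

    # Second-order penalty along the (detached) unit gradient direction n,
    # i.e. n^T H(u) n with H(u) the Hessian of the network output; for a
    # true signed distance function this quantity is zero.
    n = (g / (g.norm(dim=-1, keepdim=True) + 1e-8)).detach()
    Hn = gradient((g * n).sum(dim=-1, keepdim=True), x_space)  # H(u) n
    loss_reg = (Hn * n).sum(dim=-1).abs().mean()

    return loss_surf + lambda_eik * loss_eik + lambda_reg * loss_reg
```

The abstract also introduces a network structure based on quadratic layers. The sketch below shows one plausible quadratic-layer parameterization (an elementwise product of two linear maps plus a linear term), so that each unit responds quadratically to its inputs; the specific form used by StEik may differ.

```python
import torch.nn as nn


class QuadraticLayer(nn.Module):
    """A plausible quadratic layer: product of two linear maps plus a linear
    term. The exact parameterization used in the paper may differ."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.a = nn.Linear(d_in, d_out)
        self.b = nn.Linear(d_in, d_out)
        self.c = nn.Linear(d_in, d_out)

    def forward(self, x):
        return self.a(x) * self.b(x) + self.c(x)


# Example: a small SDF network whose first layer is quadratic; it can be
# trained with the sdf_loss sketch above.
sdf_net = nn.Sequential(
    QuadraticLayer(3, 128), nn.Softplus(beta=100),
    nn.Linear(128, 128), nn.Softplus(beta=100),
    nn.Linear(128, 1),
)
```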
Related papers
- Occlusion-aware Non-Rigid Point Cloud Registration via Unsupervised Neural Deformation Correntropy [25.660967523504855]
Occlusion-Aware Registration (OAR) is an unsupervised method for non-rigidly aligning point clouds.
We present a theoretical analysis and establish the relationship between the maximum correntropy criterion and the commonly used Chamfer distance.
Our method achieves superior or competitive performance compared to existing approaches.
arXiv Detail & Related papers (2025-02-15T07:27:15Z) - Geometric Neural Process Fields [58.77241763774756]
Geometric Neural Process Fields (G-NPF) is a probabilistic framework for neural radiance fields that explicitly captures uncertainty.
Building on these bases, we design a hierarchical latent variable model, allowing G-NPF to integrate structural information across multiple spatial levels.
Experiments on novel-view synthesis for 3D scenes, as well as 2D image and 1D signal regression, demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2025-02-04T14:17:18Z) - Point Cloud Denoising With Fine-Granularity Dynamic Graph Convolutional Networks [58.050130177241186]
Noise perturbations often corrupt 3-D point clouds, hindering downstream tasks such as surface reconstruction, rendering, and further processing.
This paper introduces fine-granularity dynamic graph convolutional networks, called GDGCN, a novel approach to denoising 3-D point clouds.
arXiv Detail & Related papers (2024-11-21T14:19:32Z) - Edge of stability echo state networks [5.888495030452654]
Echo State Networks (ESNs) are time-series processing models working under the Echo State Property (ESP) principle.
We introduce a new ESN architecture, called the Edge of Stability Echo State Network (ES$^2$N).
arXiv Detail & Related papers (2023-08-05T15:49:25Z) - Neural Delay Differential Equations: System Reconstruction and Image
Classification [14.59919398960571]
We propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs).
Compared to NODEs, NDDEs have a stronger capacity of nonlinear representations.
We achieve lower loss and higher accuracy not only for synthetically produced data but also for CIFAR-10, a well-known image dataset.
arXiv Detail & Related papers (2023-04-11T16:09:28Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent
Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - A PDE-based Explanation of Extreme Numerical Sensitivities and Edge of Stability in Training Neural Networks [12.355137704908042]
We show restrained numerical instabilities in current training practices of deep networks with stochastic gradient descent (SGD).
We do this by presenting a theoretical framework using numerical analysis of partial differential equations (PDE), and analyzing the gradient descent PDE of convolutional neural networks (CNNs).
We show this is a consequence of the non-linear PDE associated with the descent of the CNN, whose local linearization changes when over-driving the step size of the discretization, resulting in a stabilizing effect.
arXiv Detail & Related papers (2022-06-04T14:54:05Z) - Extended Unconstrained Features Model for Exploring Deep Neural Collapse [59.59039125375527]
Recently, a phenomenon termed "neural collapse" (NC) has been empirically observed in deep neural networks.
Recent papers have shown that minimizers with this structure emerge when optimizing a simplified "unconstrained features model" (UFM).
In this paper, we study the UFM for the regularized MSE loss, and show that the minimizers' features can be more structured than in the cross-entropy case.
arXiv Detail & Related papers (2022-02-16T14:17:37Z) - Stabilizing Equilibrium Models by Jacobian Regularization [151.78151873928027]
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.
We propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models.
We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains.
arXiv Detail & Related papers (2021-06-28T00:14:11Z) - Phase Transitions, Distance Functions, and Implicit Neural
Representations [26.633795221150475]
Implicit Neural Representations (INRs) serve numerous downstream applications in geometric deep learning and 3D vision.
We suggest a loss for training INRs that learns a density function that converges to a proper occupancy function, while its log transform converges to a distance function.
arXiv Detail & Related papers (2021-06-14T18:13:45Z) - On the Stability Properties and the Optimization Landscape of Training
Problems with Squared Loss for Neural Networks and General Nonlinear Conic
Approximation Schemes [0.0]
We study the optimization landscape and the stability properties of training problems with squared loss for neural networks and general nonlinear conic approximation schemes.
We prove that the same effects that are responsible for these instability properties are also the reason for the emergence of saddle points and spurious local minima.
arXiv Detail & Related papers (2020-11-06T11:34:59Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust with respect to input and parameter perturbations as compared to other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.