Phase Transitions, Distance Functions, and Implicit Neural
Representations
- URL: http://arxiv.org/abs/2106.07689v1
- Date: Mon, 14 Jun 2021 18:13:45 GMT
- Title: Phase Transitions, Distance Functions, and Implicit Neural
Representations
- Authors: Yaron Lipman
- Abstract summary: Implicit Neural Representations (INRs) serve numerous downstream applications in geometric deep learning and 3D vision.
We suggest a loss for training INRs that learns a density function that converges to a proper occupancy function, while its log transform converges to a distance function.
- Score: 26.633795221150475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representing surfaces as zero level sets of neural networks recently emerged
as a powerful modeling paradigm, named Implicit Neural Representations (INRs),
serving numerous downstream applications in geometric deep learning and 3D
vision. Training INRs previously required choosing between occupancy and
distance function representation and different losses with unknown limit
behavior and/or bias. In this paper we draw inspiration from the theory of
phase transitions of fluids and suggest a loss for training INRs that learns a
density function that converges to a proper occupancy function, while its log
transform converges to a distance function. Furthermore, we analyze the limit
minimizer of this loss showing it satisfies the reconstruction constraints and
has minimal surface perimeter, a desirable inductive bias for surface
reconstruction. Training INRs with this new loss leads to state-of-the-art
reconstructions on a standard benchmark.
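To make the mechanism concrete, the following is a minimal sketch of a phase-transition-style (Modica-Mortola / Cahn-Hilliard) training loss for an INR, written in PyTorch. The network architecture, the double-well potential W(u) = u^2 (1 - u)^2, the data term that pins the density to the mid-well value on surface samples, and the hyperparameters eps and lam are illustrative assumptions based only on the abstract; this is not the paper's exact loss, and the log transform that recovers a distance function is defined in the paper and not reproduced here.

```python
# Hedged sketch of a phase-transition-style loss for an implicit neural
# representation (INR); all specifics below are assumptions, not the paper's loss.
import torch
import torch.nn as nn


class INR(nn.Module):
    """Small MLP mapping 3D points x to a scalar density u(x) (assumed architecture)."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)


def double_well(u):
    # W(u) = u^2 (1 - u)^2 has minima at u = 0 ("outside") and u = 1 ("inside").
    return (u ** 2) * (1.0 - u) ** 2


def phase_transition_loss(model, surface_pts, domain_pts, eps=1e-2, lam=1.0):
    """eps * |grad u|^2 + (1/eps) * W(u) averaged over the domain, plus a data
    term (an assumption) pinning the density to the mid-well value 1/2 on the
    observed surface samples, so the reconstructed surface is the level set {u = 1/2}."""
    domain_pts = domain_pts.clone().requires_grad_(True)
    u = model(domain_pts)
    (grad_u,) = torch.autograd.grad(u.sum(), domain_pts, create_graph=True)
    perimeter_term = eps * (grad_u ** 2).sum(dim=-1).mean()
    well_term = double_well(u).mean() / eps
    data_term = (model(surface_pts) - 0.5).abs().mean()
    return perimeter_term + well_term + lam * data_term
```

In the phase-transition literature, as eps goes to zero the first two terms Gamma-converge to the perimeter of the interface between the two phases, which matches the minimal-surface-perimeter inductive bias the abstract describes. In use, surface_pts would be samples of the input scan and domain_pts uniform samples from a bounding box around it.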
Related papers
- Topological obstruction to the training of shallow ReLU neural networks [0.0]
We study the interplay between the geometry of the loss landscape and the optimization trajectories of simple neural networks.
This paper reveals the presence of a topological obstruction in the loss landscape of shallow ReLU neural networks trained using gradient flow.
arXiv Detail & Related papers (2024-10-18T19:17:48Z)
- Principal Component Flow Map Learning of PDEs from Incomplete, Limited, and Noisy Data [0.0]
We present a computational technique for modeling the evolution of dynamical systems in a reduced basis.
We focus on the challenging problem of modeling partially-observed partial differential equations (PDEs) on high-dimensional non-uniform grids.
We present a neural network structure that is suitable for PDE modeling with noisy and limited data available only on a subset of the state variables or computational domain.
arXiv Detail & Related papers (2024-07-15T16:06:20Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparametrized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Supervised Contrastive Representation Learning: Landscape Analysis with Unconstrained Features [33.703796571991745]
Recent findings reveal that overparameterized deep neural networks, trained beyond zero training error, exhibit a distinctive structural pattern at the final layer.
These results indicate that the final-layer outputs in such networks display minimal within-class variations.
arXiv Detail & Related papers (2024-02-29T06:02:45Z)
- On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss also allows for considering more practical techniques, such as time-varying learning rates and feature normalization (see the sketch after this list).
arXiv Detail & Related papers (2023-12-13T02:11:07Z)
- Neural Poisson Surface Reconstruction: Resolution-Agnostic Shape Reconstruction from Point Clouds [53.02191521770926]
We introduce Neural Poisson Surface Reconstruction (nPSR), an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points.
nPSR exhibits two main advantages: First, it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation.
Overall, the neural Poisson surface reconstruction not only improves upon the limitations of classical deep neural networks in shape reconstruction but also achieves superior results in terms of reconstruction quality, running time, and resolution agnosticism.
arXiv Detail & Related papers (2023-08-03T13:56:07Z)
- A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks [49.870593940818715]
We study the infinite-width limit of a type of three-layer NN model whose first layer is random and fixed.
Our theory accommodates different scaling choices of the model, resulting in two regimes of the MF limit that demonstrate distinctive behaviors.
arXiv Detail & Related papers (2022-10-28T17:26:27Z)
- Critical Investigation of Failure Modes in Physics-informed Neural Networks [0.9137554315375919]
We show that a physics-informed neural network with a composite formulation produces highly non-convex loss surfaces that are difficult to optimize.
We also assess the training of both approaches on two elliptic problems with increasingly complex target solutions.
arXiv Detail & Related papers (2022-06-20T18:43:35Z)
- Extended Unconstrained Features Model for Exploring Deep Neural Collapse [59.59039125375527]
Recently, a phenomenon termed "neural collapse" (NC) has been empirically observed in deep neural networks.
Recent papers have shown that minimizers with this structure emerge when optimizing a simplified "unconstrained features model" (UFM).
In this paper, we study the UFM for the regularized MSE loss, and show that the minimizers' features can be more structured than in the cross-entropy case.
arXiv Detail & Related papers (2022-02-16T14:17:37Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
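As referenced in the unhinged-loss entry above, here is a minimal sketch of the binary unhinged loss in the sense of van Rooyen et al. (2015); whether that paper uses exactly this form or a multiclass extension is an assumption here.

```python
import torch


def unhinged_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Binary unhinged loss l(y, v) = 1 - y * v, with labels y in {-1, +1}.

    The loss is linear in the score, so its gradient does not depend on the
    margin; this linearity is what makes closed-form training dynamics tractable.
    """
    return (1.0 - labels * scores).mean()
```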
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.