Intrinsic Neural Fields: Learning Functions on Manifolds
- URL: http://arxiv.org/abs/2203.07967v2
- Date: Thu, 17 Mar 2022 13:07:55 GMT
- Title: Intrinsic Neural Fields: Learning Functions on Manifolds
- Authors: Lukas Koestler, Daniel Grittner, Michael Moeller, Daniel Cremers,
Zorah Lähner
- Abstract summary: Intrinsic neural fields combine the advantages of neural fields with the spectral properties of the Laplace-Beltrami operator.
We show that intrinsic neural fields can reconstruct high-quality textures from images with state-of-the-art quality.
We demonstrate the versatility of intrinsic neural fields by tackling various applications.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural fields have gained significant attention in the computer vision
community due to their excellent performance in novel view synthesis, geometry
reconstruction, and generative modeling. Some of their advantages are a sound
theoretic foundation and an easy implementation in current deep learning
frameworks. While neural fields have been applied to signals on manifolds,
e.g., for texture reconstruction, their representation has been limited to
extrinsically embedding the shape into Euclidean space. The extrinsic embedding
ignores known intrinsic manifold properties and is inflexible with respect to
the learned function. To overcome these limitations, this work introduces
intrinsic neural fields, a novel and versatile representation for neural fields
on manifolds. Intrinsic neural fields combine the advantages of neural fields
with the spectral properties of the Laplace-Beltrami operator. We show
theoretically that intrinsic neural fields inherit many desirable properties of
the extrinsic neural field framework but exhibit additional intrinsic
qualities, like isometry invariance. In experiments, we show intrinsic neural
fields can reconstruct high-fidelity textures from images with state-of-the-art
quality and are robust to the discretization of the underlying manifold. We
demonstrate the versatility of intrinsic neural fields by tackling various
applications: texture transfer between deformed shapes & different shapes,
texture reconstruction from real-world images with view dependence, and
discretization-agnostic learning on meshes and point clouds.
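The core idea of the abstract, replacing an extrinsic Euclidean positional encoding with intrinsic coordinates derived from Laplace-Beltrami eigenfunctions fed into an MLP, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it approximates the Laplace-Beltrami operator with a graph Laplacian on a k-nearest-neighbor graph of sampled points (the paper works with the operator on the actual surface), and the MLP weights are random rather than trained from images.

```python
# Hedged sketch: intrinsic neural field = Laplacian eigenfunction
# encoding + MLP. Assumption: a k-NN graph Laplacian stands in for
# the Laplace-Beltrami operator of the underlying manifold.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy "manifold": 64 points sampled on a unit circle (1-D manifold in 2-D).
n = 64
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)

# k-NN adjacency as a hypothetical stand-in for mesh connectivity.
d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
k = 4
idx = np.argsort(d2, axis=1)[:, 1 : k + 1]
W = np.zeros((n, n))
for i in range(n):
    W[i, idx[i]] = 1.0
W = np.maximum(W, W.T)  # symmetrize the adjacency

# Eigenfunctions of the (normalized) graph Laplacian give each point
# an intrinsic, isometry-invariant embedding.
L = laplacian(W, normed=True)
eigvals, eigvecs = eigh(L)
m = 8                        # number of eigenfunctions used
phi = eigvecs[:, 1 : m + 1]  # skip the constant eigenvector

# Tiny random MLP mapping the intrinsic code to an RGB "texture".
# (In the paper, these weights would be trained from posed images.)
W1 = rng.normal(size=(m, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 3)); b2 = np.zeros(3)
rgb = np.tanh(np.maximum(phi @ W1 + b1, 0.0) @ W2 + b2)

print(phi.shape, rgb.shape)  # intrinsic codes and colors per point
```

Because the encoding depends only on the Laplacian, and not on where the shape sits in Euclidean space, the same learned function can be transferred between isometrically deformed shapes, which is the flexibility the abstract contrasts with extrinsic embeddings.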
Related papers
- The Dynamic Net Architecture: Learning Robust and Holistic Visual Representations Through Self-Organizing Networks [3.9848584845601014]
We present a novel intelligent-system architecture called "Dynamic Net Architecture" (DNA)
DNA relies on recurrence-stabilized networks, which we discuss in application to vision.
arXiv Detail & Related papers (2024-07-08T06:22:10Z) - Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z) - NeuralClothSim: Neural Deformation Fields Meet the Thin Shell Theory [70.10550467873499]
We propose NeuralClothSim, a new quasistatic cloth simulator using thin shells.
Our memory-efficient solver operates on a new continuous coordinate-based surface representation called neural deformation fields.
arXiv Detail & Related papers (2023-08-24T17:59:54Z) - A Hierarchical Architecture for Neural Materials [13.144139872006287]
We introduce a neural appearance model that offers a new level of accuracy.
An inception-based core network structure captures material appearances at multiple scales.
We encode the inputs into frequency space, introduce a gradient-based loss, and employ it adaptively according to the progress of the learning phase.
arXiv Detail & Related papers (2023-07-19T17:00:45Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Neural Fields in Visual Computing and Beyond [54.950885364735804]
Recent advances in machine learning have created increasing interest in solving visual computing problems using coordinate-based neural networks.
Neural fields have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation.
This report provides context, mathematical grounding, and an extensive review of literature on neural fields.
arXiv Detail & Related papers (2021-11-22T18:57:51Z) - Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and even generalizes well to new poses that differ starkly from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z) - Texture Generation with Neural Cellular Automata [64.70093734012121]
We learn a texture generator from a single template image.
We argue that the behaviour exhibited by the NCA model is a learned, distributed, local algorithm for generating a texture.
arXiv Detail & Related papers (2021-05-15T22:05:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.