Learning Neural Implicit Representations with Surface Signal Parameterizations
- URL: http://arxiv.org/abs/2211.00519v2
- Date: Mon, 26 Jun 2023 00:32:56 GMT
- Title: Learning Neural Implicit Representations with Surface Signal Parameterizations
- Authors: Yanran Guan, Andrei Chubarau, Ruby Rao, Derek Nowrouzezahrai
- Abstract summary: We present a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data.
Our model remains compatible with existing mesh-based digital content and its appearance data.
- Score: 14.835882967340968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural implicit surface representations have recently emerged as a popular
alternative to explicit 3D object encodings, such as polygonal meshes,
tabulated points, or voxels. While significant work has improved the geometric
fidelity of these representations, much less attention has been given to their
final appearance. Traditional explicit object representations commonly couple
the 3D shape data with auxiliary surface-mapped image data, such as diffuse
color textures and fine-scale geometric detail in normal maps, which typically
require a mapping of the 3D surface onto a plane, i.e., a surface
parameterization; implicit representations, on the other hand, cannot be easily
textured due to the lack of a configurable surface parameterization. Inspired by
this digital content authoring methodology, we design a neural network
architecture that implicitly encodes the underlying surface parameterization
suitable for appearance data. As such, our model remains compatible with
existing mesh-based digital content and its appearance data. Motivated by recent
work that overfits compact networks to individual 3D objects, we present a new
weight-encoded neural implicit representation that extends the capability of
neural implicit surfaces to enable various common and important applications of
texture mapping. Our method outperforms reasonable baselines and
state-of-the-art alternatives.
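To make the abstract's recipe concrete, here is a minimal, hypothetical sketch in PyTorch (not the authors' released code): a compact MLP overfit to a single object that jointly predicts a signed distance and a UV coordinate, so that near-surface points can index a conventional 2D texture. The names (`ImplicitSurfaceWithUV`, `shade`) and the architecture details are illustrative assumptions.

```python
# Hypothetical sketch, not the paper's implementation: a compact,
# weight-encoded MLP for a single object that outputs both geometry
# (signed distance) and an implicit surface parameterization (UV).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitSurfaceWithUV(nn.Module):
    def __init__(self, hidden=64, layers=4):
        super().__init__()
        mlp = [nn.Linear(3, hidden), nn.ReLU()]
        for _ in range(layers - 1):
            mlp += [nn.Linear(hidden, hidden), nn.ReLU()]
        self.trunk = nn.Sequential(*mlp)
        self.sdf_head = nn.Linear(hidden, 1)  # signed distance to the surface
        self.uv_head = nn.Linear(hidden, 2)   # (u, v) in [0, 1] after sigmoid

    def forward(self, x):                     # x: (N, 3) query points
        h = self.trunk(x)
        return self.sdf_head(h).squeeze(-1), torch.sigmoid(self.uv_head(h))

def shade(model, points, texture):
    """Fetch appearance for (near-)surface points through the predicted UVs.

    texture: (1, C, H, W) image, e.g. a diffuse map authored for a mesh.
    """
    _, uv = model(points)
    grid = uv.view(1, -1, 1, 2) * 2.0 - 1.0   # grid_sample expects [-1, 1]
    color = F.grid_sample(texture, grid, align_corners=True)
    return color[0, :, :, 0].t()              # (N, C) sampled colors
```

Because appearance stays in an ordinary texture image under this scheme, assets authored for meshes (diffuse maps, normal maps) can in principle be reused unchanged, which is the compatibility the abstract emphasizes.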
Related papers
- Spurfies: Sparse Surface Reconstruction using Local Geometry Priors [8.260048622127913]
We introduce Spurfies, a novel method for sparse-view surface reconstruction.
It disentangles appearance and geometry information to utilize local geometry priors trained on synthetic data.
We validate our method on the DTU dataset and demonstrate that it outperforms previous state of the art by 35% in surface quality.
arXiv Detail & Related papers (2024-08-29T14:02:47Z)
- Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully automated, requires no pre-cutting, and can handle highly complex topologies.
arXiv Detail & Related papers (2024-05-23T14:39:52Z)
- Parameterization-driven Neural Surface Reconstruction for Object-oriented Editing in Neural Rendering [35.69582529609475]
This paper introduces a novel neural algorithm for parameterizing neural implicit surfaces to simple parametric domains like spheres and polycubes.
It computes a bi-directional deformation between the object and the domain, using a forward mapping from the object's zero level set and an inverse deformation for backward mapping.
We demonstrate the method's effectiveness on images of human heads and man-made objects.
arXiv Detail & Related papers (2023-10-09T08:42:40Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and state-of-the-art face reconstruction methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high-fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
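As a worked illustration of the CSP idea above (my own sketch with an assumed interface, not the paper's code): given a predictor `csp` that maps query points to their closest surface points, an unsigned distance and a surface-pointing direction fall out directly.

```python
# Hypothetical sketch of consuming a closest-surface-point (CSP) predictor.
# `csp` is assumed to map (N, 3) query points to (N, 3) surface points.
import torch

def distance_and_direction(csp, x, eps=1e-8):
    p = csp(x)                                   # closest points on the surface
    diff = x - p
    dist = diff.norm(dim=-1)                     # unsigned distance to surface
    direction = diff / dist.unsqueeze(-1).clamp_min(eps)  # unit offset vector
    return dist, direction
```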
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is two to three orders of magnitude faster to render than previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
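The real-time rendering claim above is conventionally realized by sphere tracing the SDF; the sketch below is a generic, hypothetical tracer (it omits that paper's octree acceleration). `sdf` is assumed to be any callable taking (N, 3) points to (N,) signed distances, such as a trained network.

```python
# Hypothetical sketch: generic sphere tracing of a signed distance function.
import torch

def sphere_trace(sdf, origins, dirs, max_steps=64, eps=1e-4, far=10.0):
    """March each ray forward by the queried distance; an SDF lower-bounds
    the distance to the nearest surface, so this step size is always safe."""
    t = torch.zeros(origins.shape[0], device=origins.device)  # ray distances
    hit = torch.zeros(origins.shape[0], dtype=torch.bool, device=origins.device)
    for _ in range(max_steps):
        p = origins + t.unsqueeze(-1) * dirs     # current sample points
        d = sdf(p)
        hit |= d.abs() < eps                     # rays that reached the surface
        t = torch.where(hit | (t > far), t, t + d)
    return t, hit                                # hit distances and hit mask
```

Acceleration structures like that paper's sparse voxel octree chiefly cut the cost of each `sdf` query and skip empty space; the outer loop stays essentially this one.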
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.