NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-view Images
- URL: http://arxiv.org/abs/2303.12012v1
- Date: Tue, 21 Mar 2023 16:49:41 GMT
- Title: NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-view Images
- Authors: Xiaoxu Meng, Weikai Chen, Bo Yang
- Abstract summary: NeAT is a new neural rendering framework that learns implicit surfaces with arbitrary topologies from multi-view images.
NeAT supports easy field-to-mesh conversion using the classic Marching Cubes algorithm.
Our approach is able to faithfully reconstruct both watertight and non-watertight surfaces.
- Score: 17.637064969966847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent progress in neural implicit functions has set new state-of-the-art in
reconstructing high-fidelity 3D shapes from a collection of images. However,
these approaches are limited to closed surfaces as they require the surface to
be represented by a signed distance field. In this paper, we propose NeAT, a
new neural rendering framework that can learn implicit surfaces with arbitrary
topologies from multi-view images. In particular, NeAT represents the 3D
surface as a level set of a signed distance function (SDF) with a validity
branch for estimating the surface existence probability at the query positions.
We also develop a novel neural volume rendering method, which uses SDF and
validity to calculate the volume opacity and avoids rendering points with low
validity. NeAT supports easy field-to-mesh conversion using the classic
Marching Cubes algorithm. Extensive experiments on DTU, MGN, and Deep Fashion
3D datasets indicate that our approach is able to faithfully reconstruct both
watertight and non-watertight surfaces. In particular, NeAT significantly
outperforms the state-of-the-art methods in the task of open surface
reconstruction both quantitatively and qualitatively.
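The rendering scheme described above combines a signed distance value with a validity probability so that points unlikely to lie on a surface contribute no opacity. The following is a minimal, illustrative NumPy sketch of that idea, not NeAT's actual implementation: the logistic-CDF alpha (borrowed from NeuS-style renderers), the threshold `tau`, the sharpness `s`, and all function names are assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def render_ray(sdf, validity, colors, s=10.0, tau=0.5):
    """Sketch: per-sample SDF values along a ray are converted to alpha via a
    logistic CDF, then samples whose validity probability falls below tau are
    zeroed out, so low-validity (non-surface) regions add no opacity."""
    cdf = sigmoid(s * sdf)                                  # Phi_s(f(x)) at each sample
    # NeuS-style discrete alpha between consecutive samples
    alpha = np.clip((cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-7), 0.0, 1.0)
    # Gate out samples with low surface-existence probability
    alpha = np.where(validity[:-1] >= tau, alpha, 0.0)
    # Accumulated transmittance and compositing weights
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha]))[:-1]
    weights = trans * alpha
    rgb = (weights[:, None] * colors[:-1]).sum(axis=0)
    return rgb, weights
```

With validity fixed to 1 everywhere this reduces to ordinary SDF-based volume rendering; setting validity to 0 over a region makes that region fully transparent, which is what allows open (non-watertight) surfaces to be represented.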
Related papers
- NeUDF: Learning Neural Unsigned Distance Fields with Volume Rendering [25.078149064632218]
NeUDF can reconstruct surfaces with arbitrary topologies solely from multi-view supervision.
We extensively evaluate our method over a number of challenging datasets, including DTU, MGN, and Deep Fashion 3D.
arXiv Detail & Related papers (2023-04-20T04:14:42Z)
- NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies [87.06532943371575]
We present a novel method, called NeuralUDF, for reconstructing surfaces with arbitrary topologies from 2D images via volume rendering.
In this paper, we propose to represent surfaces as the Unsigned Distance Function (UDF) and develop a new volume rendering scheme to learn the neural UDF representation.
arXiv Detail & Related papers (2022-11-25T15:21:45Z)
- Recovering Fine Details for Neural Implicit Surface Reconstruction [3.9702081347126943]
We present D-NeuS, a volume rendering neural implicit surface reconstruction method capable of recovering fine geometric details.
We impose multi-view feature consistency on the surface points, derived by interpolating SDF zero-crossings from sampled points along rays.
Our method reconstructs high-accuracy surfaces with details, and outperforms the state of the art.
arXiv Detail & Related papers (2022-11-21T10:06:09Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
- Deep Implicit Surface Point Prediction Networks [49.286550880464866]
Deep neural representations of 3D shapes as implicit functions have been shown to produce high fidelity models.
This paper presents a novel approach that models such surfaces using a new class of implicit representations called the closest surface-point (CSP) representation.
arXiv Detail & Related papers (2021-06-10T14:31:54Z)
- UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for Multi-View Reconstruction [61.17219252031391]
We present a novel method for reconstructing surfaces from multi-view images using neural implicit 3D representations.
Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering.
Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
arXiv Detail & Related papers (2021-04-20T15:59:38Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces [68.12457459590921]
Reconstructing continuous surfaces from 3D point clouds is a fundamental operation in 3D geometry processing.
We introduce Neural-Pull, a new approach that is simple and leads to high-quality SDFs.
arXiv Detail & Related papers (2020-11-26T23:18:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.