Multiview Neural Surface Reconstruction by Disentangling Geometry and
Appearance
- URL: http://arxiv.org/abs/2003.09852v3
- Date: Sun, 25 Oct 2020 10:30:06 GMT
- Title: Multiview Neural Surface Reconstruction by Disentangling Geometry and
Appearance
- Authors: Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Ronen
Basri, Yaron Lipman
- Abstract summary: We introduce a neural network that simultaneously learns the unknown geometry, the camera parameters, and a neural renderer that approximates the light reflected from the surface towards the camera.
We trained our network on real-world 2D images of objects with different material properties, lighting conditions, and noisy camera initializations from the DTU MVS dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we address the challenging problem of multiview 3D surface
reconstruction. We introduce a neural network architecture that simultaneously
learns the unknown geometry, camera parameters, and a neural renderer that
approximates the light reflected from the surface towards the camera. The
geometry is represented as a zero level-set of a neural network, while the
neural renderer, derived from the rendering equation, is capable of
(implicitly) modeling a wide set of lighting conditions and materials. We
trained our network on real-world 2D images of objects with different material
properties, lighting conditions, and noisy camera initializations from the DTU
MVS dataset. We found our model to produce state-of-the-art 3D surface
reconstructions with high fidelity, resolution, and detail.
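The abstract represents geometry as the zero level-set of a neural network, i.e. the surface is {p : f(p) = 0} for a learned signed distance function f. A minimal sketch of how a camera ray is intersected with such a level-set via sphere tracing is below; it substitutes a hand-written sphere SDF for the learned network (the function `sdf_sphere` and all parameters are illustrative, not from the paper):

```python
import numpy as np

def sdf_sphere(p, radius=1.0):
    # Signed distance to a sphere centered at the origin; stands in
    # for the learned SDF network whose zero level-set is the surface.
    return np.linalg.norm(p) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-5):
    # March along the ray origin + t * direction, stepping by the
    # current distance value, until the SDF is (approximately) zero.
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t  # ray parameter at the zero level-set
        t += d
    return None  # ray missed the surface

# Camera at z = -3 looking toward the origin: the ray should hit
# the unit sphere at distance 2.
t_hit = sphere_trace(np.array([0.0, 0.0, -3.0]),
                     np.array([0.0, 0.0, 1.0]),
                     sdf_sphere)
```

Sphere tracing is one standard way to locate this intersection; the renderer in the paper then shades the hit point from surface normals and view direction, which this sketch does not cover.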
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method boosts the quality of SDF-based methods by a great scale in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- SR-CurvANN: Advancing 3D Surface Reconstruction through Curvature-Aware Neural Networks [0.0]
SR-CurvANN is a novel method that incorporates neural network-based 2D inpainting to effectively reconstruct 3D surfaces.
We show that SR-CurvANN excels in the shape completion process, filling holes with a remarkable level of realism and precision.
arXiv Detail & Related papers (2024-07-25T09:36:37Z)
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- Multi-View Neural Surface Reconstruction with Structured Light [7.709526244898887]
Three-dimensional (3D) object reconstruction based on differentiable rendering (DR) is an active research topic in computer vision.
We introduce active sensing with structured light (SL) into multi-view 3D object reconstruction based on DR to learn the unknown geometry and appearance of arbitrary scenes and camera poses.
Our method realizes high reconstruction accuracy in the textureless region and reduces efforts for camera pose calibration.
arXiv Detail & Related papers (2022-11-22T03:10:46Z)
- NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos [82.74918564737591]
We present a method for learning 3D geometry and physics parameters of a dynamic scene from only a monocular RGB video input.
Experiments show that our method achieves superior mesh and video reconstruction of dynamic scenes compared to competing Neural Field approaches.
arXiv Detail & Related papers (2022-10-22T04:57:55Z)
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.