NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in
the Wild
- URL: http://arxiv.org/abs/2110.07604v3
- Date: Mon, 18 Oct 2021 04:03:39 GMT
- Title: NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in
the Wild
- Authors: Jason Y. Zhang, Gengshan Yang, Shubham Tulsiani, Deva Ramanan
- Abstract summary: We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
- Score: 80.09093712055682
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent history has seen a tremendous growth of work exploring implicit
representations of geometry and radiance, popularized through Neural Radiance
Fields (NeRF). Such works are fundamentally based on an (implicit) volumetric
representation of occupancy, allowing them to model diverse scene structure
including translucent objects and atmospheric obscurants. But because the vast
majority of real-world scenes are composed of well-defined surfaces, we
introduce a surface analog of such implicit models called Neural Reflectance
Surfaces (NeRS). NeRS learns a neural shape representation of a closed surface
that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
Even more importantly, surface parameterizations allow NeRS to learn (neural)
bidirectional surface reflectance functions (BRDFs) that factorize
view-dependent appearance into environmental illumination, diffuse color
(albedo), and specular "shininess." Finally, rather than illustrating our
results on synthetic scenes or controlled in-the-lab capture, we assemble a
novel dataset of multi-view images from online marketplaces for selling goods.
Such "in-the-wild" multi-view image sets pose a number of challenges, including
a small number of views with unknown/rough camera estimates. We demonstrate
that surface-based neural reconstructions enable learning from such data,
outperforming volumetric neural rendering-based reconstructions. We hope that
NeRS serves as a first step toward building scalable, high-quality libraries of
real-world shape, materials, and illumination. The project page with code and
video visualizations can be found at https://jasonyzhang.com/ners.
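To make the two ideas in the abstract concrete, here is a minimal PyTorch-style sketch (not the authors' implementation, which is linked from the project page): a shape MLP displaces points on the unit sphere into a closed surface, and appearance is factored into a per-point diffuse albedo plus a specular term. The module names, network sizes, and the single-light Phong-style specular model are illustrative assumptions standing in for NeRS's learned environment illumination and neural specular term.

```python
# Illustrative sketch only. Assumptions: PyTorch, small MLPs, a single
# directional light, and a Phong-style specular lobe in place of the paper's
# learned environment illumination and neural specular term.
import torch
import torch.nn as nn


def mlp(d_in, d_out, width=128, depth=4):
    """Small fully-connected network used for both shape and albedo."""
    layers, d = [], d_in
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, d_out))
    return nn.Sequential(*layers)


class NeRSSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.shape = mlp(3, 3)    # sphere point -> displacement off the sphere
        self.albedo = mlp(3, 3)   # sphere point -> diffuse RGB (pre-sigmoid)
        self.shininess = nn.Parameter(torch.tensor(20.0))  # global specular exponent

    def surface(self, u):
        # u: (N, 3) points on the unit sphere. Every surface point is the image
        # of a sphere point, so the reconstruction stays closed (water-tight).
        return u + self.shape(u)

    def shade(self, u, normal, view_dir, light_dir, light_rgb):
        # View-dependent appearance factored into diffuse albedo and a
        # specular lobe (Phong approximation used here for illustration).
        albedo = torch.sigmoid(self.albedo(u))
        n_dot_l = (normal * light_dir).sum(-1, keepdim=True).clamp(min=0)
        diffuse = albedo * light_rgb * n_dot_l
        reflect = 2 * (normal * light_dir).sum(-1, keepdim=True) * normal - light_dir
        r_dot_v = (reflect * view_dir).sum(-1, keepdim=True).clamp(min=0)
        specular = light_rgb * r_dot_v ** self.shininess
        return (diffuse + specular).clamp(0.0, 1.0)
```

In the paper itself, illumination is an environment map and the specular/illumination terms are learned networks rather than a fixed Phong exponent; the sketch only shows the surface-plus-factorized-appearance structure.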
Related papers
- Neural Fields meet Explicit Geometric Representation for Inverse
Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z) - ENVIDR: Implicit Differentiable Renderer with Neural Environment
Lighting [9.145875902703345]
We introduce ENVIDR, a rendering and modeling framework for high-quality rendering and reconstruction of surfaces with challenging specular reflections.
We first propose a novel neural renderer with decomposed rendering to learn the interaction between surface and environment lighting.
We then propose an SDF-based neural surface model that leverages this learned neural renderer to represent general scenes.
arXiv Detail & Related papers (2023-03-23T04:12:07Z) - Light Field Networks: Neural Scene Representations with
Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric rendering-based methods (see the illustrative sketch after this list).
arXiv Detail & Related papers (2021-06-04T17:54:49Z) - NeRFactor: Neural Factorization of Shape and Reflectance Under an
Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z) - UNISURF: Unifying Neural Implicit Surfaces and Radiance Fields for
Multi-View Reconstruction [61.17219252031391]
We present a novel method for reconstructing surfaces from multi-view images using neural implicit 3D representations.
Our key insight is that implicit surface models and radiance fields can be formulated in a unified way, enabling both surface and volume rendering.
Our experiments demonstrate that we outperform NeRF in terms of reconstruction quality while performing on par with IDR without requiring masks.
arXiv Detail & Related papers (2021-04-20T15:59:38Z) - Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z) - Multiview Neural Surface Reconstruction by Disentangling Geometry and
Appearance [46.488713939892136]
We introduce a neural network that simultaneously learns the unknown geometry, camera parameters, and a neural architecture that approximates the light reflected from the surface towards the camera.
We trained our network on real world 2D images of objects with different material properties, lighting conditions, and noisy camera initializations from the DTU MVS dataset.
arXiv Detail & Related papers (2020-03-22T10:20:13Z)
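Referring back to the Light Field Networks entry above, here is a minimal sketch of the single-evaluation rendering idea, assuming the 6D Plücker ray parameterization used by LFNs; the small MLP and its sizes are illustrative placeholders, not the authors' architecture.

```python
# Illustrative sketch only. Assumptions: PyTorch, Plücker coordinates
# (direction, origin x direction) as the ray parameterization, placeholder MLP.
import torch
import torch.nn as nn


class LightFieldSketch(nn.Module):
    """Maps a ray, given in 6D Plücker coordinates, directly to an RGB color."""

    def __init__(self, width=256, depth=6):
        super().__init__()
        layers, d = [], 6
        for _ in range(depth - 1):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, 3))
        self.net = nn.Sequential(*layers)

    def forward(self, origin, direction):
        # One forward pass per ray: no ray marching, no per-sample volume integration.
        d = direction / direction.norm(dim=-1, keepdim=True)
        moment = torch.cross(origin, d, dim=-1)            # Plücker moment o x d
        return self.net(torch.cat([d, moment], dim=-1))    # (N, 3) RGB colors
```

This contrasts with NeRF-style volumetric rendering, where each ray is sampled at many points and the network is queried once per sample before compositing.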