PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo
- URL: http://arxiv.org/abs/2207.11406v1
- Date: Sat, 23 Jul 2022 03:55:18 GMT
- Title: PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo
- Authors: Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong
- Abstract summary: We present a neural inverse rendering method for MVPS based on implicit representation.
Our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods.
- Score: 22.42916940712357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional multi-view photometric stereo (MVPS) methods are often composed
of multiple disjoint stages, resulting in noticeable accumulated errors. In
this paper, we present a neural inverse rendering method for MVPS based on
implicit representation. Given multi-view images of a non-Lambertian object
illuminated by multiple unknown directional lights, our method jointly
estimates the geometry, materials, and lights. Our method first employs
multi-light images to estimate per-view surface normal maps, which are used to
regularize the normals derived from the neural radiance field. It then jointly
optimizes the surface normals, spatially-varying BRDFs, and lights based on a
shadow-aware differentiable rendering layer. After optimization, the
reconstructed object can be used for novel-view rendering, relighting, and
material editing. Experiments on both synthetic and real datasets demonstrate
that our method achieves far more accurate shape reconstruction than existing
MVPS and neural rendering methods. Our code and model can be found at
https://ywq.github.io/psnerf.
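As a rough illustration of the joint optimization described above, the sketch below (PyTorch) combines a photometric rendering loss with a normal-regularization term that pulls radiance-field normals toward the per-view photometric stereo normals. All names, shapes, and the weighting are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def joint_loss(pred_rgb, gt_rgb, nerf_normals, ps_normals, lambda_n=0.1):
    """Photometric loss plus a normal-regularization term (hypothetical
    names and weight; a sketch of the abstract, not the released code).

    pred_rgb:     (N, 3) colors from a shadow-aware differentiable
                  rendering layer (normals + SVBRDF + lights -> RGB).
    gt_rgb:       (N, 3) observed pixel colors under a given light.
    nerf_normals: (N, 3) normals derived from the neural radiance field.
    ps_normals:   (N, 3) per-view guidance normals from the first stage.
    """
    photo = F.mse_loss(pred_rgb, gt_rgb)
    n1 = F.normalize(nerf_normals, dim=-1)
    n2 = F.normalize(ps_normals, dim=-1)
    # Penalize angular deviation from the photometric stereo normals.
    normal_reg = (1.0 - (n1 * n2).sum(dim=-1)).mean()
    return photo + lambda_n * normal_reg
```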
Related papers
- NPLMV-PS: Neural Point-Light Multi-View Photometric Stereo [32.39157133181186]
We present a novel multi-view photometric stereo (MVPS) method.
Our work differs from state-of-the-art multi-view methods such as PS-NeRF and Supernormal in that we explicitly leverage per-pixel intensity renderings.
Our method is among the first (along with Supernormal) to outperform the classical MVPS approach proposed in the DiLiGenT-MV benchmark.
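For background on what a per-pixel point-light intensity rendering involves: the classical near point-light image formation model combines Lambertian shading with inverse-square distance falloff. The sketch below is a generic version of that textbook model, not NPLMV-PS's exact formulation.

```python
import numpy as np

def point_light_intensity(x, n, albedo, light_pos, light_power):
    """Lambertian per-pixel intensity under a near point light
    (a standard background model, assumed here for illustration).

    x, n:        surface point and unit normal, shape (3,).
    light_pos:   point-light position, shape (3,).
    light_power: scalar radiant intensity of the light.
    """
    to_light = light_pos - x                    # vector from point to light
    d2 = float(np.dot(to_light, to_light))      # squared distance
    l = to_light / np.sqrt(d2)                  # unit light direction
    shading = max(float(np.dot(n, l)), 0.0)     # Lambertian cosine term
    return albedo * shading * light_power / d2  # inverse-square falloff
```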
arXiv Detail & Related papers (2024-05-20T14:26:07Z)
- NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering [23.482941494283978]
This paper presents a method, namely NeuS-PIR, for recovering relightable neural surfaces from multi-view images or video.
Unlike methods based on NeRF or discrete meshes, our method utilizes an implicit neural surface representation to reconstruct high-quality geometry.
Our method enables advanced applications such as relighting, which can be seamlessly integrated with modern graphics engines.
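The summary does not spell out the pre-integrated rendering formulation, but the widely used split-sum approximation gives the flavor: the specular reflection integral is factored into a roughness-indexed prefiltered environment lookup and a pre-integrated BRDF lookup. The sketch below is a generic illustration of that idea; the lookup functions are assumed precomputed offline and all names are hypothetical.

```python
import numpy as np

def split_sum_specular(prefiltered_env, brdf_lut, refl_dir, n_dot_v, roughness):
    """Split-sum pre-integrated specular shading (background sketch).

    prefiltered_env(direction, roughness) -> RGB: environment radiance
    pre-convolved with the specular lobe at the given roughness.
    brdf_lut(n_dot_v, roughness) -> (scale, bias): pre-integrated BRDF.
    """
    env = prefiltered_env(refl_dir, roughness)   # pre-integrated lighting
    scale, bias = brdf_lut(n_dot_v, roughness)   # pre-integrated BRDF
    f0 = 0.04                                    # dielectric base reflectance
    return env * (f0 * scale + bias)

# Toy usage with constant lookups (illustrative only):
rgb = split_sum_specular(
    prefiltered_env=lambda d, r: np.array([1.0, 1.0, 1.0]),
    brdf_lut=lambda nv, r: (1.0, 0.0),
    refl_dir=np.array([0.0, 0.0, 1.0]), n_dot_v=1.0, roughness=0.5)
```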
arXiv Detail & Related papers (2023-06-13T09:02:57Z)
- NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination [48.42173911185454]
Inverse rendering methods aim to estimate geometry, materials and illumination from multi-view RGB images.
We propose an end-to-end inverse rendering pipeline that decomposes materials and illumination from multi-view images.
arXiv Detail & Related papers (2023-03-29T12:05:19Z)
- Physics-based Indirect Illumination for Inverse Rendering [70.27534648770057]
We present a physics-based inverse rendering method that learns the illumination, geometry, and materials of a scene from posed multi-view RGB images.
As a side product, our physics-based inverse rendering model also facilitates flexible and realistic material editing as well as relighting.
arXiv Detail & Related papers (2022-12-09T07:33:49Z)
- Multi-View Mesh Reconstruction with Neural Deferred Shading [0.8514420632209809]
State-of-the-art methods use both neural surface representations and neural shading.
We represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rendering and neural shading.
We evaluate our method on a public 3D reconstruction dataset and show that it can match the reconstruction accuracy of traditional baselines while surpassing them in optimization runtime.
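A minimal sketch of the neural-shading half of such a pipeline: a small MLP maps rasterized per-pixel attributes (position, normal, view direction) to color. The differentiable triangle rasterization that produces these inputs is omitted, and the architecture below is an assumption, not the paper's exact network.

```python
import torch
import torch.nn as nn

class NeuralShader(nn.Module):
    """Minimal neural deferred shader: maps per-pixel G-buffer
    attributes to RGB. The mesh rasterization step is omitted."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, position, normal, view_dir):
        # Each input is (num_pixels, 3); concatenate into one feature.
        return self.mlp(torch.cat([position, normal, view_dir], dim=-1))
```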
arXiv Detail & Related papers (2022-12-08T16:29:46Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo (MVPS) problem.
We obtain the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
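The PS image formation model referred to here is, in its classical Lambertian form, I_j = albedo * max(n . l_j, 0) per directional light l_j, and with three or more non-coplanar lights the normal can be recovered by least squares. The sketch below illustrates that textbook model; it is background, not the paper's full pipeline.

```python
import numpy as np

def render_lambertian(n, albedo, L):
    """Classical directional-light PS image formation (Lambertian):
    I_j = albedo * max(n . l_j, 0) for each row l_j of L (shape (m, 3))."""
    return albedo * np.clip(L @ n, 0.0, None)

def solve_normal(I, L):
    """Least-squares PS: with >= 3 non-coplanar lights and no shadows,
    recover b = albedo * n from I = L b, then split off the albedo."""
    b, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(b)
    return b / albedo, albedo
```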
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric rendering.
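To make the single-evaluation claim concrete: a light field network maps a ray, commonly encoded as 6D Plücker coordinates, directly to color with one MLP forward pass. The sketch below is a generic illustration; layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def plucker(origin, direction):
    """6D Plücker ray coordinates (d, o x d), a common ray
    parameterization for light field networks; inputs are (..., 3)."""
    d = F.normalize(direction, dim=-1)
    return torch.cat([d, torch.cross(origin, d, dim=-1)], dim=-1)

class LightFieldNetwork(nn.Module):
    """Maps a ray directly to RGB in a single forward pass, versus the
    hundreds of point samples per ray used in volumetric ray marching."""
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, origin, direction):
        return self.mlp(plucker(origin, direction))  # one evaluation per ray
```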
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.