Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D
Light Field
- URL: http://arxiv.org/abs/2310.14642v1
- Date: Mon, 23 Oct 2023 07:29:51 GMT
- Title: Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D
Light Field
- Authors: Zhong Li, Liangchen Song, Zhang Chen, Xiangyu Du, Lele Chen, Junsong
Yuan, Yi Xu
- Abstract summary: We propose an analysis-synthesis approach called Relit-NeuLF.
We first parameterize each ray in a 4D coordinate system, enabling efficient learning and inference.
Comprehensive experiments demonstrate that the proposed method is efficient and effective on both synthetic data and real-world human face data.
- Score: 69.90548694719683
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we address the problem of simultaneous relighting and novel
view synthesis of a complex scene from multi-view images with a limited number
of light sources. We propose an analysis-synthesis approach called Relit-NeuLF.
Following the recent neural 4D light field network (NeuLF), Relit-NeuLF first
leverages a two-plane light field representation to parameterize each ray in a
4D coordinate system, enabling efficient learning and inference. Then, we
recover the spatially-varying bidirectional reflectance distribution function
(SVBRDF) of a 3D scene in a self-supervised manner. A DecomposeNet learns to
map each ray to its SVBRDF components: albedo, normal, and roughness. Based on
the decomposed BRDF components and conditioning light directions, a RenderNet
learns to synthesize the color of the ray. To self-supervise the SVBRDF
decomposition, we encourage the predicted ray color to be close to the
physically-based rendering result using the microfacet model. Comprehensive
experiments demonstrate that the proposed method is efficient and effective on
both synthetic data and real-world human face data, and outperforms
state-of-the-art methods. Our code is publicly released on GitHub:
https://github.com/oppo-us-research/RelitNeuLF
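The abstract describes the pipeline only in prose, so the following is a minimal, hedged sketch of how the pieces could fit together: the two-plane parameterization that turns a ray into a 4D coordinate, a DecomposeNet that maps that coordinate to albedo, normal, and roughness, and a RenderNet that predicts the ray color conditioned on a light direction. This is not the authors' released code (see the GitHub link above); the plane placement at z = -1 and z = 1, the MLP widths and depths, and the activation choices are illustrative assumptions.

```python
# Sketch only: assumed architecture details, not the official RelitNeuLF code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def two_plane_coords(origins, dirs, z_uv=-1.0, z_st=1.0):
    """Intersect each ray with the planes z = z_uv and z = z_st,
    giving the 4D light-field coordinate (u, v, s, t)."""
    t_uv = (z_uv - origins[:, 2]) / dirs[:, 2]
    t_st = (z_st - origins[:, 2]) / dirs[:, 2]
    uv = origins[:, :2] + t_uv[:, None] * dirs[:, :2]
    st = origins[:, :2] + t_st[:, None] * dirs[:, :2]
    return torch.cat([uv, st], dim=-1)  # (N, 4)


def mlp(d_in, d_out, width=256, depth=6):
    layers, d = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, d_out))
    return nn.Sequential(*layers)


class RelitNeuLFSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # DecomposeNet: 4D ray coordinate -> albedo (3) + normal (3) + roughness (1)
        self.decompose_net = mlp(4, 7)
        # RenderNet: SVBRDF components + light direction -> RGB
        self.render_net = mlp(7 + 3, 3)

    def forward(self, origins, dirs, light_dir):
        # origins, dirs, light_dir: (N, 3) tensors
        ray4d = two_plane_coords(origins, dirs)
        out = self.decompose_net(ray4d)
        albedo = torch.sigmoid(out[:, :3])
        normal = F.normalize(out[:, 3:6], dim=-1)
        roughness = torch.sigmoid(out[:, 6:7])
        brdf = torch.cat([albedo, normal, roughness], dim=-1)
        rgb = torch.sigmoid(self.render_net(torch.cat([brdf, light_dir], dim=-1)))
        return rgb, albedo, normal, roughness
```

Per the abstract, the SVBRDF decomposition is self-supervised by also rendering each ray with a physically based microfacet model from the same albedo, normal, and roughness; in this sketch that would correspond to an additional consistency loss (e.g. an L2 term) between the RenderNet output and that microfacet rendering, alongside the photometric loss against the captured images.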
Related papers
- Free3D: Consistent Novel View Synthesis without 3D Representation [63.931920010054064]
Free3D is a simple, accurate method for monocular open-set novel view synthesis (NVS).
Compared to other works that took a similar approach, we obtain significant improvements without resorting to an explicit 3D representation.
arXiv Detail & Related papers (2023-12-07T18:59:18Z)
- Re-Nerfing: Improving Novel View Synthesis through Novel View Synthesis [80.3686833921072]
Recent neural rendering and reconstruction techniques, such as NeRFs or Gaussian Splatting, have shown remarkable novel view synthesis capabilities.
With fewer images available, these methods start to fail since they can no longer correctly triangulate the underlying 3D geometry.
We propose Re-Nerfing, a simple and general add-on approach that leverages novel view synthesis itself to tackle this problem.
arXiv Detail & Related papers (2023-12-04T18:56:08Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Cascaded and Generalizable Neural Radiance Fields for Fast View Synthesis [35.035125537722514]
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- Learning Generalizable Light Field Networks from Few Images [7.672380267651058]
We present a new strategy for few-shot novel view synthesis based on a neural light field representation.
We show that our method achieves competitive performance on synthetic and real MVS data with respect to state-of-the-art neural radiance field based methods.
arXiv Detail & Related papers (2022-07-24T14:47:11Z)
- Generalizable Patch-Based Neural Rendering [46.41746536545268]
We propose a new paradigm for learning models that can synthesize novel views of unseen scenes.
Our method is capable of predicting the color of a target ray in a novel scene directly, just from a collection of patches sampled from the scene.
We show that our approach outperforms the state-of-the-art on novel view synthesis of unseen scenes even when being trained with considerably less data than prior work.
arXiv Detail & Related papers (2022-07-21T17:57:04Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering [60.02806355570514]
Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to the hundreds of evaluations per ray required by ray-marching or volumetric methods.
arXiv Detail & Related papers (2021-06-04T17:54:49Z)
- NeLF: Practical Novel View Synthesis with Neural Light Field [93.41020940730915]
We present a practical and robust deep learning solution for the novel view synthesis of complex scenes.
In our approach, a continuous scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color.
Our method achieves state-of-the-art novel view synthesis results while maintaining an interactive frame rate.
arXiv Detail & Related papers (2021-05-15T01:20:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.