Appearance Editing with Free-viewpoint Neural Rendering
- URL: http://arxiv.org/abs/2110.07674v1
- Date: Thu, 14 Oct 2021 19:14:05 GMT
- Title: Appearance Editing with Free-viewpoint Neural Rendering
- Authors: Pulkit Gera, Aakash KT, Dhawal Sirikonda, Parikshit Sakurikar, P.J. Narayanan
- Abstract summary: We present a framework for simultaneous view synthesis and appearance editing of a scene from multi-view images.
Our approach explicitly disentangles the appearance and learns a lighting representation that is independent of it.
We show results of editing the appearance of a real scene, demonstrating that our approach produces plausible appearance editing.
- Score: 6.3417651529192005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a neural rendering framework for simultaneous view synthesis and
appearance editing of a scene from multi-view images captured under known
environment illumination. Existing approaches either achieve view synthesis
alone or view synthesis along with relighting, without direct control over the
scene's appearance. Our approach explicitly disentangles the appearance and
learns a lighting representation that is independent of it. Specifically, we
independently estimate the BRDF and use it to learn a lighting-only
representation of the scene. Such disentanglement allows our approach to
generalize to arbitrary changes in appearance while performing view synthesis.
We show results of editing the appearance of a real scene, demonstrating that
our approach produces plausible appearance editing. The performance of our view
synthesis approach is demonstrated to be on par with state-of-the-art
approaches on both real and synthetic data.
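To make the disentanglement concrete, the following is a minimal sketch of factoring an observed image into a lighting-only term and an editable appearance term, assuming a purely Lambertian (albedo-times-irradiance) model rather than the learned BRDF and neural lighting representation described in the paper; the names (`factor_lighting`, `edit_appearance`, the toy scene) are illustrative assumptions, not the authors' pipeline. Swapping in a new albedo while holding the lighting term fixed plays the role of the appearance edits shown in the paper.
```python
import numpy as np

def factor_lighting(observed_rgb, albedo):
    """Recover a lighting-only (irradiance) image by dividing out a Lambertian
    albedo estimate; a hypothetical stand-in for the learned lighting representation."""
    return observed_rgb / np.clip(albedo, 1e-3, None)

def edit_appearance(lighting, edited_albedo):
    """Re-render with a new albedo while keeping the lighting term fixed."""
    return np.clip(lighting * edited_albedo, 0.0, 1.0)

# Toy 4x4 "scene": a red diffuse surface under spatially varying illumination.
h, w = 4, 4
true_albedo = np.tile(np.array([0.8, 0.2, 0.2]), (h, w, 1))
irradiance = np.linspace(0.2, 1.0, h * w).reshape(h, w, 1)
observed = true_albedo * irradiance

lighting = factor_lighting(observed, true_albedo)       # lighting-only term
green = np.tile(np.array([0.2, 0.8, 0.2]), (h, w, 1))   # appearance edit: recolor
edited = edit_appearance(lighting, green)
print(edited[0, 0], edited[-1, -1])  # shading gradient preserved, base color changed
```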
Related papers
- Adjustable Visual Appearance for Generalizable Novel View Synthesis [12.901033240320725]
We present a generalizable novel view synthesis method.
It enables modifying the visual appearance of an observed scene so rendered views match a target weather or lighting condition.
Our method is based on a pretrained generalizable transformer architecture and is fine-tuned on synthetically generated scenes.
arXiv Detail & Related papers (2023-06-02T08:17:04Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be supervised solely on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
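For background on the term "precomputed radiance transfer" in the entry above: in the classical, non-neural formulation, each surface point stores a transfer vector, and relighting reduces to a dot product with the environment lighting expressed in a basis such as spherical harmonics. The sketch below shows only that textbook formulation on random toy data, not the paper's learned transfer function.
```python
import numpy as np

def relight(transfer, env_coeffs):
    """Classical PRT: outgoing radiance is the dot product of a precomputed
    per-point transfer vector with the lighting coefficients (e.g. spherical
    harmonics of the environment map)."""
    # transfer: (num_points, num_basis); env_coeffs: (num_basis, 3) -> (num_points, 3)
    return np.clip(transfer @ env_coeffs, 0.0, None)

num_points, num_basis = 5, 9                 # 9 coefficients = first three SH bands
rng = np.random.default_rng(0)
transfer = rng.uniform(0.0, 0.1, (num_points, num_basis))   # precomputed once per scene
env_a = rng.uniform(0.0, 1.0, (num_basis, 3))               # lighting condition A
env_b = rng.uniform(0.0, 1.0, (num_basis, 3))               # lighting condition B

# The same transfer data relights the points under arbitrary environment lighting.
print(relight(transfer, env_a)[0], relight(transfer, env_b)[0])
```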
- Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video [76.19076002661157]
Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
arXiv Detail & Related papers (2020-12-22T18:46:12Z)
- Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)
- Semantic View Synthesis [56.47999473206778]
We tackle the new problem of semantic view synthesis: generating free-viewpoint renderings of a synthesized scene using a semantic label map as input.
First, we focus on synthesizing the color and depth of the visible surface of the 3D scene.
We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process.
arXiv Detail & Related papers (2020-08-24T17:59:46Z)
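For background on the multiple-plane image (MPI) representation used in the entry above: an MPI is a stack of fronto-parallel RGBA planes at fixed depths, rendered by back-to-front alpha compositing. The sketch below shows only that compositing step on made-up layers; it is not the paper's MPI prediction network.
```python
import numpy as np

def composite_mpi(rgba_layers):
    """Render an MPI by alpha-compositing RGBA planes from back (far) to front (near).
    rgba_layers: (num_planes, H, W, 4)."""
    out = np.zeros(rgba_layers.shape[1:3] + (3,))
    for layer in rgba_layers:
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # standard "over" operator
    return out

# Two toy 2x2 planes: an opaque dark far plane and a semi-transparent bright near plane.
far = np.concatenate([np.full((2, 2, 3), 0.2), np.ones((2, 2, 1))], axis=-1)
near = np.concatenate([np.full((2, 2, 3), 0.9), np.full((2, 2, 1), 0.5)], axis=-1)
print(composite_mpi(np.stack([far, near])))  # near plane blended over the far plane
```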
- Neural Light Transport for Relighting and View Synthesis [70.39907425114302]
Light transport (LT) of a scene describes how it appears under different lighting and viewing directions.
We propose a semi-parametric approach to learn a neural representation of LT embedded in a texture atlas of known geometric properties.
We show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition.
arXiv Detail & Related papers (2020-08-09T20:13:15Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
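The differentiable volume rendering at the heart of NeRF can be written as a short numerical quadrature: densities along a ray become per-sample opacities, accumulated transmittance weights each sample's color, and the weighted colors are summed. The sketch below substitutes a hand-written density and color field for the paper's MLP, so it illustrates only the rendering step, not the full method.
```python
import numpy as np

def volume_render(rgb, sigma, t_vals):
    """Quadrature of the volume rendering integral used by NeRF for one ray.
    rgb: (N, 3) sample colors, sigma: (N,) densities, t_vals: (N,) sample depths."""
    deltas = np.diff(t_vals, append=1e10)                  # distances between samples
    alpha = 1.0 - np.exp(-sigma * deltas)                  # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T_i
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)            # expected color along the ray

# Toy field: a reddish density "blob" centered at depth 2.0 along the ray.
t_vals = np.linspace(0.0, 4.0, 64)
sigma = 5.0 * np.exp(-((t_vals - 2.0) ** 2) / 0.1)
rgb = np.tile(np.array([0.9, 0.3, 0.2]), (64, 1))
print(volume_render(rgb, sigma, t_vals))  # ray color dominated by the blob
```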