Adjustable Visual Appearance for Generalizable Novel View Synthesis
- URL: http://arxiv.org/abs/2306.01344v3
- Date: Fri, 26 Jan 2024 16:19:31 GMT
- Title: Adjustable Visual Appearance for Generalizable Novel View Synthesis
- Authors: Josef Bengtson, David Nilsson, Che-Tsung Lin, Marcel Büsching and Fredrik Kahl
- Abstract summary: We present a generalizable novel view synthesis method.
It enables modifying the visual appearance of an observed scene so rendered views match a target weather or lighting condition.
Our method is based on a pretrained generalizable transformer architecture and is fine-tuned on synthetically generated scenes.
- Score: 12.901033240320725
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a generalizable novel view synthesis method which enables
modifying the visual appearance of an observed scene so rendered views match a
target weather or lighting condition without any scene specific training or
access to reference views at the target condition. Our method is based on a
pretrained generalizable transformer architecture and is fine-tuned on
synthetically generated scenes under different appearance conditions. This
allows for rendering novel views in a consistent manner for 3D scenes that were
not included in the training set, along with the ability to (i) modify their
appearance to match the target condition and (ii) smoothly interpolate between
different conditions. Experiments on real and synthetic scenes, including
qualitative and quantitative comparisons, show that our method generates 3D
consistent renderings while making realistic appearance changes. Please
refer to our project page for video results: https://ava-nvs.github.io/
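The smooth interpolation between conditions can be pictured as blending two learned appearance embeddings before they condition the renderer. Below is a minimal sketch of that idea; the embedding table, the tiny MLP standing in for the pretrained generalizable transformer, and all dimensions are illustrative assumptions, not the paper's actual interface.

```python
import torch
import torch.nn as nn

# Hypothetical appearance-conditioned renderer: the real method uses a
# pretrained generalizable transformer; a tiny MLP stands in for it here.
class ConditionedRenderer(nn.Module):
    def __init__(self, feat_dim=64, cond_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + cond_dim, 128), nn.ReLU(),
            nn.Linear(128, 3),  # RGB per query ray
        )

    def forward(self, scene_features, cond):
        cond = cond.expand(scene_features.shape[0], -1)
        return torch.sigmoid(self.mlp(torch.cat([scene_features, cond], dim=-1)))

# One learned embedding per appearance condition (e.g. "sunny", "foggy", "night").
conditions = nn.Embedding(num_embeddings=3, embedding_dim=16)
renderer = ConditionedRenderer()

scene_features = torch.randn(1024, 64)   # features for 1024 query rays
sunny = conditions(torch.tensor(0))
foggy = conditions(torch.tensor(1))

# Smoothly interpolate between two target conditions.
for alpha in torch.linspace(0.0, 1.0, steps=5):
    cond = (1 - alpha) * sunny + alpha * foggy
    rgb = renderer(scene_features, cond.unsqueeze(0))
    print(alpha.item(), rgb.shape)  # torch.Size([1024, 3])
```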
Related papers
- CLIP-Layout: Style-Consistent Indoor Scene Synthesis with Semantic Furniture Embedding [17.053844262654223]
Indoor scene synthesis involves automatically picking and placing furniture appropriately on a floor plan.
This paper introduces an auto-regressive scene model which can output instance-level predictions.
Our model achieves SOTA results in scene synthesis and improves auto-completion metrics by over 50%.
arXiv Detail & Related papers (2023-03-07T00:26:02Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
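In classic precomputed radiance transfer, to which this summary alludes, outgoing radiance is linear in the lighting: a per-point transfer vector is dotted with the lighting coefficients (e.g. spherical-harmonic coefficients of an environment map). The sketch below shows only that linear relationship; the array shapes and 9-coefficient SH basis are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Radiance transfer: L_out(x, w_o) = sum_i T_i(x, w_o) * l_i,
# where l are lighting coefficients and T is a (possibly learned) transfer vector.
num_points = 4096   # surface points / ray samples
num_sh = 9          # 3 spherical-harmonic bands (assumed)

transfer = np.random.rand(num_points, num_sh, 3)   # per-point, per-channel transfer
lighting_a = np.random.rand(num_sh)                # SH coefficients of lighting A
lighting_b = np.random.rand(num_sh)                # SH coefficients of lighting B

def relight(transfer, lighting):
    # Linearity in lighting makes relighting a simple contraction over the SH axis.
    return np.einsum('psc,s->pc', transfer, lighting)

radiance_a = relight(transfer, lighting_a)   # (num_points, 3) RGB under lighting A
radiance_b = relight(transfer, lighting_b)   # same scene under a new lighting
print(radiance_a.shape, radiance_b.shape)
```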
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
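The coupling of a learnt scene-specific feature volume with a scene-agnostic rendering network can be pictured as trilinear sampling of a 3D feature grid followed by a shared decoding MLP; mixing scenes then amounts to choosing which volume each query point samples from. A simplified sketch follows; the grid resolution, feature size and the half-space mixing rule are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, res = 16, 32
volume_a = torch.randn(1, feat_dim, res, res, res)   # scene-specific feature volume A
volume_b = torch.randn(1, feat_dim, res, res, res)   # scene-specific feature volume B

# Scene-agnostic rendering head shared across all scenes.
render_head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 4))  # RGB + density

def sample_volume(volume, points):
    # points: (N, 3) in [-1, 1]; grid_sample expects a (1, N, 1, 1, 3) sampling grid.
    grid = points.view(1, -1, 1, 1, 3)
    feats = F.grid_sample(volume, grid, align_corners=True)   # (1, C, N, 1, 1)
    return feats.view(feat_dim, -1).t()                       # (N, C)

points = torch.rand(2048, 3) * 2 - 1
# Toy scene mixing: take scene B's content in the half-space x > 0.
mask = (points[:, 0] > 0).unsqueeze(-1)
feats = torch.where(mask, sample_volume(volume_b, points), sample_volume(volume_a, points))
rgb_sigma = render_head(feats)   # (2048, 4), decoded by the same network for any scene
print(rgb_sigma.shape)
```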
arXiv Detail & Related papers (2022-04-22T17:57:00Z)
- Towards 3D Scene Understanding by Referring Synthetic Models [65.74211112607315]
Existing methods typically rely on labour-extensive annotations of real scene scans.
We explore how labelled synthetic models can supervise recognition on real scans by aligning real scene categories and synthetic features in a unified feature space.
Experiments show that our method achieves an average mAP of 46.08% on ScanNet and 55.49% on S3DIS.
arXiv Detail & Related papers (2022-03-20T13:06:15Z)
- Scene Representation Transformer: Geometry-Free Novel View Synthesis Through Set-Latent Scene Representations [48.05445941939446]
A classical problem in computer vision is to infer a 3D scene representation from few images that can be used to render novel views at interactive rates.
We propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new area.
We show that this method outperforms recent baselines in terms of PSNR and speed on synthetic datasets.
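The set-latent idea can be pictured as a transformer encoder that turns image patch features into an unordered set of latent tokens, with novel views decoded one ray at a time by cross-attending from a ray query into that set. Below is a toy sketch; the token counts, dimensions and the omitted patchify step are assumptions, not SRT's actual architecture.

```python
import torch
import torch.nn as nn

d_model = 128

# Encoder: patch embeddings from a few posed/unposed input views -> latent set.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2,
)
# Decoder: each ray query cross-attends into the set-latent scene representation.
cross_attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=8, batch_first=True)
to_rgb = nn.Linear(d_model, 3)

patch_tokens = torch.randn(1, 3 * 64, d_model)   # 3 input views x 64 patches each (toy numbers)
scene_set = encoder(patch_tokens)                # set-latent scene representation

ray_queries = torch.randn(1, 512, d_model)       # 512 embedded camera rays to render
attended, _ = cross_attn(ray_queries, scene_set, scene_set)
colors = torch.sigmoid(to_rgb(attended))         # (1, 512, 3), one RGB value per ray
print(colors.shape)
```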
arXiv Detail & Related papers (2021-11-25T16:18:56Z)
- Appearance Editing with Free-viewpoint Neural Rendering [6.3417651529192005]
We present a framework for simultaneous view synthesis and appearance editing of a scene from multi-view images.
Our approach explicitly disentangles the appearance and learns a lighting representation that is independent of it.
We show results of editing the appearance of a real scene, demonstrating that our approach produces plausible appearance editing.
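The disentanglement described here can be pictured as factoring the rendered color into an appearance (albedo-like) term and an appearance-independent lighting term produced by separate branches, so either one can be swapped at edit time. The toy factorization below, including the multiplicative split and the network shapes, is an assumption rather than the paper's actual model.

```python
import torch
import torch.nn as nn

feat_dim = 48

# Two independent branches: appearance (editable) and lighting (kept fixed, or vice versa).
appearance_net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 3))
lighting_net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def shade(features):
    albedo = torch.sigmoid(appearance_net(features))   # per-point base color
    shading = torch.sigmoid(lighting_net(features))    # scalar shading, appearance-independent
    return albedo * shading                            # final color = appearance x lighting

features = torch.randn(1024, feat_dim)
print(shade(features).shape)   # torch.Size([1024, 3])

# Editing: swap appearance_net (e.g. for one fit to a new material) while keeping
# lighting_net, so the lighting stays consistent across the edit.
```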
arXiv Detail & Related papers (2021-10-14T19:14:05Z)
- Stylizing 3D Scene via Implicit Representation and HyperNetwork [34.22448260525455]
A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches.
Inspired by the high quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style.
Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork to transfer the style information into the scene representation.
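The hypernetwork component can be sketched as a small MLP that maps a style embedding to the weights of part of the radiance field's color branch, so changing the style vector changes how the scene is shaded without retraining the scene representation. In the toy version below, the layer sizes and the choice of which weights are generated are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

in_dim, hidden, out_dim, style_dim = 32, 64, 3, 16

# Hypernetwork: style embedding -> weights and bias of one color layer.
hyper = nn.Sequential(
    nn.Linear(style_dim, 128), nn.ReLU(),
    nn.Linear(128, hidden * out_dim + out_dim),
)

fixed_layer = nn.Linear(in_dim, hidden)        # scene-dependent part of the radiance field MLP

def stylized_color(features, style):
    """Color branch whose last layer's weights are generated from the style code."""
    params = hyper(style)
    w = params[: hidden * out_dim].view(out_dim, hidden)
    b = params[hidden * out_dim:]
    h = F.relu(fixed_layer(features))          # scene content stays fixed
    return torch.sigmoid(F.linear(h, w, b))    # style-dependent RGB

features = torch.randn(1024, in_dim)           # per-sample scene features
style_a, style_b = torch.randn(style_dim), torch.randn(style_dim)
print(stylized_color(features, style_a).shape, stylized_color(features, style_b).shape)
```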
arXiv Detail & Related papers (2021-05-27T09:11:30Z)
- IBRNet: Learning Multi-View Image-Based Rendering [67.15887251196894]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
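Image-based rendering of this kind can be pictured as projecting each sample point into a handful of nearby source views, fetching their colors and features, and blending them with learned weights. The stripped-down sketch below shows only the blending step; the projection is omitted and the weight network is a stand-in, not IBRNet's actual module.

```python
import torch
import torch.nn as nn

num_views, num_samples, feat_dim = 8, 2048, 32

# Per-sample colors/features gathered from nearby source views (projection omitted here).
source_colors = torch.rand(num_samples, num_views, 3)
source_feats = torch.randn(num_samples, num_views, feat_dim)

# A small network scores how much each source view should contribute to each sample.
weight_net = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

scores = weight_net(source_feats).squeeze(-1)             # (num_samples, num_views)
weights = torch.softmax(scores, dim=-1)                   # blend weights per sample
blended = (weights.unsqueeze(-1) * source_colors).sum(1)  # (num_samples, 3)
print(blended.shape)
```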
arXiv Detail & Related papers (2021-02-25T18:56:21Z)
- Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video [76.19076002661157]
Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
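A non-rigid dynamic scene of this kind is commonly handled by a per-frame deformation field that bends each sample point back into a shared canonical volume before a static radiance field is queried. The bare-bones sketch below illustrates that warping step; the networks and time conditioning are placeholders, not NR-NeRF's exact architecture.

```python
import torch
import torch.nn as nn

# Deformation field: (point, time) -> offset into the canonical frame.
deform = nn.Sequential(nn.Linear(3 + 1, 64), nn.ReLU(), nn.Linear(64, 3))
# Canonical radiance field: static scene queried at the warped points.
canonical = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))  # RGB + density

points = torch.rand(4096, 3)                       # ray samples at render time
t = torch.full((4096, 1), 0.25)                    # normalized timestamp of this frame

offsets = deform(torch.cat([points, t], dim=-1))   # per-point non-rigid motion
rgb_sigma = canonical(points + offsets)            # query the static canonical scene
print(rgb_sigma.shape)                             # torch.Size([4096, 4])
```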
arXiv Detail & Related papers (2020-12-22T18:46:12Z)