UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance
Fields for 3D Scene
- URL: http://arxiv.org/abs/2208.07059v1
- Date: Mon, 15 Aug 2022 08:17:35 GMT
- Title: UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance
Fields for 3D Scene
- Authors: Yaosen Chen and Qi Yuan and Zhiqiang Li and Yuegen Liu and Wei Wang
and Chaoping Xie and Xuming Wen and Qien Yu
- Abstract summary: Photorealistic stylization of 3D scenes aims to generate photorealistic images from arbitrary novel views according to a given style image.
Some existing stylization methods with neural radiance fields can effectively predict stylized scenes.
We propose a novel 3D scene photorealistic style transfer framework to address these issues.
- Score: 2.1033122829097484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photorealistic stylization of 3D scenes aims to generate
photorealistic images from arbitrary novel views according to a given style
image, while ensuring consistency when rendering from different viewpoints.
Some existing stylization methods based on neural radiance fields can
effectively predict stylized scenes by combining the features of the style
image with multi-view images to train 3D scenes. However, these methods
generate novel-view images that contain objectionable artifacts. Moreover, they
cannot achieve universal photorealistic stylization of a 3D scene: each new
style image requires retraining the 3D scene representation network based on a
neural radiance field. We propose a novel 3D scene photorealistic style
transfer framework to address these issues. It realizes photorealistic 3D scene
style transfer from a 2D style image. We first pre-train a 2D photorealistic
style transfer network, which can perform photorealistic style transfer between
any given content image and style image. Then, we use voxel features to
optimize a 3D scene and obtain its geometric representation. Finally, we
jointly optimize a hypernetwork to realize photorealistic style transfer of the
scene for arbitrary style images. In the transfer stage, we use the pre-trained
2D photorealistic network to constrain the photorealistic style across
different views and different style images in the 3D scene. Experimental
results show that our method not only realizes 3D photorealistic style transfer
for arbitrary style images but also outperforms existing methods in terms of
visual quality and consistency. Project page: https://semchan.github.io/UPST_NeRF.
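To make the pipeline described in the abstract concrete, below is a minimal sketch, assuming a PyTorch setting, of two of its ingredients: a hypernetwork that maps a style embedding to the weights of a small color MLP applied to voxel features, and a loss that uses a frozen pre-trained 2D photorealistic style transfer network as the target for rendered stylized views. All module and variable names (StyleHyperNetwork, pst2d, etc.) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class StyleHyperNetwork(nn.Module):
    """Maps a style embedding to the weights of a small color MLP, so one
    scene representation can be re-colored for arbitrary style images."""

    def __init__(self, style_dim=256, feat_dim=32, hidden=64, out_dim=3):
        super().__init__()
        self.feat_dim, self.hidden, self.out_dim = feat_dim, hidden, out_dim
        self.n_w1, self.n_w2 = feat_dim * hidden, hidden * out_dim
        n_params = self.n_w1 + hidden + self.n_w2 + out_dim
        self.gen = nn.Sequential(
            nn.Linear(style_dim, 256), nn.ReLU(), nn.Linear(256, n_params)
        )

    def forward(self, style_emb, voxel_feat):
        # style_emb: (style_dim,) embedding of the style image
        # voxel_feat: (N, feat_dim) features sampled from the voxel grid
        p = self.gen(style_emb)
        w1, p = p[: self.n_w1].view(self.hidden, self.feat_dim), p[self.n_w1:]
        b1, p = p[: self.hidden], p[self.hidden:]
        w2, b2 = p[: self.n_w2].view(self.out_dim, self.hidden), p[self.n_w2:]
        h = torch.relu(voxel_feat @ w1.t() + b1)
        return torch.sigmoid(h @ w2.t() + b2)  # per-sample stylized RGB


def stylization_loss(rendered_view, content_view, style_img, pst2d):
    """Push the rendered stylized view toward what the frozen 2D
    photorealistic style transfer network (pst2d, a placeholder here)
    predicts for the same content view and style image."""
    with torch.no_grad():
        target = pst2d(content_view, style_img)
    return torch.mean((rendered_view - target) ** 2)


# Tiny shape check with random tensors (no real scene or style encoder).
hyper = StyleHyperNetwork()
rgb = hyper(torch.randn(256), torch.randn(1024, 32))  # (1024, 3)
```

In this reading, the geometry (density stored in the voxel grid) stays fixed across styles, and only the style-conditioned color branch changes, which is what allows a single scene model to serve arbitrary style images without retraining.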
Related papers
- PNeSM: Arbitrary 3D Scene Stylization via Prompt-Based Neural Style Mapping [16.506819625584654]
3D scene stylization refers to transforming the appearance of a 3D scene to match a given style image.
Several existing methods have obtained impressive results in stylizing 3D scenes.
We propose a novel 3D scene stylization framework to transfer an arbitrary style to an arbitrary scene.
arXiv Detail & Related papers (2024-03-13T05:08:47Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model; these colors are then transformed into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- Transforming Radiance Field with Lipschitz Network for Photorealistic 3D Scene Stylization [56.94435595045005]
LipRF is a framework for transforming the appearance representation of a pre-trained NeRF with a Lipschitz mapping.
We conduct extensive experiments to show the high quality and robust performance of LipRF on both photorealistic 3D stylization and object appearance editing.
arXiv Detail & Related papers (2023-03-23T13:05:57Z)
- NLUT: Neural-based 3D Lookup Tables for Video Photorealistic Style Transfer [5.442253227842167]
Video photorealistic style transfer aims to generate video that shares the photorealistic style of the style image while maintaining temporal consistency.
Existing methods obtain stylized video sequences by performing frame-by-frame photorealistic style transfer, which is inefficient and does not ensure the temporal consistency of the stylized video.
We first train a neural network for generating stylized 3D LUTs on a large-scale dataset; then, when performing photorealistic style transfer for a specific video, we select a frame from the video and the style image as the data source and fine-tune the neural network.
Finally, we query the 3D LUTs generated by the fine-tuned network.
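The LUT query step that this summary ends on can be illustrated with a short sketch, assuming a PyTorch setting: each pixel's RGB value is treated as a coordinate into the learned color cube, and PyTorch's grid_sample performs the trilinear interpolation. Tensor names and sizes are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def apply_3d_lut(frame, lut):
    """Apply a learned 3D color LUT to one frame via trilinear interpolation.

    frame: (3, H, W) RGB in [0, 1]; lut: (3, D, D, D) color cube indexed [r, g, b].
    """
    h, w = frame.shape[1:]
    # grid_sample expects coordinates in [-1, 1] ordered (x, y, z), which index
    # the LUT dims (b, g, r), so flip the RGB channel order before sampling.
    grid = frame.permute(1, 2, 0).flip(-1) * 2.0 - 1.0  # (H, W, 3)
    grid = grid.reshape(1, 1, h, w, 3)
    out = F.grid_sample(lut.unsqueeze(0), grid, align_corners=True)
    return out[0, :, 0]  # (3, H, W) stylized frame


# Example with a random frame and a random 17^3 LUT (a real LUT would come
# from the fine-tuned network mentioned above).
stylized = apply_3d_lut(torch.rand(3, 256, 256), torch.rand(3, 17, 17, 17))
```

Because the per-frame work reduces to a single interpolated lookup per pixel, this kind of LUT-based transfer is much cheaper than running a full style transfer network on every frame.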
arXiv Detail & Related papers (2023-03-16T09:27:40Z)
- SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections [49.802462165826554]
We present SceneDreamer, an unconditional generative model for unbounded 3D scenes.
Our framework is learned from in-the-wild 2D image collections only, without any 3D annotations.
arXiv Detail & Related papers (2023-02-02T18:59:16Z)
- TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition [39.312567993736025]
We propose TANGO, which transfers the appearance style of a given 3D shape according to a text prompt in a photorealistic manner.
We show that TANGO outperforms existing methods of text-driven 3D style transfer in terms of photorealistic quality, consistency of 3D geometry, and robustness when stylizing low-quality meshes.
arXiv Detail & Related papers (2022-10-20T13:52:18Z)
- StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning [50.65015652968839]
3D scene stylization aims at generating stylized images of the scene from arbitrary novel views.
Thanks to recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way.
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF.
arXiv Detail & Related papers (2022-05-24T16:29:50Z)
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
- Realistic Image Synthesis with Configurable 3D Scene Layouts [59.872657806747576]
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
arXiv Detail & Related papers (2021-08-23T09:44:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.