StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields
- URL: http://arxiv.org/abs/2303.10598v3
- Date: Fri, 24 Mar 2023 06:57:50 GMT
- Title: StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields
- Authors: Kunhao Liu, Fangneng Zhan, Yiwen Chen, Jiahui Zhang, Yingchen Yu,
Abdulmotaleb El Saddik, Shijian Lu, Eric Xing
- Abstract summary: StyleRF (Style Radiance Fields) is an innovative 3D style transfer technique.
It employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering.
It transforms the grid features according to the reference style, which directly leads to high-quality zero-shot style transfer.
- Score: 52.19291190355375
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: 3D style transfer aims to render stylized novel views of a 3D scene with
multi-view consistency. However, most existing work suffers from a three-way
dilemma among accurate geometry reconstruction, high-quality stylization, and
generalization to arbitrary new styles. We propose StyleRF (Style Radiance
Fields), an innovative 3D style transfer technique that resolves the three-way
dilemma by performing style transformation within the feature space of a
radiance field. StyleRF employs an explicit grid of high-level features to
represent 3D scenes, with which high-fidelity geometry can be reliably restored
via volume rendering. In addition, it transforms the grid features according to
the reference style, which directly leads to high-quality zero-shot style
transfer. StyleRF consists of two innovative designs. The first is
sampling-invariant content transformation that makes the transformation
invariant to the holistic statistics of the sampled 3D points and accordingly
ensures multi-view consistency. The second is deferred style transformation of
2D feature maps, which is equivalent to the transformation of 3D points but
greatly reduces memory footprint without degrading multi-view consistency.
Extensive experiments show that StyleRF achieves superior 3D stylization
quality with precise geometry reconstruction, and it can generalize to various
new styles in a zero-shot manner.
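To make the two designs more concrete, the PyTorch sketch below illustrates the general idea under stated assumptions; the per-point normalization, module names, feature dimensions, and the AdaIN-style scale-and-shift statistics (assumed to be precomputed from a VGG encoding of the style image) are illustrative choices, not the authors' implementation. The key point it demonstrates is that a channel-wise affine style transform can be deferred until after volume rendering: the scaling commutes exactly with the weighted sum over samples, and the shift can be made to match by modulating it with the accumulated ray weight (one way, assumed here, to keep the deferred and per-point versions equivalent), so only an H x W feature map is transformed instead of every sampled 3D point.

# Minimal sketch of sampling-invariant content normalization and a deferred,
# channel-wise style transform applied after feature volume rendering.
# Not the authors' code; shapes and modules are illustrative assumptions.
import torch
import torch.nn as nn

def volume_render_features(point_feats, sigmas, deltas):
    """Composite per-sample features along each ray with standard volume rendering.
    point_feats: (R, S, C) features of S samples on R rays
    sigmas:      (R, S)    densities
    deltas:      (R, S)    distances between adjacent samples
    Returns (R, C) rendered features and (R,) accumulated weights per ray."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)                               # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + 1e-10], dim=1),
        dim=1)[:, :-1]                                                       # transmittance
    weights = alphas * trans                                                 # (R, S)
    feat = (weights.unsqueeze(-1) * point_feats).sum(dim=1)                  # (R, C)
    acc = weights.sum(dim=1)                                                 # (R,)
    return feat, acc

class DeferredStyleTransfer(nn.Module):
    """Applies the channel-wise style transform to the rendered 2D feature map
    rather than to every 3D sample, then decodes the stylized features to RGB."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # Hypothetical CNN decoder from stylized features to an RGB image.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_dim, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 3, 3, padding=1))

    def forward(self, point_feats, sigmas, deltas, style_mean, style_std, hw):
        H, W = hw  # rays are assumed to cover an H x W image, so R = H * W
        # Illustrative sampling-invariant content transformation: each point is
        # normalized with its own channel statistics, so the result does not
        # depend on which other points happen to be sampled along the rays.
        mu = point_feats.mean(dim=-1, keepdim=True)
        sd = point_feats.std(dim=-1, keepdim=True) + 1e-6
        content = (point_feats - mu) / sd

        feat, acc = volume_render_features(content, sigmas, deltas)          # (R, C), (R,)
        feat_map = feat.t().reshape(1, -1, H, W)                             # (1, C, H, W)
        acc_map = acc.reshape(1, 1, H, W)

        # Deferred style transform: AdaIN-like scale and shift with statistics
        # from the style image's features (assumed precomputed). Modulating the
        # shift by the accumulated weight keeps it equivalent to transforming
        # each 3D point before compositing.
        stylized = (feat_map * style_std.view(1, -1, 1, 1)
                    + style_mean.view(1, -1, 1, 1) * acc_map)
        return self.decoder(stylized)                                        # (1, 3, H, W)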
Related papers
- G3DST: Generalizing 3D Style Transfer with Neural Radiance Fields across Scenes and Styles [45.92812062685523]
Existing methods for 3D style transfer need extensive per-scene optimization for single or multiple styles.
In this work, we overcome the limitations of existing methods by rendering stylized novel views from a NeRF without the need for per-scene or per-style optimization.
Our findings demonstrate that this approach achieves visual quality comparable to that of per-scene methods.
arXiv Detail & Related papers (2024-08-24T08:04:19Z)
- StyleGaussian: Instant 3D Style Transfer with Gaussian Splatting [141.05924680451804]
StyleGaussian is a novel 3D style transfer technique.
It allows instant transfer of any image's style to a 3D scene at 10 frames per second (fps).
arXiv Detail & Related papers (2024-03-12T16:44:52Z)
- ConRF: Zero-shot Stylization of 3D Scenes with Conditioned Radiation Fields [26.833265073162696]
We introduce ConRF, a novel method of zero-shot stylization.
We employ a conversion process that maps the CLIP feature space to the style space of a pre-trained VGG network.
We also use a 3D volumetric representation to perform local style transfer.
arXiv Detail & Related papers (2024-02-02T23:12:16Z)
- DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields [96.0858117473902]
3D toonification involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture.
We propose DeformToon3D, an effective toonification framework tailored for hierarchical 3D GAN.
Our approach decomposes 3D toonification into subproblems of geometry and texture stylization to better preserve the original latent space.
arXiv Detail & Related papers (2023-09-08T16:17:45Z)
- NeRF-Art: Text-Driven Neural Radiance Fields Stylization [38.3724634394761]
We present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt.
We show that our method is effective and robust regarding both single-view stylization quality and cross-view consistency.
arXiv Detail & Related papers (2022-12-15T18:59:58Z)
- StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis [92.25145204543904]
StyleNeRF is a 3D-aware generative model for high-resolution image synthesis with high multi-view consistency.
It integrates the neural radiance field (NeRF) into a style-based generator.
It can synthesize high-resolution images at interactive rates while preserving 3D consistency at high quality.
arXiv Detail & Related papers (2021-10-18T02:37:01Z)
- 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z)
- 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer [66.48720190245616]
We propose a learning-based approach for style transfer between 3D objects.
The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes.
We extend our technique to implicitly learn the multimodal style distribution of the chosen domains.
arXiv Detail & Related papers (2020-11-26T16:59:12Z)