NeRF-Art: Text-Driven Neural Radiance Fields Stylization
- URL: http://arxiv.org/abs/2212.08070v1
- Date: Thu, 15 Dec 2022 18:59:58 GMT
- Title: NeRF-Art: Text-Driven Neural Radiance Fields Stylization
- Authors: Can Wang and Ruixiang Jiang and Menglei Chai and Mingming He and
Dongdong Chen and Jing Liao
- Abstract summary: We present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt.
We show that our method is effective and robust regarding both single-view stylization quality and cross-view consistency.
- Score: 38.3724634394761
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a powerful representation of 3D scenes, the neural radiance field (NeRF)
enables high-quality novel view synthesis from multi-view images. Stylizing
NeRF, however, remains challenging, especially when simulating a text-guided
style that alters both the appearance and the geometry simultaneously. In this
paper, we present NeRF-Art, a text-guided NeRF stylization approach that
manipulates the style of a pre-trained NeRF model with a simple text prompt.
Unlike previous approaches that either lack sufficient geometry deformations
and texture details or require meshes to guide the stylization, our method can
shift a 3D scene to the target style characterized by desired geometry and
appearance variations without any mesh guidance. This is achieved by
introducing a novel global-local contrastive learning strategy, combined with
the directional constraint to simultaneously control both the trajectory and
the strength of the target style. Moreover, we adopt a weight regularization
method to effectively suppress the cloudy artifacts and geometry noise that arise
easily when the density field is transformed during geometry stylization.
Through extensive experiments on various styles, we demonstrate that our method
is effective and robust regarding both single-view stylization quality and
cross-view consistency. The code and more results can be found in our project
page: https://cassiepython.github.io/nerfart/.
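The abstract mentions a text-driven directional constraint and a weight regularization on the density field. As a minimal sketch, assuming the OpenAI `clip` package and PyTorch, the snippet below shows what a CLIP directional loss and a simple density-weight penalty could look like; the prompts, layer choices, and the L2 form of the penalty are illustrative assumptions, and the paper's global-local contrastive term is not reproduced here.
```python
# Minimal sketch (assumptions, not the authors' released code) of a CLIP
# directional loss for text-driven NeRF stylization, plus a simple L2 penalty
# standing in for the "weight regularization" mentioned in the abstract.
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP package

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model = clip_model.float()  # keep everything in fp32 for backprop

def text_feature(prompt: str) -> torch.Tensor:
    tokens = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        feat = clip_model.encode_text(tokens)
    return F.normalize(feat.float(), dim=-1)

def directional_clip_loss(stylized: torch.Tensor,
                          original: torch.Tensor,
                          src_prompt: str = "a photo",          # illustrative prompt
                          tgt_prompt: str = "a Fauvism painting"  # illustrative prompt
                          ) -> torch.Tensor:
    """stylized/original: (B, 3, 224, 224) CLIP-normalized renders from the same poses."""
    img_dir = F.normalize(clip_model.encode_image(stylized)
                          - clip_model.encode_image(original), dim=-1)
    txt_dir = F.normalize(text_feature(tgt_prompt) - text_feature(src_prompt), dim=-1)
    # Align the image-space change with the text-space change (1 - cosine similarity).
    return (1.0 - (img_dir * txt_dir).sum(dim=-1)).mean()

def density_weight_penalty(density_params, lam: float = 1e-4) -> torch.Tensor:
    # Hypothetical stand-in for the paper's weight regularization: an L2 penalty
    # on density-branch parameters to discourage cloudy artifacts.
    return lam * sum(p.pow(2).sum() for p in density_params)
```
In training, terms like these would be added to the stylization objective and back-propagated through the differentiable renderer into the pre-trained NeRF weights.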
Related papers
- G3DST: Generalizing 3D Style Transfer with Neural Radiance Fields across Scenes and Styles [45.92812062685523]
Existing methods for 3D style transfer need extensive per-scene optimization for single or multiple styles.
In this work, we overcome the limitations of existing methods by rendering stylized novel views from a NeRF without the need for per-scene or per-style optimization.
Our findings demonstrate that this approach achieves visual quality comparable to that of per-scene methods.
arXiv Detail & Related papers (2024-08-24T08:04:19Z)
- ArtNeRF: A Stylized Neural Field for 3D-Aware Cartoonized Face Synthesis [11.463969116010183]
ArtNeRF is a novel face stylization framework derived from a 3D-aware GAN.
We propose an expressive generator to synthesize stylized faces and a triple-branch discriminator module to improve style consistency.
Experiments demonstrate that ArtNeRF is versatile in generating high-quality 3D-aware cartoon faces with arbitrary styles.
arXiv Detail & Related papers (2024-04-21T16:45:35Z)
- Locally Stylized Neural Radiance Fields [30.037649804991315]
We propose a stylization framework for neural radiance fields (NeRF) based on local style transfer.
In particular, we use a hash-grid encoding to learn the embedding of the appearance and geometry components.
We show that our method yields plausible stylization results with novel view synthesis.
arXiv Detail & Related papers (2023-09-19T15:08:10Z)
- DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields [96.0858117473902]
3D toonification involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture.
We propose DeformToon3D, an effective toonification framework tailored for hierarchical 3D GANs.
Our approach decomposes 3D toonification into subproblems of geometry and texture stylization to better preserve the original latent space.
arXiv Detail & Related papers (2023-09-08T16:17:45Z)
- StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields [52.19291190355375]
StyleRF (Style Radiance Fields) is an innovative 3D style transfer technique.
It employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering.
It transforms the grid features according to the reference style, which directly leads to high-quality zero-shot style transfer.
arXiv Detail & Related papers (2023-03-19T08:26:06Z)
- TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition [39.312567993736025]
We propose TANGO, which transfers the appearance style of a given 3D shape according to a text prompt in a photorealistic manner.
We show that TANGO outperforms existing methods of text-driven 3D style transfer in terms of photorealistic quality, consistency of 3D geometry, and robustness when stylizing low-quality meshes.
arXiv Detail & Related papers (2022-10-20T13:52:18Z)
- Learning Graph Neural Networks for Image Style Transfer [131.73237185888215]
State-of-the-art parametric and non-parametric style transfer approaches are prone to either distorted local style patterns due to global statistics alignment, or unpleasing artifacts resulting from patch mismatching.
In this paper, we study a novel semi-parametric neural style transfer framework that alleviates the deficiencies of both parametric and non-parametric stylization.
arXiv Detail & Related papers (2022-07-24T07:41:31Z)
- StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning [50.65015652968839]
3D scene stylization aims at generating stylized images of the scene from arbitrary novel views.
Thanks to the recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way.
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF.
arXiv Detail & Related papers (2022-05-24T16:29:50Z)
- Stylizing 3D Scene via Implicit Representation and HyperNetwork [34.22448260525455]
A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches.
Inspired by the high-quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style.
Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork to transfer the style information into the scene representation.
arXiv Detail & Related papers (2021-05-27T09:11:30Z)
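The last entry above pairs an implicit scene representation with a hypernetwork that injects style information into it. Below is a minimal sketch, assuming PyTorch and purely illustrative layer sizes (not the paper's architecture), of a hypernetwork that emits the weights of a small style-conditioned color head while the density branch stays fixed.
```python
# Minimal sketch (illustrative sizes, not the paper's architecture): a hypernetwork
# maps a style embedding to the weights of a small color head inside a NeRF, so the
# same geometry can be rendered under different styles.
import torch
import torch.nn as nn

class ColorHyperNetwork(nn.Module):
    def __init__(self, style_dim: int = 256, feat_dim: int = 128, out_dim: int = 3):
        super().__init__()
        self.feat_dim, self.out_dim = feat_dim, out_dim
        n_params = feat_dim * out_dim + out_dim  # weight + bias of one linear layer
        self.mlp = nn.Sequential(
            nn.Linear(style_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, style_emb: torch.Tensor, point_feat: torch.Tensor) -> torch.Tensor:
        # style_emb: (style_dim,) style code; point_feat: (N, feat_dim) per-sample NeRF features
        params = self.mlp(style_emb)
        w = params[: self.feat_dim * self.out_dim].view(self.out_dim, self.feat_dim)
        b = params[self.feat_dim * self.out_dim:]
        # Style-conditioned RGB head: only appearance changes, the density branch is untouched.
        return torch.sigmoid(point_feat @ w.t() + b)
```
A renderer would evaluate this head at every sampled point, conditioned on a style embedding obtained, for example, from an image or text encoder.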
This list is automatically generated from the titles and abstracts of the papers on this site.