ConRF: Zero-shot Stylization of 3D Scenes with Conditioned Radiation Fields
- URL: http://arxiv.org/abs/2402.01950v2
- Date: Wed, 6 Mar 2024 20:19:44 GMT
- Title: ConRF: Zero-shot Stylization of 3D Scenes with Conditioned Radiation Fields
- Authors: Xingyu Miao, Yang Bai, Haoran Duan, Fan Wan, Yawen Huang, Yang Long, Yefeng Zheng
- Abstract summary: We introduce ConRF, a novel method of zero-shot stylization.
We employ a conversion process that maps the CLIP feature space to the style space of a pre-trained VGG network.
We also use a 3D volumetric representation to perform local style transfer.
- Score: 26.833265073162696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing works on arbitrary 3D NeRF style transfer require retraining for each individual style condition. This work aims to achieve zero-shot controlled stylization of 3D scenes using text or visual input as conditioning factors. We introduce ConRF, a novel method of zero-shot stylization. Specifically, due to the ambiguity of CLIP features, we employ a conversion process that maps the CLIP feature space to the style space of a pre-trained VGG network and then refine the CLIP multi-modal knowledge into a style transfer neural radiance field. Additionally, we use a 3D volumetric representation to perform local style transfer. By combining these operations, ConRF can use either text or images as references to generate novel-view sequences with global or local stylization. Our experiments demonstrate that ConRF outperforms existing methods for 3D scene and single-text stylization in terms of visual quality.
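The abstract describes two main operations: a mapping from the CLIP feature space (shared by text and image conditions) into the style space of a pre-trained VGG network, and conditioning of the radiance field on the resulting style code. As a rough illustration of the first operation only, the following PyTorch sketch converts a CLIP embedding into per-channel style statistics and applies them to per-point field features via AdaIN-style modulation. The module names, feature dimensions, and the choice of AdaIN are assumptions made here for clarity, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' released code): convert a CLIP embedding,
# obtained from either a text prompt or a reference image, into VGG-style channel
# statistics, then use those statistics to modulate per-point radiance-field features.
# Module names, feature dimensions, and the AdaIN-style modulation are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # OpenAI CLIP; any implementation with encode_text/encode_image works


class CLIPToStyleSpace(nn.Module):
    """Small MLP mapping a 512-d CLIP embedding to per-channel mean/std of an
    assumed VGG feature space (e.g. 256 channels at relu3_1)."""

    def __init__(self, clip_dim: int = 512, vgg_channels: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 2 * vgg_channels),  # concatenated [mean, std]
        )
        self.vgg_channels = vgg_channels

    def forward(self, clip_embedding: torch.Tensor):
        mean, std = self.mlp(clip_embedding).split(self.vgg_channels, dim=-1)
        return mean, F.softplus(std)  # keep the predicted std positive


def adain(content_feat, style_mean, style_std, eps: float = 1e-5):
    """AdaIN-style modulation of per-point features shaped (num_points, channels)."""
    c_mean = content_feat.mean(dim=0, keepdim=True)
    c_std = content_feat.std(dim=0, keepdim=True) + eps
    return (content_feat - c_mean) / c_std * style_std + style_mean


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    clip_model, _ = clip.load("ViT-B/32", device=device)
    mapper = CLIPToStyleSpace().to(device)

    # Either a text prompt or a reference image can supply the condition.
    tokens = clip.tokenize(["a painting in the style of Van Gogh"]).to(device)
    with torch.no_grad():
        condition = clip_model.encode_text(tokens).float()

    style_mean, style_std = mapper(condition)
    point_features = torch.randn(4096, 256, device=device)  # placeholder field features
    stylized = adain(point_features, style_mean, style_std)
    print(stylized.shape)  # torch.Size([4096, 256])
```

Because both modalities are embedded in the same CLIP space before the mapping, the same mapper can serve text prompts and reference images; this is the property that makes the conditioning zero-shot rather than per-style retrained.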
Related papers
- G3DST: Generalizing 3D Style Transfer with Neural Radiance Fields across Scenes and Styles [45.92812062685523]
Existing methods for 3D style transfer need extensive per-scene optimization for single or multiple styles.
In this work, we overcome the limitations of existing methods by rendering stylized novel views from a NeRF without the need for per-scene or per-style optimization.
Our findings demonstrate that this approach achieves visual quality comparable to that of per-scene methods.
arXiv Detail & Related papers (2024-08-24T08:04:19Z)
- Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images [54.56070204172398]
We propose a simple yet effective pipeline for stylizing a 3D scene.
We perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model.
We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
arXiv Detail & Related papers (2024-06-19T09:36:18Z)
- CoARF: Controllable 3D Artistic Style Transfer for Radiance Fields [7.651502365257349]
We introduce Controllable Artistic Radiance Fields (CoARF), a novel algorithm for controllable 3D scene stylization.
CoARF provides user-specified controllability of style transfer and superior style transfer quality with more precise feature matching.
arXiv Detail & Related papers (2024-04-23T12:22:32Z)
- StylizedGS: Controllable Stylization for 3D Gaussian Splatting [53.0225128090909]
StylizedGS is an efficient 3D neural style transfer framework with adaptable control over perceptual factors.
Our method achieves high-quality stylization results characterized by faithful brushstrokes and geometric consistency with flexible controls.
arXiv Detail & Related papers (2024-04-08T06:32:11Z)
- S-DyRF: Reference-Based Stylized Radiance Fields for Dynamic Scenes [58.05447927353328]
Current 3D stylization methods often assume static scenes, which is at odds with the dynamic nature of the real world.
We present S-DyRF, a reference-based temporal stylization method for dynamic neural fields.
Experiments on both synthetic and real-world datasets demonstrate that our method yields plausible stylized results.
arXiv Detail & Related papers (2024-03-10T13:04:01Z)
- Locally Stylized Neural Radiance Fields [30.037649804991315]
We propose a stylization framework for neural radiance fields (NeRF) based on local style transfer.
In particular, we use a hash-grid encoding to learn the embedding of the appearance and geometry components.
We show that our method yields plausible stylization results with novel view synthesis.
arXiv Detail & Related papers (2023-09-19T15:08:10Z)
- StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields [52.19291190355375]
StyleRF (Style Radiance Fields) is an innovative 3D style transfer technique.
It employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering.
It transforms the grid features according to the reference style, which directly leads to high-quality zero-shot style transfer (a toy sketch of this grid-feature idea appears after this list).
arXiv Detail & Related papers (2023-03-19T08:26:06Z)
- StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning [50.65015652968839]
3D scene stylization aims at generating stylized images of the scene from arbitrary novel views.
Thanks to recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way.
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF.
arXiv Detail & Related papers (2022-05-24T16:29:50Z)
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z)
- 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer [66.48720190245616]
We propose a learning-based approach for style transfer between 3D objects.
The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes.
We extend our technique to implicitly learn the multimodal style distribution of the chosen domains.
arXiv Detail & Related papers (2020-11-26T16:59:12Z)
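The StyleRF entry above, like the local stylization component of ConRF's abstract, relies on attaching features to an explicit 3D volume and then transforming those features according to a reference style, optionally only inside a selected region. The toy sketch below illustrates that general pattern with a small voxel grid; the grid resolution, feature width, statistic-matching transform, and half-space mask are illustrative assumptions rather than either paper's actual design.

```python
# Toy sketch of grid-feature stylization with optional local (region-masked) control.
# A voxel grid stores high-level features, points sample it by trilinear interpolation,
# and the sampled features are shifted toward reference-style statistics only where a
# 3D region mask selects them. All sizes and the mask choice are assumptions.

import torch
import torch.nn.functional as F


def sample_feature_grid(grid: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """grid: (1, C, D, H, W) feature volume; points: (N, 3) in [-1, 1]^3.
    Returns (N, C) trilinearly interpolated features."""
    coords = points.view(1, 1, 1, -1, 3)  # grid_sample expects (1, 1, 1, N, 3) for 5-D input
    feats = F.grid_sample(grid, coords, align_corners=True)  # (1, C, 1, 1, N)
    return feats.view(grid.shape[1], -1).t()  # (N, C)


def stylize_features(feats, style_mean, style_std, region_mask):
    """Shift per-point features toward reference-style statistics, but only where
    region_mask (N,) selects points inside the chosen 3D volume."""
    c_mean, c_std = feats.mean(0, keepdim=True), feats.std(0, keepdim=True) + 1e-5
    stylized = (feats - c_mean) / c_std * style_std + style_mean
    mask = region_mask.unsqueeze(-1).float()
    return mask * stylized + (1.0 - mask) * feats  # features outside the region are untouched


if __name__ == "__main__":
    grid = torch.randn(1, 32, 64, 64, 64)   # toy feature volume
    pts = torch.rand(2048, 3) * 2 - 1       # random query points in [-1, 1]^3
    feats = sample_feature_grid(grid, pts)
    inside = pts[:, 2] > 0                  # e.g. stylize only the upper half-space
    out = stylize_features(feats, torch.zeros(1, 32), torch.ones(1, 32), inside)
    print(out.shape)  # torch.Size([2048, 32])
```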