PNeSM: Arbitrary 3D Scene Stylization via Prompt-Based Neural Style
Mapping
- URL: http://arxiv.org/abs/2403.08252v1
- Date: Wed, 13 Mar 2024 05:08:47 GMT
- Title: PNeSM: Arbitrary 3D Scene Stylization via Prompt-Based Neural Style
Mapping
- Authors: Jiafu Chen, Wei Xing, Jiakai Sun, Tianyi Chu, Yiling Huang, Boyan Ji,
Lei Zhao, Huaizhong Lin, Haibo Chen, Zhizhong Wang
- Abstract summary: 3D scene stylization refers to transforming the appearance of a 3D scene to match a given style image.
Several existing methods have obtained impressive results in stylizing 3D scenes.
We propose a novel 3D scene stylization framework to transfer an arbitrary style to an arbitrary scene.
- Score: 16.506819625584654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D scene stylization refers to transforming the appearance of a 3D scene to
match a given style image, ensuring that images rendered from different
viewpoints exhibit the same style as the given style image, while maintaining
the 3D consistency of the stylized scene. Several existing methods have
obtained impressive results in stylizing 3D scenes. However, the models
proposed by these methods need to be re-trained when applied to a new scene. In
other words, their models are coupled with a specific scene and cannot adapt to
arbitrary other scenes. To address this issue, we propose a novel 3D scene
stylization framework to transfer an arbitrary style to an arbitrary scene,
without any style-related or scene-related re-training. Concretely, we first
map the appearance of the 3D scene into a 2D style pattern space, which
realizes complete disentanglement of the geometry and appearance of the 3D
scene and enables our model to generalize to arbitrary 3D scenes. Then we
stylize the appearance of the 3D scene in the 2D style pattern space via a
prompt-based 2D stylization algorithm. Experimental results demonstrate that
our proposed framework is superior to SOTA methods in both visual quality and
generalization.
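The abstract outlines a two-stage pipeline: the appearance of a 3D scene is first mapped into a 2D style pattern space, decoupling appearance from geometry, and that 2D pattern is then stylized with a prompt-based 2D stylization algorithm. The sketch below is a minimal, hypothetical illustration of such a flow, not the authors' implementation; the names (AppearanceMapper, stylizer_2d, stylize_scene) and the bilinear UV lookup are assumptions made for the example.

```python
# Hypothetical sketch of the two-stage idea described in the abstract.
# All module and function names are placeholders, not the paper's API.
import torch
import torch.nn as nn


class AppearanceMapper(nn.Module):
    """Maps 3D surface points to coordinates in a 2D style pattern space,
    decoupling scene appearance from geometry (stage 1, assumed design)."""

    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # (u, v) coordinates in the 2D pattern space
        )

    def forward(self, points_3d: torch.Tensor) -> torch.Tensor:
        # points_3d: (N, 3) surface samples -> (N, 2) pattern coordinates in [0, 1]
        return torch.sigmoid(self.mlp(points_3d))


def stylize_scene(points_3d, pattern_image, stylizer_2d, prompt, mapper):
    """Stage 2 (assumed flow): stylize the 2D appearance pattern with any
    prompt-based 2D stylizer, then look up colors for the 3D points."""
    uv = mapper(points_3d)                                # (N, 2), mapper is pre-trained in practice
    styled_pattern = stylizer_2d(pattern_image, prompt)   # (3, H, W) stylized pattern
    # Bilinear lookup of stylized colors at the mapped UV coordinates.
    grid = uv.view(1, -1, 1, 2) * 2.0 - 1.0               # rescale to [-1, 1] for grid_sample
    colors = torch.nn.functional.grid_sample(
        styled_pattern.unsqueeze(0), grid, align_corners=True
    )                                                      # (1, 3, N, 1)
    return colors.view(3, -1).t()                          # (N, 3) stylized colors
```

The point the sketch tries to capture is that once appearance lives in a shared 2D pattern space, a prompt-based 2D stylizer can be swapped in per style without any scene-specific or style-specific retraining.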
Related papers
- Sketch2Scene: Automatic Generation of Interactive 3D Game Scenes from User's Casual Sketches [50.51643519253066]
3D Content Generation is at the heart of many computer graphics applications, including video gaming, film-making, virtual and augmented reality, etc.
This paper proposes a novel deep-learning based approach for automatically generating interactive and playable 3D game scenes.
arXiv Detail & Related papers (2024-08-08T16:27:37Z)
- StyleSplat: 3D Object Style Transfer with Gaussian Splatting [0.3374875022248866]
Style transfer can enhance 3D assets with diverse artistic styles, transforming creative expression.
We introduce StyleSplat, a method for stylizing 3D objects in scenes represented by 3D Gaussians from reference style images.
We demonstrate its effectiveness across various 3D scenes and styles, showcasing enhanced control and customization in 3D creation.
arXiv Detail & Related papers (2024-07-12T17:55:08Z)
- 3D-SceneDreamer: Text-Driven 3D-Consistent Scene Generation [51.64796781728106]
We propose a generative refinement network to synthesize new content of higher quality by exploiting the natural image prior from 2D diffusion models and the global 3D information of the current scene.
Our approach supports a wide variety of scene generation and arbitrary camera trajectories with improved visual quality and 3D consistency.
arXiv Detail & Related papers (2024-03-14T14:31:22Z)
- S2RF: Semantically Stylized Radiance Fields [1.243080988483032]
We present our method for transferring style from any arbitrary image(s) to object(s) within a 3D scene.
Our primary objective is to offer more control in 3D scene stylization, facilitating the creation of customizable and stylized scene images from arbitrary viewpoints.
arXiv Detail & Related papers (2023-09-03T19:32:49Z)
- CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer [41.388313754081544]
We propose a novel language-guided 3D arbitrary neural style transfer method (CLIP3Dstyler)
Compared with the previous 2D method CLIPStyler, we are able to stylize a 3D scene and generalize to novel scenes without re-training our model.
We conduct extensive experiments to show the effectiveness of our model on text-guided 3D scene style transfer.
arXiv Detail & Related papers (2023-05-25T05:30:13Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- SceneDreamer: Unbounded 3D Scene Generation from 2D Image Collections [49.802462165826554]
We present SceneDreamer, an unconditional generative model for unbounded 3D scenes.
Our framework is learned from in-the-wild 2D image collections only, without any 3D annotations.
arXiv Detail & Related papers (2023-02-02T18:59:16Z)
- UPST-NeRF: Universal Photorealistic Style Transfer of Neural Radiance Fields for 3D Scene [2.1033122829097484]
Photorealistic stylization of 3D scenes aims to generate photorealistic images from arbitrary novel views according to a given style image.
Some existing stylization methods with neural radiance fields can effectively predict stylized scenes.
We propose a novel 3D scene photorealistic style transfer framework to address these issues.
arXiv Detail & Related papers (2022-08-15T08:17:35Z)
- StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning [50.65015652968839]
3D scene stylization aims at generating stylized images of the scene from arbitrary novel views.
Thanks to recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way.
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF.
arXiv Detail & Related papers (2022-05-24T16:29:50Z)
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z)
- Learning to Stylize Novel Views [82.24095446809946]
We tackle a 3D scene stylization problem - generating stylized images of a scene from arbitrary novel views.
We propose a point cloud-based method for consistent 3D scene stylization.
arXiv Detail & Related papers (2021-05-27T23:58:18Z)