CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer
- URL: http://arxiv.org/abs/2305.15732v2
- Date: Fri, 26 May 2023 03:23:20 GMT
- Title: CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer
- Authors: Ming Gao, YanWu Xu, Yang Zhao, Tingbo Hou, Chenkai Zhao, Mingming Gong
- Abstract summary: We propose a novel language-guided 3D arbitrary neural style transfer method (CLIP3Dstyler).
Compared with the previous 2D method CLIPStyler, we are able to stylize a 3D scene and generalize to novel scenes without re-training our model.
We conduct extensive experiments to show the effectiveness of our model on text-guided 3D scene style transfer.
- Score: 41.388313754081544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel language-guided 3D arbitrary neural style
transfer method (CLIP3Dstyler). We aim to stylize any 3D scene with an
arbitrary style from a text description and to synthesize novel stylized
views, which is more flexible than image-conditioned style transfer.
Compared with the previous 2D method CLIPStyler, we are able to stylize a 3D
scene and generalize to novel scenes without re-training our model. A
straightforward solution is to combine previous image-conditioned 3D style
transfer and text-conditioned 2D style transfer methods. However, such
a solution cannot achieve our goal due to two main challenges. First, there is
no multi-modal model matching point clouds and language at different feature
scales (low-level, high-level). Second, we observe a style mixing issue when we
stylize the content with different style conditions from text prompts. To
address the first issue, we propose a 3D stylization framework to match the
point cloud features with text features in local and global views. For the
second issue, we propose an improved directional divergence loss to make
arbitrary text styles more distinguishable as a complement to our framework. We
conduct extensive experiments to show the effectiveness of our model on
text-guided 3D scene style transfer.
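To make the directional loss mentioned in the abstract concrete, below is a minimal PyTorch sketch of the standard CLIP directional loss that CLIPStyler-style methods build on. The function name, the ViT-B/32 backbone choice, and the assumption that images arrive already CLIP-preprocessed are illustrative, not the authors' code; the improved directional divergence loss proposed in CLIP3Dstyler adds further terms to keep different text styles distinguishable, which this sketch does not reproduce.

```python
# Minimal sketch (an assumption, not the authors' implementation) of the standard
# CLIP directional loss that CLIP3Dstyler builds on and further modifies.
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)  # backbone choice is illustrative


def clip_directional_loss(content_img, stylized_img, source_text, style_text):
    """1 - cos(image-space edit direction, text-space edit direction).

    content_img / stylized_img: (B, 3, 224, 224) tensors already resized and
    normalized with CLIP's preprocessing (done differentiably during training
    so gradients can reach the stylization network).
    """
    with torch.no_grad():  # text directions are fixed per style prompt
        src_feat = model.encode_text(clip.tokenize([source_text]).to(device)).float()
        sty_feat = model.encode_text(clip.tokenize([style_text]).to(device)).float()
    text_dir = F.normalize(sty_feat - src_feat, dim=-1)

    content_feat = model.encode_image(content_img).float()
    stylized_feat = model.encode_image(stylized_img).float()
    img_dir = F.normalize(stylized_feat - content_feat, dim=-1)

    # Penalize misalignment between how the image changed and what the text asked for.
    return (1.0 - (img_dir * text_dir).sum(dim=-1)).mean()
```

In CLIPStyler the source text is typically a neutral prompt such as "a Photo"; the improved directional divergence loss described in the abstract would additionally push the rendered directions of different style prompts apart to avoid style mixing, a term omitted here.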
Related papers
- StyleSplat: 3D Object Style Transfer with Gaussian Splatting [0.3374875022248866]
Style transfer can enhance 3D assets with diverse artistic styles, transforming creative expression.
We introduce StyleSplat, a method for stylizing 3D objects in scenes represented by 3D Gaussians from reference style images.
We demonstrate its effectiveness across various 3D scenes and styles, showcasing enhanced control and customization in 3D creation.
arXiv Detail & Related papers (2024-07-12T17:55:08Z) - PNeSM: Arbitrary 3D Scene Stylization via Prompt-Based Neural Style
Mapping [16.506819625584654]
3D scene stylization refers to transform the appearance of a 3D scene to match a given style image.
Several existing methods have obtained impressive results in stylizing 3D scenes.
We propose a novel 3D scene stylization framework to transfer an arbitrary style to an arbitrary scene.
arXiv Detail & Related papers (2024-03-13T05:08:47Z) - TeMO: Towards Text-Driven 3D Stylization for Multi-Object Meshes [67.5351491691866]
We present a novel framework, dubbed TeMO, to parse multi-object 3D scenes and edit their styles.
Our method can synthesize high-quality stylized content and outperform the existing methods over a wide range of multi-object 3D meshes.
arXiv Detail & Related papers (2023-12-07T12:10:05Z) - 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with
2D Diffusion Models [102.75875255071246]
3D content creation via text-driven stylization has played a fundamental challenge to multimedia and graphics community.
We propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
arXiv Detail & Related papers (2023-11-09T15:51:27Z) - StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields [52.19291190355375]
StyleRF (Style Radiance Fields) is an innovative 3D style transfer technique.
It employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering.
It transforms the grid features according to the reference style which directly leads to high-quality zero-shot style transfer.
arXiv Detail & Related papers (2023-03-19T08:26:06Z) - Learning to Stylize Novel Views [82.24095446809946]
We tackle a 3D scene stylization problem - generating stylized images of a scene from arbitrary novel views.
We propose a point cloud-based method for consistent 3D scene stylization.
arXiv Detail & Related papers (2021-05-27T23:58:18Z) - Exemplar-Based 3D Portrait Stylization [23.585334925548064]
We present the first framework for one-shot 3D portrait style transfer.
It can generate 3D face models with both the geometry exaggerated and the texture stylized.
Our method achieves robustly good results on different artistic styles and outperforms existing methods.
arXiv Detail & Related papers (2021-04-29T17:59:54Z) - 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer [66.48720190245616]
We propose a learning-based approach for style transfer between 3D objects.
The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes.
We extend our technique to implicitly learn the multimodal style distribution of the chosen domains.
arXiv Detail & Related papers (2020-11-26T16:59:12Z)