3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer
- URL: http://arxiv.org/abs/2011.13388v4
- Date: Tue, 18 May 2021 09:17:13 GMT
- Title: 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer
- Authors: Mattia Segu, Margarita Grinvald, Roland Siegwart, Federico Tombari
- Abstract summary: We propose a learning-based approach for style transfer between 3D objects.
The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes.
We extend our technique to implicitly learn the multimodal style distribution of the chosen domains.
- Score: 66.48720190245616
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transferring the style from one image onto another is a popular and widely
studied task in computer vision. Yet, style transfer in the 3D setting remains
a largely unexplored problem. To our knowledge, we propose the first
learning-based approach for style transfer between 3D objects based on
disentangled content and style representations. The proposed method can
synthesize new 3D shapes both in the form of point clouds and meshes, combining
the content and style of a source and target 3D model to generate a novel shape
that resembles in style the target while retaining the source content.
Furthermore, we extend our technique to implicitly learn the multimodal style
distribution of the chosen domains. By sampling style codes from the learned
distributions, we increase the variety of styles that our model can confer to
an input shape. Experimental results validate the effectiveness of the proposed
3D style transfer method on a number of benchmarks. The implementation of our
framework will be released upon acceptance.
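For intuition, the sketch below illustrates how disentangled content and style codes could be combined for point clouds. It is a minimal, hypothetical example: the encoder/decoder architecture, code dimensions, and module names (PointEncoder, PointDecoder) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class PointEncoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP followed by max pooling."""
    def __init__(self, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, points):                      # points: (B, N, 3)
        return self.mlp(points).max(dim=1).values   # global code: (B, out_dim)


class PointDecoder(nn.Module):
    """Maps a concatenated (content, style) code to a fixed-size point cloud."""
    def __init__(self, content_dim, style_dim, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.mlp = nn.Sequential(
            nn.Linear(content_dim + style_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )

    def forward(self, content_code, style_code):
        code = torch.cat([content_code, style_code], dim=-1)
        return self.mlp(code).view(-1, self.num_points, 3)


content_enc = PointEncoder(out_dim=256)   # encodes "what" the shape is
style_enc = PointEncoder(out_dim=64)      # encodes "how" the shape looks
decoder = PointDecoder(content_dim=256, style_dim=64)

source = torch.rand(1, 1024, 3)           # e.g. a shape from domain A
target = torch.rand(1, 1024, 3)           # e.g. a shape from domain B

# Style transfer: keep the source's content code, swap in the target's style code.
stylized = decoder(content_enc(source), style_enc(target))   # (1, 1024, 3)
```

Sampling the style code from a learned prior instead of encoding a specific target shape would correspond to the multimodal extension described in the abstract, increasing the variety of styles applied to a given input shape.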
Related papers
- StyleSplat: 3D Object Style Transfer with Gaussian Splatting [0.3374875022248866]
Style transfer can enhance 3D assets with diverse artistic styles, transforming creative expression.
We introduce StyleSplat, a method for stylizing 3D objects in scenes represented by 3D Gaussians from reference style images.
We demonstrate its effectiveness across various 3D scenes and styles, showcasing enhanced control and customization in 3D creation.
arXiv Detail & Related papers (2024-07-12T17:55:08Z)
- Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images [54.56070204172398]
We propose a simple yet effective pipeline for stylizing a 3D scene.
We perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model.
We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
arXiv Detail & Related papers (2024-06-19T09:36:18Z)
- Dream-in-Style: Text-to-3D Generation using Stylized Score Distillation [14.079043195485601]
We present a method to generate 3D objects in the style of a reference image.
Our method takes a text prompt and a style reference image as input and reconstructs a neural radiance field to synthesize a 3D model.
arXiv Detail & Related papers (2024-06-05T16:27:34Z)
- 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with 2D Diffusion Models [102.75875255071246]
3D content creation via text-driven stylization poses a fundamental challenge to the multimedia and graphics communities.
We propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
arXiv Detail & Related papers (2023-11-09T15:51:27Z)
- CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer [41.388313754081544]
We propose a novel language-guided 3D arbitrary neural style transfer method (CLIP3Dstyler).
Compared with the previous 2D method CLIPStyler, we are able to stylize a 3D scene and generalize to novel scenes without re-training our model.
We conduct extensive experiments to show the effectiveness of our model on text-guided 3D scene style transfer.
arXiv Detail & Related papers (2023-05-25T05:30:13Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z)