StyleSculptor: Zero-Shot Style-Controllable 3D Asset Generation with Texture-Geometry Dual Guidance
- URL: http://arxiv.org/abs/2509.13301v2
- Date: Wed, 17 Sep 2025 15:58:50 GMT
- Title: StyleSculptor: Zero-Shot Style-Controllable 3D Asset Generation with Texture-Geometry Dual Guidance
- Authors: Zefan Qu, Zhenwei Wang, Haoyuan Wang, Ke Xu, Gerhard Hancke, Rynson W. H. Lau
- Abstract summary: StyleSculptor is a training-free approach for generating style-guided 3D assets from a content image and one or more style images. It achieves style-guided 3D generation in a zero-shot manner, enabling fine-grained 3D style control. In experiments, StyleSculptor outperforms existing baseline methods in producing high-fidelity 3D assets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating 3D assets that follow the texture and geometry style of existing ones is often desirable or even inevitable in practical applications like video gaming and virtual reality. While impressive progress has been made in generating 3D objects from text or images, creating style-controllable 3D assets remains a complex and challenging problem. In this work, we propose StyleSculptor, a novel training-free approach for generating style-guided 3D assets from a content image and one or more style images. Unlike previous works, StyleSculptor achieves style-guided 3D generation in a zero-shot manner, enabling fine-grained 3D style control that captures the texture, geometry, or both styles of user-provided style images. At the core of StyleSculptor is a novel Style Disentangled Attention (SD-Attn) module, which establishes a dynamic interaction between the input content image and style image for style-guided 3D asset generation via a cross-3D attention mechanism, enabling stable feature fusion and effective style-guided generation. To alleviate semantic content leakage, we also introduce a style-disentangled feature selection strategy within the SD-Attn module, which leverages the variance of 3D feature patches to disentangle style- and content-significant channels, allowing selective feature injection within the attention framework. With SD-Attn, the network can dynamically compute texture-, geometry-, or both-guided features to steer the 3D generation process. Built upon this, we further propose the Style Guided Control (SGC) mechanism, which enables exclusive geometry- or texture-only stylization, as well as adjustable style intensity control. Extensive experiments demonstrate that StyleSculptor outperforms existing baseline methods in producing high-fidelity 3D assets.
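The abstract's core idea, selecting style-significant channels by their variance over 3D feature patches and injecting only those channels through cross-attention, could be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions (function names, the low-variance-equals-style heuristic, and the top-k split are ours), not the paper's actual SD-Attn implementation:

```python
import numpy as np

def variance_channel_split(style_feats, k):
    """Pick the k lowest-variance channels of the style features.

    Assumption (ours): channel variance over feature patches separates
    style-significant (low-variance) from content-significant
    (high-variance) channels.
    """
    var = style_feats.var(axis=0)          # per-channel variance, shape (C,)
    return np.argsort(var)[:k]             # indices of the k flattest channels

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_3d_attention(content_feats, style_feats, style_idx):
    """Attend from content patches to style patches, then overwrite only the
    selected style channels of the content features (selective injection)."""
    d = content_feats.shape[-1]
    attn = softmax(content_feats @ style_feats.T / np.sqrt(d))  # (Nc, Ns)
    injected = attn @ style_feats                               # (Nc, C)
    out = content_feats.copy()
    out[:, style_idx] = injected[:, style_idx]
    return out
```

Keeping content-significant channels untouched is what, in the paper's framing, alleviates semantic content leakage; a style-intensity control like SGC could then be a simple blend between `content_feats` and the injected features.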
Related papers
- DiffStyle3D: Consistent 3D Gaussian Stylization via Attention Optimization [22.652699040654046]
3D style transfer enables the creation of visually expressive 3D content. We propose DiffStyle3D, a novel diffusion-based paradigm for 3DGS style transfer. We show that DiffStyle3D outperforms state-of-the-art methods, achieving higher stylization quality and visual realism.
arXiv Detail & Related papers (2026-01-27T15:41:11Z) - Improved 3D Scene Stylization via Text-Guided Generative Image Editing with Region-Based Control [47.14550252881733]
We introduce techniques that enhance the quality of 3D stylization while maintaining view consistency and providing optional region-controlled style transfer. Our method achieves stylization by re-training an initial 3D representation using stylized multi-view 2D images of the source views. We propose Multi-Region Importance-Weighted Sliced Wasserstein Distance Loss, allowing styles to be applied to distinct image regions using segmentation masks from off-the-shelf models.
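A generic sliced Wasserstein distance with per-region weighting can be sketched as follows; this is a minimal illustration of the underlying technique, not the paper's actual Multi-Region Importance-Weighted loss (the quantile resampling and all names are our assumptions):

```python
import numpy as np

def sliced_wasserstein(a, b, n_proj=64, rng=None):
    """Approximate Wasserstein distance between feature sets a (N, C) and
    b (M, C): project onto random unit directions and average the 1D
    distances between sorted projections."""
    rng = np.random.default_rng(rng)
    dirs = rng.normal(size=(n_proj, a.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    pa = np.sort(a @ dirs.T, axis=0)            # (N, n_proj)
    pb = np.sort(b @ dirs.T, axis=0)            # (M, n_proj)
    # Equalize sample counts via quantile resampling (our choice).
    qs = np.linspace(0.0, 1.0, min(len(pa), len(pb)))
    qa = np.quantile(pa, qs, axis=0)
    qb = np.quantile(pb, qs, axis=0)
    return np.mean(np.abs(qa - qb))

def region_weighted_swd(content_feats, style_feats, masks, weights):
    """Weighted sum of sliced Wasserstein losses, one per region mask
    (boolean masks over the flattened content features)."""
    return sum(w * sliced_wasserstein(content_feats[m], style_feats)
               for m, w in zip(masks, weights))
```

Restricting each loss term to a segmentation mask is what lets different image regions match different style statistics.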
arXiv Detail & Related papers (2025-09-04T15:01:01Z) - Style3D: Attention-guided Multi-view Style Transfer for 3D Object Generation [9.212876623996475]
Style3D is a novel approach for generating stylized 3D objects from a content image and a style image. By establishing an interplay between structural and stylistic features across multiple views, our approach enables a holistic 3D stylization process.
arXiv Detail & Related papers (2024-12-04T18:59:38Z) - StyleSplat: 3D Object Style Transfer with Gaussian Splatting [0.3374875022248866]
Style transfer can enhance 3D assets with diverse artistic styles, transforming creative expression.
We introduce StyleSplat, a method for stylizing 3D objects in scenes represented by 3D Gaussians from reference style images.
We demonstrate its effectiveness across various 3D scenes and styles, showcasing enhanced control and customization in 3D creation.
arXiv Detail & Related papers (2024-07-12T17:55:08Z) - DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields [96.0858117473902]
3D toonification involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture.
We propose DeformToon3D, an effective toonification framework tailored for hierarchical 3D GAN.
Our approach decomposes 3D toonification into subproblems of geometry and texture stylization to better preserve the original latent space.
arXiv Detail & Related papers (2023-09-08T16:17:45Z) - HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z) - DreamStone: Image as Stepping Stone for Text-Guided 3D Shape Generation [105.97545053660619]
We present a new text-guided 3D shape generation approach DreamStone.
It uses images as a stepping stone to bridge the gap between text and shape modalities for generating 3D shapes without requiring paired text and 3D data.
Our approach is generic, flexible, and scalable, and it can be easily integrated with various SVR models to expand the generative space and improve the generative fidelity.
arXiv Detail & Related papers (2023-03-24T03:56:23Z) - 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z) - 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer [66.48720190245616]
We propose a learning-based approach for style transfer between 3D objects.
The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes.
We extend our technique to implicitly learn the multimodal style distribution of the chosen domains.
arXiv Detail & Related papers (2020-11-26T16:59:12Z)