Stylos: Multi-View 3D Stylization with Single-Forward Gaussian Splatting
- URL: http://arxiv.org/abs/2509.26455v1
- Date: Tue, 30 Sep 2025 16:09:13 GMT
- Title: Stylos: Multi-View 3D Stylization with Single-Forward Gaussian Splatting
- Authors: Hanzhou Liu, Jia Huang, Mi Lu, Srikanth Saripalli, Peng Jiang
- Abstract summary: We present Stylos, a single-forward 3D Gaussian framework for 3D style transfer that operates on unposed content. Stylos synthesizes a stylized 3D scene without per-scene optimization or precomputed poses. Experiments across multiple datasets demonstrate that Stylos delivers high-quality zero-shot stylization.
- Score: 11.720515089961339
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Stylos, a single-forward 3D Gaussian framework for 3D style transfer that operates on unposed content, from a single image to a multi-view collection, conditioned on a separate reference style image. Stylos synthesizes a stylized 3D Gaussian scene without per-scene optimization or precomputed poses, achieving geometry-aware, view-consistent stylization that generalizes to unseen categories, scenes, and styles. At its core, Stylos adopts a Transformer backbone with two pathways: geometry predictions retain self-attention to preserve geometric fidelity, while style is injected via global cross-attention to enforce visual consistency across views. With the addition of a voxel-based 3D style loss that aligns aggregated scene features to style statistics, Stylos enforces view-consistent stylization while preserving geometry. Experiments across multiple datasets demonstrate that Stylos delivers high-quality zero-shot stylization, highlighting the effectiveness of global style-content coupling, the proposed 3D style loss, and the scalability of our framework from single view to large-scale multi-view settings.
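The abstract names two concrete mechanisms: global cross-attention that injects tokens from the reference style image into content tokens shared across views, and a voxel-based 3D style loss that aligns aggregated scene features to style statistics. The paper's implementation is not reproduced here, so the PyTorch sketch below only illustrates what such components might look like; the module names, feature dimensions, and the choice of AdaIN-style mean/std statistics for the loss are assumptions, not the authors' code.

```python
# Hypothetical sketch of Stylos-style components; NOT the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalStyleCrossAttention(nn.Module):
    """Inject style via cross-attention: content tokens from all views
    attend to tokens of one style image, so every view sees the same
    global style context (the layer layout here is assumed)."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # content: (B, V*N, D) tokens concatenated across V views
        # style:   (B, S, D)   tokens from the reference style image
        out, _ = self.attn(query=content, key=style, value=style)
        return self.norm(content + out)  # residual path keeps content/geometry cues


def voxel_style_loss(voxel_feats: torch.Tensor, style_feats: torch.Tensor,
                     eps: float = 1e-5) -> torch.Tensor:
    """Align channel-wise statistics of voxel-aggregated scene features to
    the style image's feature statistics. AdaIN-style mean/std matching is
    an assumption; the abstract only states that aggregated features are
    aligned to style statistics.

    voxel_feats: (M, C) features aggregated over occupied voxels
    style_feats: (S, C) features extracted from the style image
    """
    v_mean, v_std = voxel_feats.mean(0), voxel_feats.std(0) + eps
    s_mean, s_std = style_feats.mean(0), style_feats.std(0) + eps
    return F.mse_loss(v_mean, s_mean) + F.mse_loss(v_std, s_std)
```

Because such a loss is computed on features aggregated per voxel rather than per rendered pixel, every view of the scene is pulled toward the same style statistics, which is one plausible reading of how the paper enforces view-consistent stylization.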
Related papers
- DiffStyle3D: Consistent 3D Gaussian Stylization via Attention Optimization [22.652699040654046]
3D style transfer enables the creation of visually expressive 3D content. We propose DiffStyle3D, a novel diffusion-based paradigm for 3DGS style transfer. We show that DiffStyle3D outperforms state-of-the-art methods, achieving higher stylization quality and visual realism.
arXiv Detail & Related papers (2026-01-27T15:41:11Z) - StyleSculptor: Zero-Shot Style-Controllable 3D Asset Generation with Texture-Geometry Dual Guidance [50.207322685527394]
StyleSculptor is a training-free approach for generating style-guided 3D assets from a content image and one or more style images. It achieves style-guided 3D generation in a zero-shot manner, enabling fine-grained 3D style control. In experiments, StyleSculptor outperforms existing baseline methods in producing high-fidelity 3D assets.
arXiv Detail & Related papers (2025-09-16T17:55:20Z) - SSGaussian: Semantic-Aware and Structure-Preserving 3D Style Transfer [57.723850794113055]
We propose a novel 3D style transfer pipeline that integrates prior knowledge from pretrained 2D diffusion models. Our pipeline consists of two key stages: first, we leverage diffusion priors to generate stylized renderings of key viewpoints; second, we perform instance-level style transfer, which exploits instance-level consistency across the stylized key views and transfers it onto the 3D representation.
arXiv Detail & Related papers (2025-09-04T16:40:44Z) - Improved 3D Scene Stylization via Text-Guided Generative Image Editing with Region-Based Control [47.14550252881733]
We introduce techniques that enhance the quality of 3D stylization while maintaining view consistency and providing optional region-controlled style transfer. Our method achieves stylization by re-training an initial 3D representation using stylized multi-view 2D images of the source views. We propose a Multi-Region Importance-Weighted Sliced Wasserstein Distance Loss, allowing styles to be applied to distinct image regions using segmentation masks from off-the-shelf models; a generic sliced-Wasserstein sketch appears after this list.
arXiv Detail & Related papers (2025-09-04T15:01:01Z) - Multi-StyleGS: Stylizing Gaussian Splatting with Multiple Styles [45.648346391757336]
3D Gaussian Splatting (GS) has emerged as a promising and efficient method for realistic 3D scene modeling. We introduce a novel 3D GS stylization solution termed Multi-StyleGS to tackle the challenges of stylizing a single scene with multiple styles.
arXiv Detail & Related papers (2025-06-07T15:54:34Z) - ReStyle3D: Scene-Level Appearance Transfer with Semantic Correspondences [33.06053818091165]
ReStyle3D is a framework for scene-level appearance transfer from a single style image to a real-world scene represented by multiple views. It combines explicit semantic correspondences with multi-view consistency to achieve precise and coherent stylization. Our code, pretrained models, and dataset will be publicly released to support new applications in interior design, virtual staging, and 3D-consistent stylization.
arXiv Detail & Related papers (2025-02-14T18:54:21Z) - Style3D: Attention-guided Multi-view Style Transfer for 3D Object Generation [9.212876623996475]
Style3D is a novel approach for generating stylized 3D objects from a content image and a style image. By establishing an interplay between structural and stylistic features across multiple views, our approach enables a holistic 3D stylization process.
arXiv Detail & Related papers (2024-12-04T18:59:38Z) - StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields [52.19291190355375]
StyleRF (Style Radiance Fields) is an innovative 3D style transfer technique.
It employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering.
It transforms the grid features according to the reference style, which directly leads to high-quality zero-shot style transfer.
arXiv Detail & Related papers (2023-03-19T08:26:06Z) - Learning to Stylize Novel Views [82.24095446809946]
We tackle a 3D scene stylization problem - generating stylized images of a scene from arbitrary novel views.
We propose a point cloud-based method for consistent 3D scene stylization.
arXiv Detail & Related papers (2021-05-27T23:58:18Z)
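The region-controlled stylization entry above proposes a Multi-Region Importance-Weighted Sliced Wasserstein Distance Loss. As background for that entry, here is a minimal, generic sliced-Wasserstein distance between two feature sets in PyTorch; the per-region masking and importance weighting of the cited paper are not modeled, and the function name and defaults are hypothetical.

```python
# Generic sliced Wasserstein distance between two feature sets (a sketch;
# not the cited paper's loss, which adds region masks and importance weights).
import torch


def sliced_wasserstein(x: torch.Tensor, y: torch.Tensor,
                       n_proj: int = 64) -> torch.Tensor:
    """x: (N, C) and y: (M, C) feature vectors; returns a scalar distance."""
    c = x.shape[1]
    # Project features onto random unit directions.
    proj = torch.randn(c, n_proj, device=x.device)
    proj = proj / proj.norm(dim=0, keepdim=True)
    px = (x @ proj).sort(dim=0).values  # (N, n_proj) sorted 1D projections
    py = (y @ proj).sort(dim=0).values  # (M, n_proj)
    if px.shape[0] != py.shape[0]:
        # Crude quantile matching when the two sets differ in size.
        k = min(px.shape[0], py.shape[0])
        px = px[torch.linspace(0, px.shape[0] - 1, k, device=x.device).long()]
        py = py[torch.linspace(0, py.shape[0] - 1, k, device=y.device).long()]
    # 1D Wasserstein-2 between sorted samples, averaged over projections.
    return ((px - py) ** 2).mean()
```

A region-weighted variant would evaluate this distance separately under each segmentation mask and sum the results with per-region importance weights.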