Neural 3D Strokes: Creating Stylized 3D Scenes with Vectorized 3D Strokes
- URL: http://arxiv.org/abs/2311.15637v2
- Date: Tue, 12 Mar 2024 09:34:53 GMT
- Title: Neural 3D Strokes: Creating Stylized 3D Scenes with Vectorized 3D Strokes
- Authors: Hao-Bin Duan, Miao Wang, Yan-Xun Li and Yong-Liang Yang
- Abstract summary: We present Neural 3D Strokes, a novel technique to generate stylized images of a 3D scene at arbitrary novel views from multi-view 2D images.
Our approach draws inspiration from image-to-painting methods, simulating the progressive painting process of human artwork with vector strokes.
- Score: 20.340259111585873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Neural 3D Strokes, a novel technique to generate stylized images
of a 3D scene at arbitrary novel views from multi-view 2D images. Unlike
existing methods, which apply stylization to trained neural radiance fields
at the voxel level, our approach draws inspiration from image-to-painting
methods, simulating the progressive painting process of human artwork with
vector strokes. We develop a palette of stylized 3D strokes from basic
primitives and splines, and consider the 3D scene stylization task as a
multi-view reconstruction process based on these 3D stroke primitives. Instead
of directly searching for the parameters of these 3D strokes, which would be
too costly, we introduce a differentiable renderer that allows optimizing
stroke parameters using gradient descent, and propose a training scheme to
alleviate the vanishing gradient issue. Extensive evaluation demonstrates
that our approach effectively synthesizes 3D scenes with significant geometric
and aesthetic stylization while maintaining a consistent appearance across
different views. Our method can be further integrated with style loss and
image-text contrastive models to extend its applications, including color
transfer and text-driven 3D scene drawing. Results and code are available at
http://buaavrcg.github.io/Neural3DStrokes.
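The paper's actual implementation is available at the project page above; what follows is only a minimal PyTorch sketch of the core idea described in the abstract, not the authors' code. It simplifies the stroke palette to soft spheres (the paper uses a richer palette of primitives and splines), composites them with standard volume rendering so gradients reach the stroke parameters, and fits them to target pixel colors by gradient descent; the paper's training scheme for alleviating vanishing gradients is omitted, and the names (`StrokeField`, `render_rays`) and hyperparameters are illustrative assumptions.

```python
# Minimal, illustrative sketch (not the paper's implementation): soft spheres stand in
# for the paper's stroke palette, and a soft occupancy keeps the renderer differentiable
# so stroke parameters can be optimized with gradient descent against multi-view images.
import torch
import torch.nn as nn


class StrokeField(nn.Module):
    """A set of N stroke primitives, here simplified to soft spheres."""

    def __init__(self, num_strokes: int = 64):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_strokes, 3) * 0.5)
        self.log_radii = nn.Parameter(torch.full((num_strokes,), -1.0))
        self.colors = nn.Parameter(torch.rand(num_strokes, 3))
        self.log_density = nn.Parameter(torch.zeros(num_strokes))

    def forward(self, pts):                                   # pts: (..., 3)
        d = torch.cdist(pts.reshape(-1, 3), self.centers)     # (P, N) point-to-center distances
        radii = self.log_radii.exp()
        occ = torch.sigmoid((radii - d) / (0.1 * radii))      # soft sphere indicator per stroke
        sigma = (occ * self.log_density.exp()).sum(-1)        # (P,) composited density
        w = occ / occ.sum(-1, keepdim=True).clamp_min(1e-6)   # per-stroke color weights
        rgb = (w.unsqueeze(-1) * torch.sigmoid(self.colors)).sum(-2)  # (P, 3)
        return sigma.reshape(pts.shape[:-1]), rgb.reshape(*pts.shape[:-1], 3)


def render_rays(field, origins, dirs, num_samples=64, near=0.5, far=3.0):
    """Standard volume rendering along rays; gradients flow back to stroke parameters."""
    t = torch.linspace(near, far, num_samples, device=origins.device)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]   # (R, S, 3)
    sigma, rgb = field(pts)
    delta = (far - near) / num_samples
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights.unsqueeze(-1) * rgb).sum(dim=1)                   # (R, 3) pixel colors


# Toy optimization loop against one batch of rays with known target colors.
field = StrokeField()
opt = torch.optim.Adam(field.parameters(), lr=1e-2)
origins = torch.zeros(1024, 3)
dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
target = torch.rand(1024, 3)   # in practice, sampled from posed multi-view photos
for step in range(200):
    loss = ((render_rays(field, origins, dirs) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that in this simplified setup the softness of the stroke boundary (the temperature inside the sigmoid) controls how far from a stroke's surface useful gradients exist, which is the kind of vanishing-gradient issue the paper's training scheme is designed to address.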
Related papers
- Diff3DS: Generating View-Consistent 3D Sketch via Differentiable Curve Rendering [17.918603435615335]
3D sketches are widely used for visually representing the 3D shape and structure of objects or scenes.
We propose Diff3DS, a novel differentiable framework for generating view-consistent 3D sketches.
Our framework bridges the domains of 3D sketch and customized image, achieving end-to-end optimization of 3D sketches.
arXiv Detail & Related papers (2024-05-24T07:48:14Z)
- Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes a novel generation paradigm, Sketch3D, to generate realistic 3D assets whose shape aligns with the input sketch and whose color matches the textual description.
Three strategies are designed to optimize 3D Gaussians: structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch similarity optimization with a CLIP-based geometric similarity loss (a simplified sketch of such a combined loss is shown after this list).
arXiv Detail & Related papers (2024-04-02T11:03:24Z)
- 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with 2D Diffusion Models [102.75875255071246]
3D content creation via text-driven stylization has posed a fundamental challenge to the multimedia and graphics community.
We propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
arXiv Detail & Related papers (2023-11-09T15:51:27Z)
- Control3D: Towards Controllable Text-to-3D Generation [107.81136630589263]
We present Control3D, a text-to-3D generation approach conditioned on an additional hand-drawn sketch.
A 2D conditioned diffusion model (ControlNet) is remoulded to guide the learning of a 3D scene parameterized as a NeRF.
We exploit a pre-trained differentiable photo-to-sketch model to directly estimate the sketch of the image rendered from the synthetic 3D scene.
arXiv Detail & Related papers (2023-11-09T15:50:32Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z)
- 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, resulting in effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z)
- Realistic Image Synthesis with Configurable 3D Scene Layouts [59.872657806747576]
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
arXiv Detail & Related papers (2021-08-23T09:44:56Z)
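As referenced in the Sketch3D entry above, its summary names three losses but not how they combine; below is a minimal, generic PyTorch sketch of weighting a pixel-space MSE term against a CLIP-based similarity term. The function name, the `clip_image_encoder` callable, and the weights are placeholders and do not reproduce Sketch3D's actual distribution-transfer or geometric-similarity formulation.

```python
import torch
import torch.nn.functional as F


def combined_loss(rendered, target_rgb, rendered_sketch, input_sketch,
                  clip_image_encoder, w_mse=1.0, w_clip=0.1):
    """Illustrative weighted sum: pixel-space MSE plus a CLIP-feature similarity term."""
    mse = F.mse_loss(rendered, target_rgb)
    # Cosine distance between CLIP embeddings of the rendered sketch and the input sketch.
    f_r = F.normalize(clip_image_encoder(rendered_sketch), dim=-1)
    f_s = F.normalize(clip_image_encoder(input_sketch), dim=-1)
    clip_term = 1.0 - (f_r * f_s).sum(dim=-1).mean()
    return w_mse * mse + w_clip * clip_term
```

With the OpenAI CLIP or open_clip packages, `clip_image_encoder` could be the model's `encode_image` applied after the appropriate preprocessing; the loss weights here are arbitrary.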
This list is automatically generated from the titles and abstracts of the papers on this site.