StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions
- URL: http://arxiv.org/abs/2112.01530v1
- Date: Thu, 2 Dec 2021 18:59:59 GMT
- Title: StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions
- Authors: Lukas Höllein, Justin Johnson, Matthias Nießner
- Abstract summary: We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
- Score: 11.153966202832933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We apply style transfer on mesh reconstructions of indoor scenes. This
enables VR applications like experiencing 3D environments painted in the style
of a favorite artist. Style transfer typically operates on 2D images, making
stylization of a mesh challenging. When optimized over a variety of poses,
stylization patterns become stretched out and inconsistent in size. On the
other hand, model-based 3D style transfer methods exist that allow stylization
from a sparse set of images, but they require a network at inference time. To
this end, we optimize an explicit texture for the reconstructed mesh of a scene
and stylize it jointly from all available input images. Our depth- and
angle-aware optimization leverages surface normal and depth data of the
underlying mesh to create a uniform and consistent stylization for the whole
scene. Our experiments show that our method creates sharp and detailed results
for the complete scene without view-dependent artifacts. Through extensive
ablation studies, we show that the proposed 3D awareness enables style transfer
to be applied to the 3D domain of a mesh. Our method can be used to render a
stylized mesh in real-time with traditional rendering pipelines.
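The abstract does not spell out the exact weighting, so below is a minimal PyTorch-style sketch of how depth and viewing-angle terms might modulate a Gram-matrix style loss while an explicit mesh texture is optimized; the `vgg_features` handle, the weighting formula, and the `ref_depth` parameter are illustrative assumptions rather than the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def gram(feat):
    # Gram matrix of a (B, C, H, W) feature map, normalized by its size.
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def depth_angle_weight(depth, normals, view_dirs, ref_depth=2.0):
    # Per-pixel weight in [0, 1]: small for distant surfaces and grazing angles,
    # where 2D stylization patterns would otherwise shrink or stretch.
    #   depth:     (B, 1, H, W) rendered depth of the mesh
    #   normals:   (B, 3, H, W) unit surface normals rendered from the mesh
    #   view_dirs: (B, 3, H, W) unit vectors from each surface point toward the camera
    cos_angle = (normals * view_dirs).sum(dim=1, keepdim=True).clamp(min=0.0)
    depth_term = (ref_depth / depth.clamp(min=1e-3)).clamp(max=1.0)
    return cos_angle * depth_term

def weighted_style_loss(vgg_features, rendered, style_image, weight):
    # Gram-matrix style loss with features masked by the depth/angle weight.
    # vgg_features: callable returning a list of feature maps (assumed helper).
    loss = rendered.new_zeros(())
    for fr, fs in zip(vgg_features(rendered), vgg_features(style_image)):
        w = F.interpolate(weight, size=fr.shape[-2:], mode="bilinear",
                          align_corners=False)
        loss = loss + F.mse_loss(gram(fr * w), gram(fs))
    return loss
```

In such a setup the texture image would be the only optimized parameter: each step renders the mesh from one of the input poses with a differentiable renderer, evaluates this loss against the style image, and backpropagates into the texture.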
Related papers
- StyleSplat: 3D Object Style Transfer with Gaussian Splatting [0.3374875022248866]
Style transfer can enhance 3D assets with diverse artistic styles, transforming creative expression.
We introduce StyleSplat, a method for stylizing 3D objects in scenes represented by 3D Gaussians from reference style images.
We demonstrate its effectiveness across various 3D scenes and styles, showcasing enhanced control and customization in 3D creation.
arXiv Detail & Related papers (2024-07-12T17:55:08Z) - Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images [54.56070204172398]
We propose a simple yet effective pipeline for stylizing a 3D scene.
We perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model.
We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
arXiv Detail & Related papers (2024-06-19T09:36:18Z) - StyleCity: Large-Scale 3D Urban Scenes Stylization [16.017767577678253]
StyleCity is a vision-and-text-driven texture stylization system for large-scale urban scenes.
StyleCity stylizes a 3D textured mesh of a large-scale urban scene in a semantics-aware fashion.
arXiv Detail & Related papers (2024-04-16T15:58:49Z) - LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example [5.999050119438177]
We propose a method that produces a highly stylized 3D face model with the desired topology.
Our method trains a surface deformation network with a 3DMM and translates its domain to the target style using differentiable mesh rendering and directional CLIP losses, so that the deformed 3D face mesh mimics the style of the target example (a sketch of a directional CLIP loss appears after this list).
arXiv Detail & Related papers (2024-03-22T14:20:54Z) - Neural 3D Strokes: Creating Stylized 3D Scenes with Vectorized 3D Strokes [20.340259111585873]
We present Neural 3D Strokes, a novel technique to generate stylized images of a 3D scene at arbitrary novel views from multi-view 2D images.
Our approach draws inspiration from image-to-painting methods, simulating the progressive painting process of human artwork with vector strokes.
arXiv Detail & Related papers (2023-11-27T09:02:21Z) - 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with 2D Diffusion Models [102.75875255071246]
3D content creation via text-driven stylization poses a fundamental challenge to the multimedia and graphics community.
We propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
arXiv Detail & Related papers (2023-11-09T15:51:27Z) - S2RF: Semantically Stylized Radiance Fields [1.243080988483032]
We present our method for transferring style from any arbitrary image(s) to object(s) within a 3D scene.
Our primary objective is to offer more control in 3D scene stylization, facilitating the creation of customizable and stylized scene images from arbitrary viewpoints.
arXiv Detail & Related papers (2023-09-03T19:32:49Z) - HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z) - ARF: Artistic Radiance Fields [63.79314417413371]
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
arXiv Detail & Related papers (2022-06-13T17:55:31Z) - 3DStyleNet: Creating 3D Shapes with Geometric and Texture Style Variations [81.45521258652734]
We propose a method to create plausible geometric and texture style variations of 3D objects.
Our method can create many novel stylized shapes, enabling effortless 3D content creation and style-aware data augmentation.
arXiv Detail & Related papers (2021-08-30T02:28:31Z) - Learning to Stylize Novel Views [82.24095446809946]
We tackle a 3D scene stylization problem - generating stylized images of a scene from arbitrary novel views.
We propose a point cloud-based method for consistent 3D scene stylization.
arXiv Detail & Related papers (2021-05-27T23:58:18Z)
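The LeGO entry above mentions directional CLIP losses. As a rough illustration of that kind of text-guided objective, here is a minimal sketch of a directional CLIP loss; the `encode_image` handle and the precomputed text embeddings are assumptions for illustration, not the paper's actual interface.

```python
import torch.nn.functional as F

def directional_clip_loss(encode_image, source_render, stylized_render,
                          source_text_emb, target_text_emb):
    # The shift in CLIP image space from the source render to the stylized render
    # should align with the shift in CLIP text space from the source description
    # (e.g. "a face") to the target style description (e.g. "a face in the style
    # of a cartoon"). encode_image is an assumed handle to a CLIP image encoder.
    delta_img = encode_image(stylized_render) - encode_image(source_render)
    delta_txt = target_text_emb - source_text_emb
    return 1.0 - F.cosine_similarity(delta_img, delta_txt, dim=-1).mean()
```

Minimizing this loss pushes the stylization in the semantic direction described by the text pair rather than toward the target embedding alone, which tends to preserve the identity of the source geometry.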
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.