Learning to Stylize Novel Views
- URL: http://arxiv.org/abs/2105.13509v1
- Date: Thu, 27 May 2021 23:58:18 GMT
- Title: Learning to Stylize Novel Views
- Authors: Hsin-Ping Huang, Hung-Yu Tseng, Saurabh Saini, Maneesh Singh,
Ming-Hsuan Yang
- Abstract summary: We tackle a 3D scene stylization problem - generating stylized images of a scene from arbitrary novel views.
We propose a point cloud-based method for consistent 3D scene stylization.
- Score: 82.24095446809946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We tackle a 3D scene stylization problem - generating stylized images of a
scene from arbitrary novel views given a set of images of the same scene and a
reference image of the desired style as inputs. Directly combining novel view
synthesis and stylization approaches leads to results that are blurry or
inconsistent across different views. We propose a point cloud-based method
for consistent 3D scene stylization. First, we construct the point cloud by
back-projecting the image features to the 3D space. Second, we develop point
cloud aggregation modules to gather the style information of the 3D scene, and
then modulate the features in the point cloud with a linear transformation
matrix. Finally, we project the transformed features to 2D space to obtain the
novel views. Experimental results on two diverse datasets of real-world scenes
validate that our method generates more consistent stylized novel views than
alternative approaches.
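
The abstract outlines a three-step pipeline: back-project per-pixel image features into a shared 3D point cloud, modulate the point features with a style-conditioned linear transformation matrix, and project the transformed features into the target view. The PyTorch snippet below is only a rough sketch of that pipeline under simplified assumptions; the function names (back_project, stylize_points, project_to_view), the randomly initialized transformation predictor, and the nearest-pixel splatting are illustrative stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn


def back_project(feat, depth, K_inv, cam_to_world):
    """Lift a [C, H, W] feature map into world-space points with per-point features."""
    C, H, W = feat.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).reshape(-1, 3).float()
    rays = pix @ K_inv.t()                          # camera-space rays per pixel
    pts_cam = rays * depth.reshape(-1, 1)           # scale by per-pixel depth
    pts_h = torch.cat([pts_cam, torch.ones(H * W, 1)], dim=-1)
    pts_world = (pts_h @ cam_to_world.t())[:, :3]
    return pts_world, feat.reshape(C, -1).t()       # [N, 3] points, [N, C] features


def stylize_points(point_feats, style_feats, transform_net):
    """Modulate point features with a style-conditioned linear transformation matrix."""
    C = point_feats.shape[1]
    stats = torch.cat([point_feats.mean(0), style_feats.mean(0)])
    T = transform_net(stats).view(C, C)             # predicted C x C transformation
    return point_feats @ T.t()


def project_to_view(points, feats, K, world_to_cam, H, W):
    """Naive nearest-pixel splat of point features into the novel view
    (a stand-in for the learned 2D decoder)."""
    C = feats.shape[1]
    pts_h = torch.cat([points, torch.ones(len(points), 1)], dim=-1)
    pts_cam = (pts_h @ world_to_cam.t())[:, :3]
    uv = pts_cam @ K.t()
    z = uv[:, 2].clamp(min=1e-6)
    u = (uv[:, 0] / z).round().long().clamp(0, W - 1)
    v = (uv[:, 1] / z).round().long().clamp(0, H - 1)
    canvas = torch.zeros(C, H, W)
    canvas[:, v, u] = feats.t()                     # last-write-wins splatting
    return canvas


# Toy end-to-end usage with random data (all sizes are illustrative).
H, W, C = 32, 32, 16
feat = torch.randn(C, H, W)                         # 2D features of one source view
depth = torch.rand(H, W) + 1.0
K = torch.tensor([[30.0, 0, W / 2], [0, 30.0, H / 2], [0, 0, 1.0]])
pose = torch.eye(4)
points, point_feats = back_project(feat, depth, torch.linalg.inv(K), pose)
style_feats = torch.randn(H * W, C)                 # features of the style image
transform_net = nn.Linear(2 * C, C * C)             # trained jointly in the real method
stylized = stylize_points(point_feats, style_feats, transform_net)
novel_feat_map = project_to_view(points, stylized, K, torch.linalg.inv(pose), H, W)
print(novel_feat_map.shape)                         # torch.Size([16, 32, 32])
```

In the paper the transformation predictor and the 2D decoder are learned, and the stylization is applied once to the shared 3D point cloud rather than independently per view, which is what gives the cross-view consistency claimed above.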
Related papers
- PNeSM: Arbitrary 3D Scene Stylization via Prompt-Based Neural Style
Mapping [16.506819625584654]
3D scene stylization refers to transforming the appearance of a 3D scene to match a given style image.
Several existing methods have obtained impressive results in stylizing 3D scenes.
We propose a novel 3D scene stylization framework to transfer an arbitrary style to an arbitrary scene.
arXiv Detail & Related papers (2024-03-13T05:08:47Z)
- 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with 2D Diffusion Models [102.75875255071246]
3D content creation via text-driven stylization poses a fundamental challenge to the multimedia and graphics community.
We propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
arXiv Detail & Related papers (2023-11-09T15:51:27Z)
- S2RF: Semantically Stylized Radiance Fields [1.243080988483032]
We present our method for transferring style from any arbitrary image(s) to object(s) within a 3D scene.
Our primary objective is to offer more control in 3D scene stylization, facilitating the creation of customizable and stylized scene images from arbitrary viewpoints.
arXiv Detail & Related papers (2023-09-03T19:32:49Z)
- CLIP3Dstyler: Language Guided 3D Arbitrary Neural Style Transfer [41.388313754081544]
We propose a novel language-guided 3D arbitrary neural style transfer method (CLIP3Dstyler)
Compared with the previous 2D method CLIPStyler, we are able to stylize a 3D scene and generalize to novel scenes without re-training our model.
We conduct extensive experiments to show the effectiveness of our model on text-guided 3D scene style transfer.
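For context, a minimal text-guided style objective of the general kind that language-guided stylization methods optimize can be sketched with OpenAI's CLIP; this is a simplified assumption (a plain image-text cosine term), not the CLIP3Dstyler loss.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# CLIP's input normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)


def clip_style_loss(rendered, prompt):
    """Penalize dissimilarity between rendered views ([N, 3, H, W] in [0, 1])
    and a textual style description; a simplified stand-in for the paper's objective."""
    x = F.interpolate(rendered, size=(224, 224), mode="bilinear", align_corners=False)
    x = (x - CLIP_MEAN) / CLIP_STD
    img_feat = model.encode_image(x)
    txt_feat = model.encode_text(clip.tokenize([prompt]).to(device))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    return 1.0 - (img_feat @ txt_feat.t()).mean()   # 1 - cosine similarity
```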
arXiv Detail & Related papers (2023-05-25T05:30:13Z)
- Instant Photorealistic Neural Radiance Fields Stylization [1.039189397779466]
We present Instant Neural Radiance Fields Stylization, a novel approach for stylizing multi-view images of a 3D scene.
Our approach models a neural radiance field based on neural graphics primitives, which use a hash table-based position encoder for position embedding.
Our method can generate stylized novel views with a consistent appearance at various view angles in less than 10 minutes on modern GPU hardware.
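As a rough, assumption-laden sketch of the hash table-based position encoder mentioned above (it omits the trilinear interpolation and XOR-style hashing of the real neural-graphics-primitives encoder):

```python
import torch
import torch.nn as nn


class HashGridEncoder(nn.Module):
    """Toy multiresolution hash encoding: each level hashes grid-cell indices
    into a small trainable table; the concatenated features replace a
    frequency positional encoding."""

    def __init__(self, num_levels=8, table_size=2**14, feat_dim=2, base_res=16, growth=1.5):
        super().__init__()
        self.resolutions = [int(base_res * growth ** i) for i in range(num_levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim)) for _ in range(num_levels)]
        )
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, xyz):                                   # xyz in [0, 1]^3, shape [N, 3]
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            idx = (xyz * res).floor().long()                  # grid-cell index per point
            h = (idx * self.primes).sum(-1) % table.shape[0]  # simple spatial hash
            feats.append(table[h])
        return torch.cat(feats, dim=-1)                       # [N, num_levels * feat_dim]


enc = HashGridEncoder()
print(enc(torch.rand(4, 3)).shape)                            # torch.Size([4, 16])
```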
arXiv Detail & Related papers (2023-03-29T17:53:20Z)
- ARF: Artistic Radiance Fields [63.79314417413371]
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
arXiv Detail & Related papers (2022-06-13T17:55:31Z)
- StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning [50.65015652968839]
3D scene stylization aims at generating stylized images of the scene from arbitrary novel views.
Thanks to the recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way.
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF.
arXiv Detail & Related papers (2022-05-24T16:29:50Z)
- Unified Implicit Neural Stylization [80.59831861186227]
This work explores a new intriguing direction: training a stylized implicit representation.
We conduct a pilot study on a variety of implicit functions, including 2D coordinate-based representation, neural radiance field, and signed distance function.
Our solution is a Unified Implicit Neural Stylization framework, dubbed INS.
arXiv Detail & Related papers (2022-04-05T02:37:39Z)
- Stylizing 3D Scene via Implicit Representation and HyperNetwork [34.22448260525455]
A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches.
Inspired by the high quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style.
Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork to transfer the style information into the scene representation.
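A minimal sketch of the two-component idea described above, under toy assumptions rather than the paper's architecture: a hypernetwork maps a style embedding to the weights of an implicit field's color head, so one scene representation can be rendered in different styles without retraining the base network.

```python
import torch
import torch.nn as nn


class StyleHyperField(nn.Module):
    """Toy implicit field whose color head is generated by a hypernetwork
    from a style embedding."""

    def __init__(self, style_dim=64, hidden=128, in_dim=3):
        super().__init__()
        self.hidden = hidden
        self.base = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.hyper = nn.Linear(style_dim, hidden * 3 + 3)    # -> color-head weights + bias

    def forward(self, xyz, style_code):
        h = self.base(xyz)                                   # [N, hidden] scene features
        params = self.hyper(style_code)                      # style-conditioned parameters
        W = params[: self.hidden * 3].view(3, self.hidden)
        b = params[self.hidden * 3:]
        return torch.sigmoid(h @ W.t() + b)                  # [N, 3] style-dependent RGB


field = StyleHyperField()
rgb = field(torch.rand(1024, 3), torch.randn(64))
print(rgb.shape)                                             # torch.Size([1024, 3])
```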
arXiv Detail & Related papers (2021-05-27T09:11:30Z)
- 3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer [66.48720190245616]
We propose a learning-based approach for style transfer between 3D objects.
The proposed method can synthesize new 3D shapes both in the form of point clouds and meshes.
We extend our technique to implicitly learn the multimodal style distribution of the chosen domains.
arXiv Detail & Related papers (2020-11-26T16:59:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.