Gaussian Splatting in Style
- URL: http://arxiv.org/abs/2403.08498v1
- Date: Wed, 13 Mar 2024 13:06:31 GMT
- Title: Gaussian Splatting in Style
- Authors: Abhishek Saroha, Mariia Gladkova, Cecilia Curreli, Tarun Yenamandra,
Daniel Cremers
- Abstract summary: We propose a novel architecture, trained on a collection of style images, that produces high-quality stylized novel views at test time.
We show that our method achieves state-of-the-art performance with superior visual quality on various indoor and outdoor real-world data.
- Score: 35.376015119962354
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Scene stylization extends the work of neural style transfer to three spatial
dimensions. A vital challenge in this problem is to maintain the uniformity of
the stylized appearance across a multi-view setting. The vast majority of
previous works achieve this by optimizing the scene with a specific style
image. In contrast, we propose a novel architecture, trained on a collection of
style images, that produces high-quality stylized novel views at test time. Our
work builds on the framework of 3D Gaussian splatting. For a given scene, we
take the pretrained Gaussians and process them using a multi-resolution hash
grid and a tiny MLP to obtain the conditional stylized views. The explicit
nature of 3D Gaussians gives us inherent advantages over NeRF-based methods,
including geometric consistency and a fast training and rendering regime. This
makes our method useful for a wide range of practical use cases, such as
augmented and virtual reality applications. Through our experiments, we show
that our method achieves state-of-the-art performance with superior visual
quality on various indoor and outdoor real-world data.
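The pipeline described in the abstract (pretrained Gaussians whose appearance is re-predicted by a multi-resolution hash grid and a tiny MLP, conditioned on a style code) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the simplified hash encoding (nearest-vertex lookup rather than trilinear interpolation), all layer sizes, and the style-encoder interface are hypothetical.

```python
# Minimal sketch of the described pipeline, NOT the authors' code.
# Assumptions: nearest-vertex hash lookup (a full encoding would use
# trilinear interpolation), all layer sizes, and the style-code source.
import torch
import torch.nn as nn

class HashGrid(nn.Module):
    """Simplified multi-resolution hash encoding (in the spirit of Instant-NGP)."""
    def __init__(self, n_levels=8, table_size=2**16, feat_dim=2,
                 base_res=16, growth=1.5):
        super().__init__()
        self.tables = nn.ModuleList(
            [nn.Embedding(table_size, feat_dim) for _ in range(n_levels)])
        self.res = [int(base_res * growth ** i) for i in range(n_levels)]
        # Per-dimension hashing constants, as used in the Instant-NGP paper.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, xyz):                          # xyz in [0, 1]^3, shape (N, 3)
        feats = []
        for table, res in zip(self.tables, self.res):
            idx = (xyz * res).long()                 # nearest grid vertex per level
            h = (idx * self.primes).sum(-1) % table.num_embeddings
            feats.append(table(h))                   # (N, feat_dim) per level
        return torch.cat(feats, dim=-1)              # (N, n_levels * feat_dim)

class StyleColorMLP(nn.Module):
    """Tiny MLP mapping (position features, style code) -> RGB per Gaussian."""
    def __init__(self, feat_dim=16, style_dim=64, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, feats, style):                 # style: (1, style_dim)
        style = style.expand(feats.shape[0], -1)
        return self.net(torch.cat([feats, style], dim=-1))

# Usage: the geometry of the pretrained Gaussians stays fixed; only colors are
# re-predicted, conditioned on a style code (e.g. from a style-image encoder).
means = torch.rand(10_000, 3)                        # pretrained Gaussian centers
grid, mlp = HashGrid(), StyleColorMLP()
style_code = torch.randn(1, 64)                      # hypothetical style embedding
rgb = mlp(grid(means), style_code)                   # (10_000, 3) stylized colors
```

Keeping the pretrained geometry (means, covariances, opacities) fixed and re-predicting only the colors is consistent with the abstract's claim of geometric consistency: the explicit Gaussian structure is shared across all styles while appearance varies.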
Related papers
- StyleSplat: 3D Object Style Transfer with Gaussian Splatting [0.3374875022248866]
Style transfer can enhance 3D assets with diverse artistic styles, opening new avenues for creative expression.
We introduce StyleSplat, a method for stylizing 3D objects in scenes represented by 3D Gaussians from reference style images.
We demonstrate its effectiveness across various 3D scenes and styles, showcasing enhanced control and customization in 3D creation.
arXiv Detail & Related papers (2024-07-12T17:55:08Z)
- Reference-based Controllable Scene Stylization with Gaussian Splatting [30.321151430263946]
Reference-based scene stylization, which edits the appearance based on a content-aligned reference image, is an emerging research area.
We propose ReGS, which adapts 3D Gaussian Splatting (3DGS) for reference-based stylization to enable real-time stylized view synthesis.
arXiv Detail & Related papers (2024-07-09T20:30:29Z)
- Ada-adapter:Fast Few-shot Style Personlization of Diffusion Model with Pre-trained Image Encoder [57.574544285878794]
Ada-Adapter is a novel framework for few-shot style personalization of diffusion models.
Our method enables efficient zero-shot style transfer utilizing a single reference image.
We demonstrate the effectiveness of our approach on various artistic styles, including flat art, 3D rendering, and logo design.
arXiv Detail & Related papers (2024-07-08T02:00:17Z)
- SWAG: Splatting in the Wild images with Appearance-conditioned Gaussians [2.2369578015657954]
Implicit neural representation methods have shown impressive advancements in learning 3D scenes from unstructured in-the-wild photo collections.
We introduce a new mechanism to train transient Gaussians to handle the presence of scene occluders in an unsupervised manner.
arXiv Detail & Related papers (2024-03-15T16:00:04Z)
- VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z)
- Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
- ARF: Artistic Radiance Fields [63.79314417413371]
We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
arXiv Detail & Related papers (2022-06-13T17:55:31Z)
- StyleMesh: Style Transfer for Indoor 3D Scene Reconstructions [11.153966202832933]
We apply style transfer on mesh reconstructions of indoor scenes.
This enables VR applications like experiencing 3D environments painted in the style of a favorite artist.
arXiv Detail & Related papers (2021-12-02T18:59:59Z)
- Stylizing 3D Scene via Implicit Representation and HyperNetwork [34.22448260525455]
A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches.
Inspired by the high-quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style.
Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork to transfer the style information into the scene representation (a minimal sketch of this design appears after this list).
arXiv Detail & Related papers (2021-05-27T09:11:30Z)
- IBRNet: Learning Multi-View Image-Based Rendering [67.15887251196894]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
arXiv Detail & Related papers (2021-02-25T18:56:21Z)
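For the hypernetwork framework summarized above (Stylizing 3D Scene via Implicit Representation and HyperNetwork), the following is a minimal sketch of the two-component design: a NeRF-like scene representation plus a hypernetwork that injects style. It is an illustration under assumptions only; the trunk architecture, the choice to predict just a small color head, all sizes, and the style-embedding source are hypothetical, not the paper's specification.

```python
# Minimal sketch, NOT the paper's implementation: a hypernetwork predicts the
# weights of a small NeRF color head from a style embedding, so one scene
# representation can be rendered in arbitrary styles. Sizes are assumptions.
import torch
import torch.nn as nn

class HyperStyleNeRF(nn.Module):
    def __init__(self, style_dim=64, feat_dim=128, hidden=64):
        super().__init__()
        # Style-agnostic trunk: 3D position -> latent features and density.
        # (A real NeRF would add positional encoding and view directions.)
        self.trunk = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                   nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.density = nn.Linear(feat_dim, 1)
        # Hypernetwork: style code -> weights and bias of a linear color head.
        self.n_w = 3 * feat_dim
        self.hyper = nn.Sequential(nn.Linear(style_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, self.n_w + 3))

    def forward(self, xyz, style):                   # xyz (N, 3), style (style_dim,)
        h = self.trunk(xyz)                          # (N, feat_dim)
        sigma = torch.relu(self.density(h))          # (N, 1) volume density
        params = self.hyper(style)                   # flat color-head parameters
        W = params[:self.n_w].view(3, -1)            # (3, feat_dim) weight matrix
        b = params[self.n_w:]                        # (3,) bias
        rgb = torch.sigmoid(h @ W.t() + b)           # (N, 3) style-conditioned color
        return rgb, sigma

model = HyperStyleNeRF()
xyz = torch.rand(4096, 3)                            # sampled points along rays
style = torch.randn(64)                              # e.g. features of a style image
rgb, sigma = model(xyz, style)                       # inputs to volume rendering
```

Because the trunk and density head never see the style code, the scene's geometry is shared across styles; only the color head changes per style, which is the usual rationale for hypernetwork-based stylization of radiance fields.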