G3DST: Generalizing 3D Style Transfer with Neural Radiance Fields across Scenes and Styles
- URL: http://arxiv.org/abs/2408.13508v1
- Date: Sat, 24 Aug 2024 08:04:19 GMT
- Title: G3DST: Generalizing 3D Style Transfer with Neural Radiance Fields across Scenes and Styles
- Authors: Adil Meric, Umut Kocasari, Matthias Nießner, Barbara Roessle
- Abstract summary: Existing methods for 3D style transfer need extensive per-scene optimization for single or multiple styles.
In this work, we overcome the limitations of existing methods by rendering stylized novel views from a NeRF without the need for per-scene or per-style optimization.
Our findings demonstrate that this approach achieves visual quality comparable to that of per-scene methods.
- Score: 45.92812062685523
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRF) have emerged as a powerful tool for creating highly detailed and photorealistic scenes. Existing methods for NeRF-based 3D style transfer need extensive per-scene optimization for single or multiple styles, limiting the applicability and efficiency of 3D style transfer. In this work, we overcome the limitations of existing methods by rendering stylized novel views from a NeRF without the need for per-scene or per-style optimization. To this end, we take advantage of a generalizable NeRF model to facilitate style transfer in 3D, thereby enabling the use of a single learned model across various scenes. By incorporating a hypernetwork into a generalizable NeRF, our approach enables on-the-fly generation of stylized novel views. Moreover, we introduce a novel flow-based multi-view consistency loss to preserve consistency across multiple views. We evaluate our method across various scenes and artistic styles and show its performance in generating high-quality and multi-view consistent stylized images without the need for a scene-specific implicit model. Our findings demonstrate that this approach not only achieves a good visual quality comparable to that of per-scene methods but also significantly enhances efficiency and applicability, marking a notable advancement in the field of 3D style transfer.
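The abstract describes two mechanisms: a hypernetwork that, conditioned on a style, produces the parameters of the rendering head of a generalizable NeRF, so a single model serves arbitrary scene/style pairs without per-style optimization. The following is a minimal numpy sketch of that hypernetwork idea only; all names, sizes, and the two-layer head are illustrative assumptions, not the paper's actual architecture.

```python
# Illustrative sketch (not the paper's architecture): a hypernetwork maps a
# style embedding to the weights of a small MLP head that decodes
# generalizable-NeRF features into stylized colors, so one trained model
# covers many styles. All dimensions and names here are hypothetical.
import numpy as np

FEAT, HID, STYLE = 8, 16, 4  # feature, hidden, and style-embedding sizes

rng = np.random.default_rng(0)
# Hypernetwork parameters: linear maps from style code to head weights.
H1 = rng.normal(0.0, 0.1, (STYLE, FEAT * HID))
H2 = rng.normal(0.0, 0.1, (STYLE, HID * 3))

def hyper_head(style_code):
    """Generate per-style weights for a 2-layer RGB head."""
    W1 = (style_code @ H1).reshape(FEAT, HID)
    W2 = (style_code @ H2).reshape(HID, 3)
    return W1, W2

def stylized_rgb(features, style_code):
    """Decode per-sample NeRF features into stylized colors."""
    W1, W2 = hyper_head(style_code)
    h = np.maximum(features @ W1, 0.0)         # ReLU
    return 1.0 / (1.0 + np.exp(-(h @ W2)))     # sigmoid -> RGB in (0, 1)

feats = rng.normal(size=(5, FEAT))   # 5 sample points along rays
style = rng.normal(size=(STYLE,))    # embedding of one style image
rgb = stylized_rgb(feats, style)     # (5, 3): one color per sample
```

Because the head's weights are a function of the style code, switching styles at test time is just a forward pass through the hypernetwork; the flow-based multi-view consistency loss mentioned in the abstract would additionally penalize color differences between views after warping one into the other.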
Related papers
- ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis [63.169364481672915]
We propose ViewCrafter, a novel method for synthesizing high-fidelity novel views of generic scenes from single or sparse images.
Our method takes advantage of the powerful generation capabilities of video diffusion models and the coarse 3D clues offered by point-based representations to generate high-quality video frames.
arXiv Detail & Related papers (2024-09-03T16:53:19Z) - Ada-adapter:Fast Few-shot Style Personlization of Diffusion Model with Pre-trained Image Encoder [57.574544285878794]
Ada-Adapter is a novel framework for few-shot style personalization of diffusion models.
Our method enables efficient zero-shot style transfer utilizing a single reference image.
We demonstrate the effectiveness of our approach on various artistic styles, including flat art, 3D rendering, and logo design.
arXiv Detail & Related papers (2024-07-08T02:00:17Z) - Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images [54.56070204172398]
We propose a simple yet effective pipeline for stylizing a 3D scene.
We perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model.
We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
arXiv Detail & Related papers (2024-06-19T09:36:18Z) - ArtNeRF: A Stylized Neural Field for 3D-Aware Cartoonized Face Synthesis [11.463969116010183]
ArtNeRF is a novel face stylization framework derived from 3D-aware GAN.
We propose an expressive generator to synthesize stylized faces and a triple-branch discriminator module to improve style consistency.
Experiments demonstrate that ArtNeRF is versatile in generating high-quality 3D-aware cartoon faces with arbitrary styles.
arXiv Detail & Related papers (2024-04-21T16:45:35Z) - Gaussian Splatting in Style [32.41970914897462]
3D scene stylization extends the work of neural style transfer to 3D.
A vital challenge in this problem is to maintain the uniformity of the stylized appearance across multiple views.
We propose a novel architecture trained on a collection of style images that, at test time, produces real-time, high-quality stylized novel views.
arXiv Detail & Related papers (2024-03-13T13:06:31Z) - FPRF: Feed-Forward Photorealistic Style Transfer of Large-Scale 3D Neural Radiance Fields [23.705795612467956]
FPRF stylizes large-scale 3D scenes with arbitrary, multiple style reference images without additional optimization.
FPRF achieves favorable photorealistic-quality stylization of large-scale 3D scenes with diverse reference images.
arXiv Detail & Related papers (2024-01-10T19:27:28Z) - Towards 4D Human Video Stylization [56.33756124829298]
We present a first step towards 4D (3D and time) human video stylization, which addresses style transfer, novel view synthesis and human animation.
We leverage Neural Radiance Fields (NeRFs) to represent videos, conducting stylization in the rendered feature space.
Our framework uniquely extends its capabilities to accommodate novel poses and viewpoints, making it a versatile tool for creative human video stylization.
arXiv Detail & Related papers (2023-12-07T08:58:33Z) - StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields [52.19291190355375]
StyleRF (Style Radiance Fields) is an innovative 3D style transfer technique.
It employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering.
It transforms the grid features according to the reference style, which directly leads to high-quality zero-shot style transfer.
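StyleRF's zero-shot behavior comes from transforming scene features to match a reference style rather than re-optimizing the scene. One widely used feature transformation of this kind, shown here purely as an illustration (not necessarily StyleRF's exact transform), is adaptive instance normalization (AdaIN), which re-scales content features so their per-channel statistics match those of the style features:

```python
# Illustrative AdaIN-style feature transform: align the per-channel mean and
# standard deviation of content features with those of style features.
# This stands in for "transforming grid features according to the style";
# it is not claimed to be StyleRF's exact operator.
import numpy as np

def adain(content, style, eps=1e-5):
    """content, style: (N, C) arrays of feature vectors."""
    c_mu, c_std = content.mean(0), content.std(0) + eps
    s_mu, s_std = style.mean(0), style.std(0) + eps
    return (content - c_mu) / c_std * s_std + s_mu

rng = np.random.default_rng(1)
content = rng.normal(0.0, 1.0, (100, 8))  # e.g. features from the scene grid
style = rng.normal(3.0, 2.0, (100, 8))    # features from a style image
out = adain(content, style)               # content layout, style statistics
```

Because the transform is a closed-form statistic match, a new reference style needs no optimization, which is what makes the transfer zero-shot.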
arXiv Detail & Related papers (2023-03-19T08:26:06Z) - NeRF-Art: Text-Driven Neural Radiance Fields Stylization [38.3724634394761]
We present NeRF-Art, a text-guided NeRF stylization approach that manipulates the style of a pre-trained NeRF model with a simple text prompt.
We show that our method is effective and robust regarding both single-view stylization quality and cross-view consistency.
arXiv Detail & Related papers (2022-12-15T18:59:58Z) - SNeRF: Stylized Neural Implicit Representations for 3D Scenes [9.151746397358522]
This paper investigates 3D scene stylization that provides a strong inductive bias for consistent novel view synthesis.
We adopt the emerging neural radiance fields (NeRF) as our choice of 3D scene representation.
We introduce a new training method to address this problem by alternating the NeRF and stylization optimization steps.
arXiv Detail & Related papers (2022-07-05T23:45:02Z)
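SNeRF's alternating schedule can be sketched as a training loop that interleaves reconstruction updates (fitting the radiance field to real views) with stylization updates (pushing rendered views toward a style target). The step functions below are hypothetical stand-ins for the real gradient updates; only the control flow is the point.

```python
# Hypothetical sketch of SNeRF-style alternating optimization. `recon_step`
# and `style_step` stand in for gradient updates on the photometric loss and
# the style loss, respectively; the alternation is the technique.
def alternate_training(params, n_rounds, recon_step, style_step,
                       recon_iters=10, style_iters=10):
    log = []
    for r in range(n_rounds):
        for _ in range(recon_iters):   # phase 1: fit NeRF to real views
            params = recon_step(params)
        log.append(("recon", r))
        for _ in range(style_iters):   # phase 2: stylize rendered views
            params = style_step(params)
        log.append(("style", r))
    return params, log

# Toy stand-ins just to exercise the control flow.
p, trace = alternate_training(0.0, 2,
                              recon_step=lambda p: p + 1,
                              style_step=lambda p: p - 0.5,
                              recon_iters=3, style_iters=2)
```

Alternating the two objectives, rather than summing them into one loss, lets each phase use its own memory-heavy rendering budget, which is the practical motivation the SNeRF abstract points at.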
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.