FPRF: Feed-Forward Photorealistic Style Transfer of Large-Scale 3D
Neural Radiance Fields
- URL: http://arxiv.org/abs/2401.05516v1
- Date: Wed, 10 Jan 2024 19:27:28 GMT
- Title: FPRF: Feed-Forward Photorealistic Style Transfer of Large-Scale 3D
Neural Radiance Fields
- Authors: GeonU Kim, Kim Youwang, Tae-Hyun Oh
- Abstract summary: FPRF stylizes large-scale 3D scenes with arbitrary, multiple style reference images without additional optimization.
FPRF achieves photorealistic 3D scene stylization of favorable quality for large-scale scenes with diverse reference images.
- Score: 23.705795612467956
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present FPRF, a feed-forward photorealistic style transfer method for
large-scale 3D neural radiance fields. FPRF stylizes large-scale 3D scenes with
arbitrary, multiple style reference images without additional optimization
while preserving multi-view appearance consistency. Prior methods required tedious
per-style/per-scene optimization and were limited to small-scale 3D scenes. FPRF
efficiently stylizes large-scale 3D scenes by introducing a style-decomposed 3D
neural radiance field, which inherits AdaIN's feed-forward stylization
machinery, supporting arbitrary style reference images. Furthermore, FPRF
supports multi-reference stylization with the semantic correspondence matching
and local AdaIN, which adds diverse user control for 3D scene styles. FPRF also
preserves multi-view consistency by applying semantic matching and style
transfer processes directly onto queried features in 3D space. In experiments,
we demonstrate that FPRF achieves photorealistic 3D scene stylization of
favorable quality for large-scale scenes with diverse reference images. Project page:
https://kim-geonu.github.io/FPRF/
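For orientation, the abstract's two key ingredients, AdaIN's feed-forward statistic matching and the local AdaIN driven by semantic correspondence matching, can be summarized in a few lines. The sketch below is a minimal, hypothetical PyTorch illustration of those operations applied to features queried in 3D space; the function names, tensor shapes, and the one-embedding-per-style-image semantic representation are assumptions for illustration, not FPRF's released code.

```python
# Hypothetical sketch of the feed-forward stylization step described in the abstract:
# AdaIN applied directly to features queried in 3D space, plus a "local AdaIN" variant
# that first matches each 3D point to its semantically closest style reference.
# Names and shapes are illustrative assumptions, not FPRF's actual implementation.
import torch
import torch.nn.functional as F

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Match per-channel mean/std of `content` (N, C) to those of `style` (M, C)."""
    c_mean, c_std = content.mean(0), content.std(0) + eps
    s_mean, s_std = style.mean(0), style.std(0) + eps
    return (content - c_mean) / c_std * s_std + s_mean

def local_adain(scene_feat: torch.Tensor,        # (N, C) appearance features queried in 3D
                scene_sem: torch.Tensor,         # (N, D) semantic features of the same points
                style_feats: list[torch.Tensor], # K style references, each (M_k, C)
                style_sems: torch.Tensor,        # (K, D) one semantic embedding per reference
                ) -> torch.Tensor:
    """Assign every 3D point to its semantically closest style reference, then
    run AdaIN per group, so multiple references stylize different regions."""
    # Cosine-similarity semantic correspondence between 3D points and style references.
    sim = F.normalize(scene_sem, dim=-1) @ F.normalize(style_sems, dim=-1).T  # (N, K)
    assign = sim.argmax(dim=-1)                                               # (N,)
    out = scene_feat.clone()
    for k, style_feat in enumerate(style_feats):
        mask = assign == k
        if mask.any():
            out[mask] = adain(scene_feat[mask], style_feat)
    return out
```

Because the statistics are matched on features in 3D space before rendering, every rendered view sees the same transformed features, which is how this kind of design preserves multi-view consistency.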
Related papers
- G3DST: Generalizing 3D Style Transfer with Neural Radiance Fields across Scenes and Styles [45.92812062685523]
Existing methods for 3D style transfer need extensive per-scene optimization for single or multiple styles.
In this work, we overcome the limitations of existing methods by rendering stylized novel views from a NeRF without the need for per-scene or per-style optimization.
Our findings demonstrate that this approach achieves visual quality comparable to that of per-scene methods.
arXiv Detail & Related papers (2024-08-24T08:04:19Z)
- Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images [54.56070204172398]
We propose a simple yet effective pipeline for stylizing a 3D scene.
We perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model.
We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
arXiv Detail & Related papers (2024-06-19T09:36:18Z)
- Gaussian Splatting in Style [32.41970914897462]
3D scene stylization extends the work of neural style transfer to 3D.
A vital challenge in this problem is to maintain the uniformity of the stylized appearance across multiple views.
We propose a novel architecture trained on a collection of style images that, at test time, produces real time high-quality stylized novel views.
arXiv Detail & Related papers (2024-03-13T13:06:31Z)
- 3D Face Style Transfer with a Hybrid Solution of NeRF and Mesh Rasterization [4.668492532161309]
We propose to use a neural radiance field (NeRF) to represent a 3D human face and combine it with 2D style transfer to stylize the 3D face.
We find that directly training a NeRF on stylized images from 2D style transfer introduces 3D inconsistency and causes blurriness.
We propose a hybrid framework of NeRF and mesh rasterization that combines the benefits of NeRF's high-fidelity geometry reconstruction and the fast rendering of meshes.
arXiv Detail & Related papers (2023-11-22T05:24:35Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields [52.19291190355375]
StyleRF (Style Radiance Fields) is an innovative 3D style transfer technique.
It employs an explicit grid of high-level features to represent 3D scenes, with which high-fidelity geometry can be reliably restored via volume rendering.
It transforms the grid features according to the reference style which directly leads to high-quality zero-shot style transfer.
arXiv Detail & Related papers (2023-03-19T08:26:06Z)
- StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning [50.65015652968839]
3D scene stylization aims at generating stylized images of the scene from arbitrary novel views.
Thanks to the recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way.
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF.
arXiv Detail & Related papers (2022-05-24T16:29:50Z)
- StyleNeRF: A Style-based 3D-Aware Generator for High-resolution Image Synthesis [92.25145204543904]
StyleNeRF is a 3D-aware generative model for high-resolution image synthesis with high multi-view consistency.
It integrates the neural radiance field (NeRF) into a style-based generator.
It can synthesize high-resolution images at interactive rates while preserving 3D consistency at high quality.
arXiv Detail & Related papers (2021-10-18T02:37:01Z)