Instant Photorealistic Neural Radiance Fields Stylization
- URL: http://arxiv.org/abs/2303.16884v2
- Date: Tue, 2 Jul 2024 10:48:32 GMT
- Title: Instant Photorealistic Neural Radiance Fields Stylization
- Authors: Shaoxu Li, Ye Pan
- Abstract summary: We present Instant Neural Radiance Fields Stylization, a novel approach to multi-view image stylization for 3D scenes.
Our approach models a neural radiance field based on neural graphics primitives, which use a hash-table-based encoder for position embedding.
Our method can generate stylized novel views with a consistent appearance at various view angles in less than 10 minutes on modern GPU hardware.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Instant Neural Radiance Fields Stylization, a novel approach to multi-view image stylization for 3D scenes. Our approach models a neural radiance field based on neural graphics primitives, which use a hash-table-based encoder for position embedding. We split the position encoder into two parts, a content sub-branch and a style sub-branch, and train the network for ordinary novel view synthesis on the content and style targets. At inference, we apply AdaIN to the output features of the position encoder, using content and style voxel grid features as reference; with the adjusted features, stylized novel view images can be obtained. Our method extends the style target from style images to image sets of scenes and requires no additional network training for stylization. Given a set of images of a 3D scene and a style target (a style image or another set of scene images), our method generates stylized novel views with a consistent appearance across view angles in under 10 minutes on modern GPU hardware. Extensive experiments demonstrate the validity and superiority of our method.
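At inference, the stylization amounts to applying AdaIN to the position-encoder outputs so that content features take on the channel statistics of the style reference. Below is a minimal PyTorch sketch of that step; `content_encoder`, `style_reference_feats`, and `radiance_mlp` are hypothetical stand-ins, not the paper's actual components.

```python
import torch

def adain(content_feat: torch.Tensor, style_feat: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """AdaIN over point-sampled features:
    AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y).

    content_feat: (N, F) encoder outputs for sampled positions.
    style_feat:   (M, F) reference features (e.g. from a style voxel grid).
    """
    c_mu = content_feat.mean(dim=0, keepdim=True)
    c_std = content_feat.std(dim=0, keepdim=True) + eps
    s_mu = style_feat.mean(dim=0, keepdim=True)
    s_std = style_feat.std(dim=0, keepdim=True) + eps
    return s_std * (content_feat - c_mu) / c_std + s_mu

# Hypothetical inference step (names are illustrative, not the paper's API):
# xyz = sample_ray_positions()                 # (N, 3) points along camera rays
# feats = content_encoder(xyz)                 # (N, F) hash-encoded features
# feats = adain(feats, style_reference_feats)  # match style statistics
# rgb_sigma = radiance_mlp(feats)              # volume-render as usual
```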
Related papers
- ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis
We propose ViewCrafter, a novel method for synthesizing high-fidelity novel views of generic scenes from single or sparse images.
Our method takes advantage of the powerful generation capabilities of a video diffusion model and the coarse 3D clues offered by a point-based representation to generate high-quality video frames.
arXiv Detail & Related papers (2024-09-03T16:53:19Z)
- Stylizing Sparse-View 3D Scenes with Hierarchical Neural Representation
A surge of 3D style transfer methods has been proposed, leveraging the scene reconstruction power of a pre-trained neural radiance field (NeRF).
In this paper, we consider the stylization of sparse-view scenes in terms of disentangling content semantics and style textures.
A novel hierarchical encoding-based neural representation is designed to generate high-quality stylized scenes directly from implicit scene representations.
arXiv Detail & Related papers (2024-04-08T07:01:42Z)
- Towards 4D Human Video Stylization
We present a first step towards 4D (3D and time) human video stylization, which addresses style transfer, novel view synthesis and human animation.
We leverage Neural Radiance Fields (NeRFs) to represent videos, conducting stylization in the rendered feature space.
Our framework uniquely extends its capabilities to accommodate novel poses and viewpoints, making it a versatile tool for creative human video stylization.
arXiv Detail & Related papers (2023-12-07T08:58:33Z)
- Locally Stylized Neural Radiance Fields
We propose a stylization framework for neural radiance fields (NeRF) based on local style transfer.
In particular, we use a hash-grid encoding to learn the embedding of the appearance and geometry components (a toy lookup sketch follows this entry).
We show that our method yields plausible stylization results with novel view synthesis.
arXiv Detail & Related papers (2023-09-19T15:08:10Z)
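Both the main paper and the entry above rely on a hash-grid position encoding in the spirit of Instant-NGP. A toy sketch of the lookup is below; it uses nearest-vertex indexing instead of trilinear interpolation, and the table sizes and level counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HashGridEncoder(nn.Module):
    """Toy multiresolution hash encoding (nearest-vertex variant)."""

    def __init__(self, n_levels: int = 4, table_size: int = 2 ** 14,
                 feat_dim: int = 2, base_res: int = 16):
        super().__init__()
        self.resolutions = [base_res * 2 ** i for i in range(n_levels)]
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim))
             for _ in self.resolutions]
        )
        # Large primes decorrelate the spatial hash across dimensions.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) positions in [0, 1]^3 -> (N, n_levels * feat_dim) features
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            idx = (xyz * res).long()                           # nearest grid vertex
            h = (idx * self.primes).sum(dim=-1) % table.shape[0]  # spatial hash
            feats.append(table[h])
        return torch.cat(feats, dim=-1)

# Usage: enc = HashGridEncoder(); feats = enc(torch.rand(1024, 3))  # (1024, 8)
```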
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering (a compositing sketch follows this entry).
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
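The volume rendering mentioned in the preceding entry is the standard NeRF quadrature: densities are converted to per-segment opacities and colors are alpha-composited along each ray. A minimal sketch, assuming per-sample colors, densities, and inter-sample distances are already available:

```python
import torch

def composite(rgb: torch.Tensor, sigma: torch.Tensor,
              deltas: torch.Tensor) -> torch.Tensor:
    """Alpha-composite samples along rays (NeRF-style quadrature).

    rgb:    (R, S, 3) per-sample colors for R rays with S samples each.
    sigma:  (R, S) per-sample volume densities.
    deltas: (R, S) distances between adjacent samples.
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)            # segment opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)  # transmittance
    trans = torch.cat([torch.ones_like(trans[:, :1]),   # shift: T_1 = 1
                       trans[:, :-1]], dim=-1)
    weights = alpha * trans                             # per-sample weight
    return (weights.unsqueeze(-1) * rgb).sum(dim=-2)    # (R, 3) pixel colors
```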
- Unified Implicit Neural Stylization
This work explores an intriguing new direction: training a stylized implicit representation.
We conduct a pilot study on a variety of implicit functions, including 2D coordinate-based representation, neural radiance field, and signed distance function.
Our solution is a Unified Implicit Neural Stylization framework, dubbed INS.
arXiv Detail & Related papers (2022-04-05T02:37:39Z)
- Learning to Stylize Novel Views
We tackle a 3D scene stylization problem - generating stylized images of a scene from arbitrary novel views.
We propose a point cloud-based method for consistent 3D scene stylization.
arXiv Detail & Related papers (2021-05-27T23:58:18Z)
- Stylizing 3D Scene via Implicit Representation and HyperNetwork
A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches.
Inspired by the high-quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style.
Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork to transfer the style information into the scene representation (see the sketch after this entry).
arXiv Detail & Related papers (2021-05-27T09:11:30Z)
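The hypernetwork in the entry above predicts (part of) the scene network's weights from a style input, so one trained NeRF can be re-colored per style. A minimal sketch, assuming the hypernetwork emits a single linear color head from a precomputed style embedding; all sizes and structure are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleHyperNetwork(nn.Module):
    """Maps a style embedding to the weights of a small color head."""

    def __init__(self, style_dim: int = 64, feat_dim: int = 128, out_dim: int = 3):
        super().__init__()
        self.feat_dim, self.out_dim = feat_dim, out_dim
        n_params = feat_dim * out_dim + out_dim   # weight matrix + bias
        self.net = nn.Sequential(
            nn.Linear(style_dim, 256), nn.ReLU(), nn.Linear(256, n_params)
        )

    def forward(self, style_emb: torch.Tensor, scene_feat: torch.Tensor) -> torch.Tensor:
        # style_emb: (style_dim,) one style; scene_feat: (N, feat_dim) NeRF features.
        p = self.net(style_emb)
        w = p[: self.feat_dim * self.out_dim].view(self.out_dim, self.feat_dim)
        b = p[self.feat_dim * self.out_dim:]
        return F.linear(scene_feat, w, b)         # (N, 3) style-conditioned RGB
```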
- IBRNet: Learning Multi-View Image-Based Rendering
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
arXiv Detail & Related papers (2021-02-25T18:56:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.