Stylizing 3D Scene via Implicit Representation and HyperNetwork
- URL: http://arxiv.org/abs/2105.13016v1
- Date: Thu, 27 May 2021 09:11:30 GMT
- Title: Stylizing 3D Scene via Implicit Representation and HyperNetwork
- Authors: Pei-Ze Chiang, Meng-Shiun Tsai, Hung-Yu Tseng, Wei-sheng Lai, Wei-Chen Chiu
- Abstract summary: A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches.
Inspired by the high quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style.
Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork to transfer the style information into the scene representation.
- Score: 34.22448260525455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we aim to address the 3D scene stylization problem - generating
stylized images of the scene at arbitrary novel view angles. A straightforward
solution is to combine existing novel view synthesis and image/video style
transfer approaches, which often leads to blurry results or inconsistent
appearance. Inspired by the high quality results of the neural radiance fields
(NeRF) method, we propose a joint framework to directly render novel views with
the desired style. Our framework consists of two components: an implicit
representation of the 3D scene with the neural radiance field model, and a
hypernetwork to transfer the style information into the scene representation.
In particular, our implicit representation model disentangles the scene into
the geometry and appearance branches, and the hypernetwork learns to predict
the parameters of the appearance branch from the reference style image. To
alleviate the training difficulties and memory burden, we propose a two-stage
training procedure and a patch sub-sampling approach to optimize the style and
content losses with the neural radiance field model. After optimization, our
model is able to render consistent novel views at arbitrary view angles with
arbitrary style. Both quantitative evaluation and a human subject study
demonstrate that the proposed method generates faithful stylization results
with consistent appearance across different views.
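The abstract sketches the core wiring: a geometry branch produces density and a feature vector for each 3D sample, and a hypernetwork maps a style embedding (e.g. pooled features of the reference style image) to the weights of a small appearance branch that outputs view-dependent color. The PyTorch sketch below illustrates that idea only; the layer sizes, the 63-/27-dimensional positional encodings, and the names `GeometryBranch`, `StyleHyperNetwork`, and `appearance_branch` are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometryBranch(nn.Module):
    """Maps a positionally encoded 3D point to a density and a geometry feature.
    In a two-stage setup this branch is scene-specific and shared across styles."""
    def __init__(self, in_dim=63, hidden=256, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + feat_dim),   # density + geometry feature
        )

    def forward(self, x):
        out = self.mlp(x)
        sigma, feat = out[..., :1], out[..., 1:]
        return torch.relu(sigma), feat

class StyleHyperNetwork(nn.Module):
    """Predicts the weights of a small appearance MLP from a single
    (unbatched) style embedding of the reference style image."""
    def __init__(self, style_dim=512, feat_dim=128, dir_dim=27, hidden=64):
        super().__init__()
        self.shapes = [                 # (out, in) of each appearance layer
            (hidden, feat_dim + dir_dim),
            (hidden, hidden),
            (3, hidden),                # final RGB layer
        ]
        n_params = sum(o * i + o for o, i in self.shapes)
        self.net = nn.Sequential(
            nn.Linear(style_dim, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, style_emb):
        flat = self.net(style_emb)      # flat vector of all appearance weights
        params, idx = [], 0
        for o, i in self.shapes:
            w = flat[idx:idx + o * i].view(o, i); idx += o * i
            b = flat[idx:idx + o]; idx += o
            params.append((w, b))
        return params                   # list of (weight, bias) per layer

def appearance_branch(feat, view_dir, params):
    """Runs the appearance MLP using the hypernetwork-predicted weights."""
    h = torch.cat([feat, view_dir], dim=-1)
    for k, (w, b) in enumerate(params):
        h = F.linear(h, w, b)
        if k < len(params) - 1:
            h = torch.relu(h)
    return torch.sigmoid(h)  # per-sample RGB, later alpha-composited along rays
```

Following the abstract's two-stage procedure, one would first fit the geometry (and a base appearance) to the multi-view captures, then freeze it and train only the hypernetwork, evaluating the style and content losses on sub-sampled image patches of rendered rays to keep memory manageable.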
Related papers
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z)
- Instant Photorealistic Neural Radiance Fields Stylization [1.039189397779466]
We present Instant Neural Radiance Fields Stylization, a novel approach for multi-view image stylization for the 3D scene.
Our approach models a neural radiance field based on neural graphics primitives, which use a hash table-based position encoder for position embedding.
Our method can generate stylized novel views with a consistent appearance at various view angles in less than 10 minutes on modern GPU hardware.
arXiv Detail & Related papers (2023-03-29T17:53:20Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- SNeRF: Stylized Neural Implicit Representations for 3D Scenes [9.151746397358522]
This paper investigates 3D scene stylization that provides a strong inductive bias for consistent novel view synthesis.
We adopt the emerging neural radiance fields (NeRF) as our choice of 3D scene representation.
We introduce a new training method to address this problem by alternating the NeRF and stylization optimization steps.
arXiv Detail & Related papers (2022-07-05T23:45:02Z)
- Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
arXiv Detail & Related papers (2022-04-22T17:57:00Z)
- Unified Implicit Neural Stylization [80.59831861186227]
This work explores a new intriguing direction: training a stylized implicit representation.
We conduct a pilot study on a variety of implicit functions, including 2D coordinate-based representation, neural radiance field, and signed distance function.
Our solution is a Unified Implicit Neural Stylization framework, dubbed INS.
arXiv Detail & Related papers (2022-04-05T02:37:39Z)
- Learning to Stylize Novel Views [82.24095446809946]
We tackle a 3D scene stylization problem - generating stylized images of a scene from arbitrary novel views.
We propose a point cloud-based method for consistent 3D scene stylization.
arXiv Detail & Related papers (2021-05-27T23:58:18Z)
- IBRNet: Learning Multi-View Image-Based Rendering [67.15887251196894]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
arXiv Detail & Related papers (2021-02-25T18:56:21Z)