ARF: Artistic Radiance Fields
- URL: http://arxiv.org/abs/2206.06360v1
- Date: Mon, 13 Jun 2022 17:55:31 GMT
- Title: ARF: Artistic Radiance Fields
- Authors: Kai Zhang and Nick Kolkin and Sai Bi and Fujun Luan and Zexiang Xu and
Eli Shechtman and Noah Snavely
- Abstract summary: We present a method for transferring the artistic features of an arbitrary style image to a 3D scene.
Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors.
We propose to stylize the more robust radiance field representation.
- Score: 63.79314417413371
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method for transferring the artistic features of an arbitrary
style image to a 3D scene. Previous methods that perform 3D stylization on
point clouds or meshes are sensitive to geometric reconstruction errors for
complex real-world scenes. Instead, we propose to stylize the more robust
radiance field representation. We find that the commonly used Gram matrix-based
loss tends to produce blurry results without faithful brushstrokes, and
introduce a nearest neighbor-based loss that is highly effective at capturing
style details while maintaining multi-view consistency. We also propose a novel
deferred back-propagation method to enable optimization of memory-intensive
radiance fields using style losses defined on full-resolution rendered images.
Our extensive evaluation demonstrates that our method outperforms baselines by
generating artistic appearance that more closely resembles the style image.
Please check our project page for video results and open-source
implementations: https://www.cs.cornell.edu/projects/arf/ .
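To make the nearest neighbor-based style loss concrete, here is a minimal PyTorch sketch, assuming VGG features have already been extracted from the rendered and style images and flattened to one row per spatial location; the function name and tensor shapes are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def nn_feature_match_loss(render_feats, style_feats):
    """Nearest-neighbor feature matching style loss (illustrative sketch).

    render_feats: (N, C) features sampled from the rendered image.
    style_feats:  (M, C) features sampled from the style image.
    For every rendered feature, find its nearest style feature by cosine
    distance and minimize that distance, instead of matching global
    Gram-matrix statistics.
    """
    # Normalize so the pairwise dot product becomes cosine similarity.
    r = F.normalize(render_feats, dim=-1)       # (N, C)
    s = F.normalize(style_feats, dim=-1)        # (M, C)
    cos_sim = r @ s.t()                         # (N, M)
    # Cosine distance to the closest style feature for each rendered feature.
    nn_dist = 1.0 - cos_sim.max(dim=1).values   # (N,)
    return nn_dist.mean()
```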
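The deferred back-propagation idea can likewise be sketched as a two-stage training step: compute the style loss and its per-pixel gradient on a full-resolution render produced without an autograd graph, then re-render the image patch by patch with gradients enabled and inject the cached pixel gradients. The sketch below assumes hypothetical `render_full`, `render_patch`, `style_loss`, and `patches` callables/iterables supplied by the surrounding training loop; it illustrates the memory-saving pattern under those assumptions rather than the authors' exact code.

```python
import torch

def deferred_backprop_step(render_full, render_patch, style_loss, optimizer, patches):
    """Deferred back-propagation training step (illustrative sketch).

    Stage 1: render the full-resolution image without an autograd graph and
             cache the gradient of the style loss w.r.t. its pixels.
    Stage 2: re-render the image patch by patch with autograd enabled and
             back-propagate the cached pixel gradients into the radiance
             field, so a full-image loss never needs the full-image graph.
    """
    # Stage 1: full-image forward pass, no graph through the radiance field.
    with torch.no_grad():
        image = render_full()                        # (H, W, 3)
    image = image.detach().requires_grad_(True)
    loss = style_loss(image)
    # Per-pixel gradient of the style loss, cached for stage 2.
    pixel_grad = torch.autograd.grad(loss, image)[0]

    # Stage 2: re-render patches with autograd and inject cached gradients.
    optimizer.zero_grad()
    for row_slice, col_slice in patches:             # patch index slices
        patch = render_patch(row_slice, col_slice)   # re-rendered with grad
        patch.backward(gradient=pixel_grad[row_slice, col_slice])
    optimizer.step()
    return loss.item()
```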
Related papers
- G-Style: Stylized Gaussian Splatting [5.363168481735954]
We introduce G-Style, a novel algorithm designed to transfer the style of an image onto a 3D scene represented using Gaussian Splatting.
G-Style generates high-quality stylizations within just a few minutes, outperforming existing methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2024-08-28T10:43:42Z)
- MaRINeR: Enhancing Novel Views by Matching Rendered Images with Nearby References [49.71130133080821]
MaRINeR is a refinement method that leverages information of a nearby mapping image to improve the rendering of a target viewpoint.
We show improved renderings in quantitative metrics and qualitative examples from both explicit and implicit scene representations.
arXiv Detail & Related papers (2024-07-18T17:50:03Z)
- Stylizing Sparse-View 3D Scenes with Hierarchical Neural Representation [0.0]
A surge of 3D style transfer methods has been proposed that leverage the scene reconstruction power of a pre-trained neural radiance field (NeRF).
In this paper, we consider the stylization of sparse-view scenes in terms of disentangling content semantics and style textures.
A novel hierarchical encoding-based neural representation is designed to generate high-quality stylized scenes directly from implicit scene representations.
arXiv Detail & Related papers (2024-04-08T07:01:42Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- ERF: Explicit Radiance Field Reconstruction From Scratch [12.254150867994163]
We propose a novel explicit dense 3D reconstruction approach that processes a set of images of a scene with sensor poses and calibrations and estimates a photo-real digital model.
One of the key innovations is that the underlying volumetric representation is completely explicit.
We show that our method is general and practical. It does not require a highly controlled lab setup for capturing, but allows for reconstructing scenes with a vast variety of objects.
arXiv Detail & Related papers (2022-02-28T19:37:12Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Stylizing 3D Scene via Implicit Representation and HyperNetwork [34.22448260525455]
A straightforward solution is to combine existing novel view synthesis and image/video style transfer approaches.
Inspired by the high quality results of the neural radiance fields (NeRF) method, we propose a joint framework to directly render novel views with the desired style.
Our framework consists of two components: an implicit representation of the 3D scene with the neural radiance field model, and a hypernetwork to transfer the style information into the scene representation.
arXiv Detail & Related papers (2021-05-27T09:11:30Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic results.
Our method achieves state-of-the-art performance on multiple face reconstruction tasks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)