NeRF Analogies: Example-Based Visual Attribute Transfer for NeRFs
- URL: http://arxiv.org/abs/2402.08622v1
- Date: Tue, 13 Feb 2024 17:47:42 GMT
- Title: NeRF Analogies: Example-Based Visual Attribute Transfer for NeRFs
- Authors: Michael Fischer, Zhengqin Li, Thu Nguyen-Phuoc, Aljaz Bozic, Zhao
Dong, Carl Marshall, Tobias Ritschel
- Abstract summary: A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry and appearance of a scene.
We generalize classic image analogies from 2D images to NeRFs.
Our method allows exploring the mix-and-match product space of 3D geometry and appearance.
- Score: 19.17715832282524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Neural Radiance Field (NeRF) encodes the specific relation of 3D geometry
and appearance of a scene. We ask whether the appearance of a source NeRF can be
transferred onto a target 3D geometry in a semantically
meaningful way, such that the resulting new NeRF retains the target geometry
but has an appearance that is an analogy to the source NeRF. To this end, we
generalize classic image analogies from 2D images to NeRFs. We leverage
correspondence transfer along semantic affinity that is driven by semantic
features from large, pre-trained 2D image models to achieve multi-view
consistent appearance transfer. Our method allows exploring the mix-and-match
product space of 3D geometry and appearance. We show that our method
outperforms traditional stylization-based methods and that a large majority of
users prefer our method over several typical baselines.
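As a rough illustration of the transfer step described in the abstract, the sketch below pairs each target pixel with its most semantically similar source pixel using features from a pre-trained 2D model and copies the source colour. The feature extractor, tensor shapes, and function names are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch: appearance transfer via semantic-feature nearest neighbours.
# Shapes and names are illustrative; the paper's full pipeline (multi-view
# aggregation, re-training a NeRF on the transferred colours) is not reproduced.
import torch
import torch.nn.functional as F

def transfer_appearance(src_feats, src_colors, tgt_feats, chunk=4096):
    """For every target pixel, copy the colour of the most semantically
    similar source pixel (cosine similarity in feature space).

    src_feats:  (N_src, D) pre-trained 2D features of source pixels
    src_colors: (N_src, 3) RGB colours of the same source pixels
    tgt_feats:  (N_tgt, D) features of target-geometry pixels
    returns:    (N_tgt, 3) transferred colours
    """
    src = F.normalize(src_feats, dim=-1)
    tgt = F.normalize(tgt_feats, dim=-1)
    out = torch.empty(tgt.shape[0], 3, device=tgt.device)
    for i in range(0, tgt.shape[0], chunk):          # chunk to bound memory
        sim = tgt[i:i + chunk] @ src.T               # (chunk, N_src) semantic affinity
        nn_idx = sim.argmax(dim=-1)                  # nearest source pixel per target pixel
        out[i:i + chunk] = src_colors[nn_idx]
    return out
```

In the full method, colours transferred across many views would then supervise a new NeRF over the target geometry to obtain multi-view consistency; that training step is omitted here.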
Related papers
- Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images [54.56070204172398]
We propose a simple yet effective pipeline for stylizing a 3D scene.
We perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model.
We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
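A minimal sketch of the refinement step this summary describes, under the assumption that the NeRF is a differentiable torch module and that `stylize` wraps a style-aligned image-to-image diffusion model; both are placeholders rather than the paper's API.

```python
# Hypothetical refinement loop for the described pipeline; `nerf.render`,
# `stylize` (a style-aligned image-to-image diffusion model) and `poses`
# are placeholders standing in for the paper's actual components.
import torch

def refine_on_stylized_views(nerf, stylize, poses, steps=2000, lr=5e-4):
    # Pre-compute stylized targets once so the style stays fixed during refinement.
    with torch.no_grad():
        targets = [stylize(nerf.render(p)) for p in poses]
    opt = torch.optim.Adam(nerf.parameters(), lr=lr)   # assumes nerf is an nn.Module
    for step in range(steps):
        i = step % len(poses)
        pred = nerf.render(poses[i])                   # re-render the same view
        loss = torch.mean((pred - targets[i]) ** 2)    # match the stylized image
        opt.zero_grad()
        loss.backward()
        opt.step()
    return nerf
```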
arXiv Detail & Related papers (2024-06-19T09:36:18Z)
- NeRFDeformer: NeRF Transformation from a Single View via 3D Scene Flows [60.291277312569285]
We present a method for automatically modifying a NeRF representation based on a single observation.
Our method defines the transformation as a 3D flow, specifically as a weighted linear blending of rigid transformations.
We also introduce a new dataset for exploring the problem of modifying a NeRF scene through a single observation.
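The "weighted linear blending of rigid transformations" can be illustrated as follows; how the weights and transforms are estimated from a single observation is the paper's contribution and is not shown, and all shapes here are assumptions.

```python
# Hypothetical sketch of a weighted linear blend of rigid transformations
# applied to 3D points (linear-blend-skinning style).
import numpy as np

def blend_rigid_transforms(points, rotations, translations, weights):
    """points:       (N, 3) 3D points to deform
    rotations:    (K, 3, 3) rotation parts of K rigid transforms
    translations: (K, 3)    translation parts of the same transforms
    weights:      (N, K)    per-point blend weights (rows sum to 1)
    returns:      (N, 3)    deformed points
    """
    # Apply every rigid transform to every point: (K, N, 3)
    transformed = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
    # Blend the K candidate positions per point with the given weights.
    return np.einsum('nk,kni->ni', weights, transformed)
```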
arXiv Detail & Related papers (2024-06-15T07:58:08Z)
- 3D Face Style Transfer with a Hybrid Solution of NeRF and Mesh Rasterization [4.668492532161309]
We propose to use a neural radiance field (NeRF) to represent a 3D human face and combine it with 2D style transfer to stylize the 3D face.
We find that directly training a NeRF on stylized images from 2D style transfer introduces 3D inconsistency and causes blurriness.
We propose a hybrid framework of NeRF and mesh rasterization that combines the high-fidelity geometry reconstruction of NeRF with the fast rendering speed of meshes.
arXiv Detail & Related papers (2023-11-22T05:24:35Z)
- Wonder3D: Single Image to 3D using Cross-Domain Diffusion [105.16622018766236]
Wonder3D is a novel method for efficiently generating high-fidelity textured meshes from single-view images.
To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model.
arXiv Detail & Related papers (2023-10-23T15:02:23Z)
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- FeatureNeRF: Learning Generalizable NeRFs by Distilling Foundation Models [21.523836478458524]
Recent works on generalizable NeRFs have shown promising results on novel view synthesis from single or few images.
We propose a novel framework named FeatureNeRF to learn generalizable NeRFs by distilling pre-trained vision models.
Our experiments demonstrate the effectiveness of FeatureNeRF as a generalizable 3D semantic feature extractor.
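A hedged sketch of the distillation idea described above: a feature branch of the NeRF is regressed toward features from a frozen, pre-trained 2D model. `render_features` and `teacher` are placeholder callables, not FeatureNeRF's actual interface.

```python
# Hypothetical distillation objective: a feature branch of the NeRF is trained
# to reproduce per-pixel features from a frozen, pre-trained 2D vision model.
import torch
import torch.nn.functional as F

def distillation_loss(render_features, teacher, image, rays):
    """render_features(rays) -> (N, D) features volume-rendered along the rays
    teacher(image)        -> (N, D) features of the same pixels from a frozen 2D model
    """
    student = render_features(rays)
    with torch.no_grad():                 # the foundation model stays frozen
        target = teacher(image)
    return F.mse_loss(student, target)    # regress rendered features to 2D features
```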
arXiv Detail & Related papers (2023-03-22T17:57:01Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
- StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning [50.65015652968839]
3D scene stylization aims at generating stylized images of the scene from arbitrary novel views.
Thanks to recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way.
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF.
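One possible reading of the mutual learning scheme, sketched with placeholder modules: the 2D stylization network and the stylized NeRF are pulled toward each other so that 2D styles become multi-view consistent. This is an assumption-laden simplification, not the paper's actual losses.

```python
# Hypothetical 2D-3D mutual learning step; `nerf`, `stylizer_2d`, `pose`,
# `image`, and `opt` are placeholders, and `opt` is assumed to optimise the
# parameters of both modules.
import torch

def mutual_learning_step(nerf, stylizer_2d, pose, image, opt):
    stylized_2d = stylizer_2d(image)        # 2D network stylizes the input view
    stylized_3d = nerf.render(pose)         # NeRF renders the same view, stylized
    # Each branch is pulled toward the other: the NeRF inherits the 2D style,
    # the 2D network inherits the NeRF's multi-view consistency.
    loss = torch.mean((stylized_3d - stylized_2d) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```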
arXiv Detail & Related papers (2022-05-24T16:29:50Z)
- NeuralReshaper: Single-image Human-body Retouching with Deep Neural Networks [50.40798258968408]
We present NeuralReshaper, a novel method for semantic reshaping of human bodies in single images using deep generative networks.
Our approach follows a fit-then-reshape pipeline, which first fits a parametric 3D human model to a source human image.
To deal with the lack of paired training data, we introduce a novel self-supervised strategy to train our network.
arXiv Detail & Related papers (2022-03-20T09:02:13Z)
- Template NeRF: Towards Modeling Dense Shape Correspondences from Category-Specific Object Images [4.662583832063716]
We present neural radiance fields (NeRF) with templates, dubbed template-NeRF, for modeling appearance and geometry.
We generate dense shape correspondences simultaneously among objects of the same category from only multi-view posed images.
The learned dense correspondences can be readily used for various image-based tasks such as keypoint detection, part segmentation, and texture transfer.
arXiv Detail & Related papers (2021-11-08T02:16:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.