NeRFEditor: Differentiable Style Decomposition for Full 3D Scene Editing
- URL: http://arxiv.org/abs/2212.03848v2
- Date: Thu, 8 Dec 2022 06:02:33 GMT
- Title: NeRFEditor: Differentiable Style Decomposition for Full 3D Scene Editing
- Authors: Chunyi Sun, Yanbin Liu, Junlin Han, Stephen Gould
- Abstract summary: We present NeRFEditor, an efficient learning framework for 3D scene editing.
NeRFEditor takes a video captured over 360° as input and outputs a high-quality, identity-preserving stylized 3D scene.
- Score: 37.06344045938838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present NeRFEditor, an efficient learning framework for 3D scene editing,
which takes a video captured over 360° as input and outputs a
high-quality, identity-preserving stylized 3D scene. Our method supports
diverse types of editing such as guided by reference images, text prompts, and
user interactions. We achieve this by encouraging a pre-trained StyleGAN model
and a NeRF model to learn from each other mutually. Specifically, we use a NeRF
model to generate numerous image-angle pairs to train an adjustor, which can
adjust the StyleGAN latent code to generate high-fidelity stylized images for
any given angle. To extrapolate editing to GAN out-of-domain views, we devise
another module that is trained in a self-supervised manner. This module maps
novel-view images into the hidden space of StyleGAN, allowing StyleGAN to
generate stylized images for these novel views. Together, the two modules
produce guided images covering 360° of views, which are used to fine-tune the
NeRF and achieve the stylization effect; a stable fine-tuning strategy is
proposed for this purpose.
Experiments show that NeRFEditor outperforms prior work on benchmark and
real-world scenes with better editability, fidelity, and identity preservation.
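As a concrete illustration of the flow described in the abstract, below is a minimal sketch of how the pieces might fit together. It is not the authors' implementation: the module architectures, latent size, and loss terms are assumptions, and the pre-trained NeRF and StyleGAN are replaced with tiny stand-in networks so the sketch runs end to end.

```python
# Hedged sketch of the NeRFEditor training flow (illustrative only).
# The NeRF and StyleGAN below are tiny stand-ins, not the real pre-trained models.
import torch
import torch.nn as nn

W_DIM, IMG = 512, 32  # assumed StyleGAN latent size and a small render resolution

class NeRFStandIn(nn.Module):
    """Stand-in for the pre-trained NeRF: maps a camera angle to an RGB render."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 256), nn.ReLU(), nn.Linear(256, 3 * IMG * IMG))
    def forward(self, angle):                        # angle: (B, 2) = (azimuth, elevation)
        return self.net(angle).view(-1, 3, IMG, IMG)

class StyleGANStandIn(nn.Module):
    """Stand-in for the pre-trained StyleGAN generator: w-latent -> image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(W_DIM, 256), nn.ReLU(), nn.Linear(256, 3 * IMG * IMG))
    def forward(self, w):
        return self.net(w).view(-1, 3, IMG, IMG)

class Adjustor(nn.Module):
    """Adjusts the StyleGAN latent so the generated image matches a target angle."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(W_DIM + 2, 512), nn.ReLU(), nn.Linear(512, W_DIM))
    def forward(self, w, angle):
        return w + self.net(torch.cat([w, angle], dim=-1))  # residual latent offset

class HiddenSpaceEncoder(nn.Module):
    """Self-supervised module: maps an out-of-domain novel-view image back into
    the StyleGAN latent space so it can still be stylized."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(3 * IMG * IMG, 512), nn.ReLU(), nn.Linear(512, W_DIM))
    def forward(self, img):
        return self.net(img)

nerf, gan = NeRFStandIn(), StyleGANStandIn()
adjustor, encoder = Adjustor(), HiddenSpaceEncoder()
opt = torch.optim.Adam(list(adjustor.parameters()) + list(encoder.parameters()), lr=1e-4)

for step in range(100):                              # toy loop over image-angle pairs
    angle = torch.rand(8, 2)                         # sampled viewing angles
    target = nerf(angle).detach()                    # NeRF renders used as fixed supervision
    w = torch.randn(8, W_DIM)                        # base latent for the scene identity
    recon_adj = gan(adjustor(w, angle))              # in-domain path: angle-adjusted latent
    recon_enc = gan(encoder(target))                 # out-of-domain path: image -> latent
    loss = nn.functional.mse_loss(recon_adj, target) + nn.functional.mse_loss(recon_enc, target)
    opt.zero_grad(); loss.backward(); opt.step()
```

In the full pipeline described above, the trained adjustor and encoder would then produce stylized guidance images over the whole 360° range, and those images would be used to fine-tune the NeRF with the proposed stable fine-tuning strategy (omitted from this sketch).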
Related papers
- Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images [54.56070204172398]
We propose a simple yet effective pipeline for stylizing a 3D scene.
We perform 3D style transfer by refining the source NeRF model using stylized images generated by a style-aligned image-to-image diffusion model.
We demonstrate that our method can transfer diverse artistic styles to real-world 3D scenes with competitive quality.
arXiv Detail & Related papers (2024-06-19T09:36:18Z)
- The Devil is in the Details: StyleFeatureEditor for Detail-Rich StyleGAN Inversion and High Quality Image Editing [3.58736715327935]
We introduce StyleFeatureEditor, a novel method that enables editing in both w-latents and F-latents.
We also present a new training pipeline specifically designed to train our model to accurately edit F-latents.
Our method is compared with state-of-the-art encoding approaches, demonstrating that our model excels in terms of reconstruction quality.
arXiv Detail & Related papers (2024-06-15T11:28:32Z)
- ICE-G: Image Conditional Editing of 3D Gaussian Splats [45.112689255145625]
We introduce a novel approach to quickly edit a 3D model from a single reference view.
Our technique first segments the edit image, and then matches semantically corresponding regions across chosen segmented dataset views.
A color or texture change from a particular region of the edit image can then be applied to other views automatically in a semantically sensible manner.
arXiv Detail & Related papers (2024-06-12T17:59:52Z)
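To make the region-matching idea in the ICE-G summary above concrete, here is a small hedged sketch. It is not the paper's implementation: the segmentation masks and per-region semantic features are random stand-ins (a real pipeline would obtain them from a segmentation model and a semantic feature extractor), and only a mean-colour transfer between matched regions is shown.

```python
# Hedged sketch of image-conditional region matching and colour transfer
# (illustrative only; masks and features are random stand-ins).
import numpy as np

def match_and_transfer(edit_img, edit_masks, edit_feats, view_img, view_masks, view_feats):
    """Copy each edited region's mean colour onto the most similar region of another view."""
    out = view_img.copy()
    for mask_e, feat_e in zip(edit_masks, edit_feats):
        # Cosine similarity between the edited region and every region in the other view.
        sims = [feat_e @ f / (np.linalg.norm(feat_e) * np.linalg.norm(f) + 1e-8) for f in view_feats]
        best = int(np.argmax(sims))                   # semantically closest region
        mean_colour = edit_img[mask_e].mean(axis=0)   # colour taken from the edited region
        out[view_masks[best]] = mean_colour           # apply it to the matched region
    return out

# Toy data: two 32x32 views, three random regions each, 8-D stand-in features.
H = W = 32
edit_img, view_img = np.random.rand(H, W, 3), np.random.rand(H, W, 3)
edit_masks = [np.random.rand(H, W) > 0.7 for _ in range(3)]
view_masks = [np.random.rand(H, W) > 0.7 for _ in range(3)]
edit_feats = [np.random.rand(8) for _ in range(3)]
view_feats = [np.random.rand(8) for _ in range(3)]
edited_view = match_and_transfer(edit_img, edit_masks, edit_feats, view_img, view_masks, view_feats)
```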
- NeRF-Insert: 3D Local Editing with Multimodal Control Signals [97.91172669905578]
NeRF-Insert is a NeRF editing framework that allows users to make high-quality local edits with a flexible level of control.
We cast scene editing as an in-painting problem, which encourages the global structure of the scene to be preserved.
Our results show better visual quality while maintaining stronger consistency with the original NeRF.
arXiv Detail & Related papers (2024-04-30T02:04:49Z)
- GenN2N: Generative NeRF2NeRF Translation [53.20986183316661]
GenN2N is a unified NeRF-to-NeRF translation framework for various NeRF translation tasks.
It employs a plug-and-play image-to-image translator to perform editing in the 2D domain and lifts the 2D edits into the 3D NeRF space.
arXiv Detail & Related papers (2024-04-03T14:56:06Z)
- ViCA-NeRF: View-Consistency-Aware 3D Editing of Neural Radiance Fields [45.020585071312475]
ViCA-NeRF is the first view-consistency-aware method for 3D editing with text instructions.
We exploit two sources of regularization that explicitly propagate the editing information across different views.
arXiv Detail & Related papers (2024-02-01T18:59:09Z)
- Editing 3D Scenes via Text Prompts without Retraining [80.57814031701744]
DN2N is a text-driven editing method that allows for the direct acquisition of a NeRF model with universal editing capabilities.
Our method employs off-the-shelf text-based editing models of 2D images to modify the 3D scene images.
Our method achieves multiple editing types, including but not limited to appearance editing, weather transition, material changing, and style transfer.
arXiv Detail & Related papers (2023-09-10T02:31:50Z)
- Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields [14.803266838721864]
Seal-3D allows users to edit NeRF models in a free, pixel-level manner with a wide range of NeRF-like backbones and to preview the editing effects instantly.
A NeRF editing system is built to showcase various editing types.
arXiv Detail & Related papers (2023-07-27T18:08:19Z)
- StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning [50.65015652968839]
3D scene stylization aims at generating stylized images of the scene from arbitrary novel views.
Thanks to recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way.
We propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF.
arXiv Detail & Related papers (2022-05-24T16:29:50Z)