4D-Editor: Interactive Object-level Editing in Dynamic Neural Radiance Fields via Semantic Distillation
- URL: http://arxiv.org/abs/2310.16858v2
- Date: Mon, 6 Nov 2023 03:38:44 GMT
- Title: 4D-Editor: Interactive Object-level Editing in Dynamic Neural Radiance Fields via Semantic Distillation
- Authors: Dadong Jiang, Zhihui Ke, Xiaobo Zhou, Xidong Shi
- Abstract summary: We propose 4D-Editor, an interactive semantic-driven editing framework for editing dynamic NeRFs.
We propose an extension to the original dynamic NeRF by incorporating a hybrid semantic feature distillation to maintain spatial-temporal consistency after editing.
In addition, we develop Multi-view Reprojection Inpainting to fill holes caused by incomplete scene capture after editing.
- Score: 2.027159474140712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper targets interactive object-level editing (e.g., deletion,
recoloring, transformation, composition) in dynamic scenes. Recently, some
methods aimed at flexibly editing static scenes represented by neural
radiance fields (NeRFs) have shown impressive synthesis quality, while similar
capabilities in time-variant dynamic scenes remain limited. To solve this
problem, we propose 4D-Editor, an interactive semantic-driven editing
framework that allows editing multiple objects in a dynamic NeRF with user
strokes on a single frame. We propose an extension to the original dynamic NeRF
by incorporating a hybrid semantic feature distillation to maintain
spatial-temporal consistency after editing. In addition, we design Recursive
Selection Refinement that significantly boosts object segmentation accuracy
within a dynamic NeRF to aid the editing process. Moreover, we develop
Multi-view Reprojection Inpainting to fill holes caused by incomplete scene
capture after editing. Extensive experiments and editing examples on real-world
scenes demonstrate that 4D-Editor achieves photo-realistic editing of dynamic NeRFs.
Project page: https://patrickddj.github.io/4D-Editor
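The core mechanism in the abstract, distilling 2D semantic features into the dynamic radiance field so that strokes on a single frame select an object consistently across time and views, follows the general distilled-feature-field recipe: give the field an extra feature head, volume-render it with the same weights as color, and supervise the result with a frozen 2D feature extractor. Below is a minimal PyTorch sketch of that recipe; the names (DynamicNeRFWithFeatures, render_weights, distillation_step) and the DINO-style teacher are illustrative assumptions, not the paper's actual hybrid distillation scheme.

```python
# Hypothetical sketch of semantic feature distillation in a dynamic NeRF.
# The paper's hybrid scheme may choose and blend teacher features differently.
import torch
import torch.nn as nn

class DynamicNeRFWithFeatures(nn.Module):
    """Radiance field over (x, y, z, t) with an extra distilled-feature head."""
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),      # input (x, y, z, t); positional encoding omitted for brevity
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)        # volume density
        self.rgb_head = nn.Linear(hidden, 3)          # color
        self.feat_head = nn.Linear(hidden, feat_dim)  # semantic feature

    def forward(self, xyzt):                          # xyzt: (R, S, 4) ray samples
        h = self.trunk(xyzt)
        sigma = torch.relu(self.sigma_head(h))        # (R, S, 1)
        rgb = torch.sigmoid(self.rgb_head(h))         # (R, S, 3)
        feat = self.feat_head(h)                      # (R, S, F)
        return sigma, rgb, feat

def render_weights(sigma, deltas):
    """Turn per-sample densities into volume-rendering weights along each ray."""
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)                    # (R, S)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]                                                     # accumulated transmittance
    return alpha * trans

def composite(weights, values):
    """Composite per-sample values into one value per ray: sum_i w_i * v_i."""
    return (weights.unsqueeze(-1) * values).sum(dim=-2)

def distillation_step(model, xyzt, deltas, gt_rgb, teacher_feat, lam=0.1):
    """One training step: photometric loss plus 2D->3D feature distillation.

    gt_rgb:       (R, 3) ground-truth pixel colors for the sampled rays
    teacher_feat: (R, F) features from a frozen 2D encoder (e.g. DINO) at the
                  same pixels -- the distillation target for the feature head
    """
    sigma, rgb, feat = model(xyzt)
    w = render_weights(sigma, deltas)
    loss_rgb = ((composite(w, rgb) - gt_rgb) ** 2).mean()
    loss_feat = ((composite(w, feat) - teacher_feat) ** 2).mean()
    return loss_rgb + lam * loss_feat
```

Once such a field is trained, a user stroke on one frame yields a set of rendered features whose nearest matches in other frames and views recover the same object, which is what makes single-frame, multi-object selection possible. A second sketch, of the depth-based reprojection underlying Multi-view Reprojection Inpainting (and DATENeRF in the list below), follows the related-papers list.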
Related papers
- NeRF-Insert: 3D Local Editing with Multimodal Control Signals [97.91172669905578]
NeRF-Insert is a NeRF editing framework that allows users to make high-quality local edits with a flexible level of control.
We cast scene editing as an in-painting problem, which encourages the global structure of the scene to be preserved.
Our results show better visual quality and also maintain stronger consistency with the original NeRF.
arXiv Detail & Related papers (2024-04-30T02:04:49Z)
- DATENeRF: Depth-Aware Text-based Editing of NeRFs [49.08848777124736]
We introduce an inpainting approach that leverages the depth information of NeRF scenes to distribute 2D edits across different images.
Our results reveal that this methodology achieves more consistent, lifelike, and detailed edits than existing leading methods for text-driven NeRF scene editing.
arXiv Detail & Related papers (2024-04-06T06:48:16Z)
- SealD-NeRF: Interactive Pixel-Level Editing for Dynamic Scenes by Neural Radiance Fields [7.678022563694719]
SealD-NeRF is an extension of Seal-3D for pixel-level editing in dynamic settings.
It allows for consistent edits across sequences by mapping editing actions to a specific timeframe.
arXiv Detail & Related papers (2024-02-21T03:45:18Z)
- DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image Editing [66.43179841884098]
Large-scale Text-to-Image (T2I) diffusion models have revolutionized image generation over the last few years.
We propose DiffEditor to rectify two weaknesses in existing diffusion-based image editing.
Our method can efficiently achieve state-of-the-art performance on various fine-grained image editing tasks.
arXiv Detail & Related papers (2024-02-04T18:50:29Z)
- Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training [61.984277261016146]
We propose a CustomNeRF model that unifies a text description or a reference image as the editing prompt.
To tackle the first challenge, we propose a Local-Global Iterative Editing (LGIE) training scheme that alternates between foreground region editing and full-image editing.
For the second challenge, we also design a class-guided regularization that exploits class priors within the generation model to alleviate the inconsistency problem.
arXiv Detail & Related papers (2023-12-04T06:25:06Z)
- ProteusNeRF: Fast Lightweight NeRF Editing using 3D-Aware Image Context [26.07841568311428]
We present a very simple but effective neural network architecture that is fast and efficient while maintaining a low memory footprint.
Our representation allows straightforward object selection via semantic feature distillation at the training stage.
We propose a local 3D-aware image context to facilitate view-consistent image editing that can then be distilled into fine-tuned NeRFs.
arXiv Detail & Related papers (2023-10-15T21:54:45Z)
- Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation [49.852533321916844]
We introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field.
Our framework bridges the explicit shape manipulation and the geometric editing of implicit fields by utilizing multigrid barycentric coordinate encoding.
We show the robustness and adaptability of our system through diverse examples and experiments, including the editing of both synthetic objects and real captured data.
arXiv Detail & Related papers (2023-10-09T04:07:00Z)
- Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields [14.803266838721864]
Seal-3D allows users to edit NeRF models in a free, pixel-level manner with a wide range of NeRF-like backbones and to preview the editing effects instantly.
A NeRF editing system is built to showcase various editing types.
arXiv Detail & Related papers (2023-07-27T18:08:19Z)
- NeuralEditor: Editing Neural Radiance Fields via Manipulating Point Clouds [23.397546605447285]
This paper proposes NeuralEditor that enables neural radiance fields (NeRFs) to be editable for general shape editing tasks.
Our key insight is to exploit the explicit point cloud representation as the underlying structure to construct NeRFs.
NeuralEditor achieves state-of-the-art performance in both shape deformation and scene morphing tasks.
arXiv Detail & Related papers (2023-05-04T17:59:40Z)
- SINE: Semantic-driven Image-based NeRF Editing with Prior-guided Editing Field [37.8162035179377]
We present a novel semantic-driven NeRF editing approach, which enables users to edit a neural radiance field with a single image.
To achieve this goal, we propose a prior-guided editing field to encode fine-grained geometric and texture editing in 3D space.
Our method achieves photo-realistic 3D editing using only a single edited image, pushing the bound of semantic-driven editing in 3D real-world scenes.
arXiv Detail & Related papers (2023-03-23T13:58:11Z)
- EditGAN: High-Precision Semantic Image Editing [120.49401527771067]
EditGAN is a novel method for high-quality, high-precision semantic image editing.
We show that EditGAN can manipulate images with an unprecedented level of detail and freedom.
We can also easily combine multiple edits and perform plausible edits beyond EditGAN training data.
arXiv Detail & Related papers (2021-11-04T22:36:33Z)
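4D-Editor's Multi-view Reprojection Inpainting and DATENeRF's depth-aware edit distribution both rest on the same geometric primitive: use known depth to warp edited or inpainted pixels from one view into another. The NumPy sketch below shows only that primitive under stated assumptions (pinhole cameras, known intrinsics K and world-to-camera extrinsics R, t); the function names are hypothetical, and the actual pipelines add occlusion handling, blending, and model fine-tuning on top.

```python
# Hypothetical depth-based reprojection of per-pixel edits between views.
import numpy as np

def backproject(uv, depth, K, R, t):
    """Lift source-view pixels (N, 2) with depths (N,) into world coordinates."""
    ones = np.ones((uv.shape[0], 1))
    dirs_cam = (np.linalg.inv(K) @ np.hstack([uv, ones]).T).T   # (N, 3), z = 1
    pts_cam = dirs_cam * depth[:, None]                         # scale by depth
    return (R.T @ (pts_cam - t).T).T                            # camera -> world

def project(pts_world, K, R, t):
    """Project world points (N, 3) into a target view; returns pixels and depth."""
    pts_cam = (R @ pts_world.T).T + t                           # world -> camera
    z = pts_cam[:, 2]
    uv = (K @ (pts_cam / z[:, None]).T).T[:, :2]
    return uv, z

def reproject_edit(uv_src, depth_src, cam_src, cam_tgt):
    """Map edited source pixels to target-view pixel coordinates.

    cam_* = (K, R, t): intrinsics and world-to-camera rotation / translation.
    """
    pts_world = backproject(uv_src, depth_src, *cam_src)
    uv_tgt, z_tgt = project(pts_world, *cam_tgt)
    in_front = z_tgt > 0                  # crude visibility check; a real pipeline
    return uv_tgt, in_front               # would also compare against target depth
```

Pixels that land outside the target image or fail a visibility test against the target view's own depth would then be left to a conventional 2D inpainter before any further fine-tuning.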