BluNF: Blueprint Neural Field
- URL: http://arxiv.org/abs/2309.03933v1
- Date: Thu, 7 Sep 2023 17:53:25 GMT
- Title: BluNF: Blueprint Neural Field
- Authors: Robin Courant, Xi Wang, Marc Christie and Vicky Kalogeiton
- Abstract summary: We introduce a novel approach, called Blueprint Neural Field (BluNF), to address NeRF editing issues.
BluNF provides a robust and user-friendly 2D blueprint, enabling intuitive scene editing.
We demonstrate BluNF's editability through an intuitive click-and-change mechanism, enabling 3D manipulations.
- Score: 10.110885977110447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRFs) have revolutionized scene novel view
synthesis, offering visually realistic, precise, and robust implicit
reconstructions. While recent approaches enable NeRF editing, such as object
removal, 3D shape modification, or material property manipulation, the manual
annotation prior to such edits makes the process tedious. Additionally,
traditional 2D interaction tools lack an accurate sense of 3D space, preventing
precise manipulation and editing of scenes. In this paper, we introduce a novel
approach, called Blueprint Neural Field (BluNF), to address these editing
issues. BluNF provides a robust and user-friendly 2D blueprint, enabling
intuitive scene editing. By leveraging implicit neural representation, BluNF
constructs a blueprint of a scene using prior semantic and depth information.
The generated blueprint allows effortless editing and manipulation of NeRF
representations. We demonstrate BluNF's editability through an intuitive
click-and-change mechanism, enabling 3D manipulations, such as masking,
appearance modification, and object removal. Our approach significantly
contributes to visual content creation, paving the way for further research in
this area.
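To make the idea concrete, here is a minimal sketch of a blueprint-style neural field: a small MLP mapping 2D top-down coordinates to semantic logits and a depth prior, queried at a clicked location to drive a NeRF edit. The architecture, heads, and coordinate convention are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of a blueprint-style neural field (illustrative only; the
# architecture, losses, and names below are assumptions, not the paper's code).
import torch
import torch.nn as nn

class BlueprintField(nn.Module):
    """Implicit field mapping 2D top-down coordinates to semantics and a depth prior."""
    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sem_head = nn.Linear(hidden, num_classes)  # semantic logits
        self.depth_head = nn.Linear(hidden, 1)          # per-location depth prior

    def forward(self, xy: torch.Tensor):
        h = self.net(xy)
        return self.sem_head(h), self.depth_head(h)

field = BlueprintField(num_classes=10)

# "Click-and-change": query the blueprint at a clicked 2D location to recover the
# semantic class, which can then drive a NeRF edit (masking, recoloring, removal).
click = torch.tensor([[0.25, -0.40]])  # normalized blueprint coordinates
logits, depth = field(click)
clicked_class = logits.argmax(dim=-1).item()
print(f"clicked semantic class: {clicked_class}, depth prior: {depth.item():.3f}")
```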
Related papers
- Chat-Edit-3D: Interactive 3D Scene Editing via Text Prompts [76.73043724587679]
We propose a dialogue-based 3D scene editing approach, termed CE3D.
The Hash-Atlas module represents 3D scene views, transferring the editing of 3D scenes onto 2D atlas images.
Results demonstrate that CE3D effectively integrates multiple visual models to achieve diverse editing visual effects.
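A hedged sketch of the atlas idea: if each view stores a per-pixel mapping into a shared 2D atlas, a single 2D edit propagates to every view by lookup. The shapes and names below are assumptions, not the CE3D code.
```python
# Illustrative atlas-based editing: views index into one shared 2D texture,
# so editing the atlas once updates every rendered view.
import numpy as np

H, W, AH, AW = 64, 64, 256, 256
atlas = np.random.rand(AH, AW, 3).astype(np.float32)       # shared 2D atlas
uv_maps = [np.random.randint(0, [AH, AW], size=(H, W, 2))  # per-view pixel -> atlas coords
           for _ in range(3)]

def render_view(uv_map):
    return atlas[uv_map[..., 0], uv_map[..., 1]]           # gather RGB from the atlas

# A single 2D edit (here: recolor a patch) is reflected in all views.
atlas[100:140, 100:140] = np.array([1.0, 0.0, 0.0], dtype=np.float32)
views = [render_view(uv) for uv in uv_maps]
```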
arXiv Detail & Related papers (2024-07-09T13:24:42Z)
- SealD-NeRF: Interactive Pixel-Level Editing for Dynamic Scenes by Neural Radiance Fields [7.678022563694719]
SealD-NeRF is an extension of Seal-3D for pixel-level editing in dynamic settings.
It allows for consistent edits across sequences by mapping editing actions to a specific timeframe.
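A hypothetical sketch of time-anchored edits: keying each edit to a time window keeps it consistent across the rendered sequence. The record layout and apply rule are assumptions, not the SealD-NeRF implementation.
```python
# Hypothetical time-anchored edits for a dynamic scene.
from dataclasses import dataclass

@dataclass
class TimedEdit:
    t_start: float
    t_end: float
    region: tuple  # e.g. an axis-aligned box (xmin, xmax, ymin, ymax, zmin, zmax)
    color: tuple   # replacement RGB

edits = [TimedEdit(0.2, 0.8, (0, 1, 0, 1, 0, 1), (1.0, 0.0, 0.0))]

def active_edits(t: float):
    """Edits mapped to a specific timeframe stay consistent across the sequence."""
    return [e for e in edits if e.t_start <= t <= e.t_end]

for t in (0.1, 0.5, 0.9):
    print(t, [e.color for e in active_edits(t)])
```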
arXiv Detail & Related papers (2024-02-21T03:45:18Z)
- LatentEditor: Text Driven Local Editing of 3D Scenes [8.966537479017951]
We introduce LatentEditor, a framework for precise and locally controlled editing of neural fields using text prompts.
We successfully embed real-world scenes into the latent space, resulting in a faster and more adaptable NeRF backbone for editing.
Our approach achieves faster editing speeds and superior output quality compared to existing 3D editing models.
arXiv Detail & Related papers (2023-12-14T19:38:06Z)
- Learning Naturally Aggregated Appearance for Efficient 3D Editing [94.47518916521065]
We propose to replace the color field with an explicit 2D appearance aggregation, also called the canonical image.
To avoid the distortion effect and facilitate convenient editing, we complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture lookup.
Our representation, dubbed AGAP, supports various types of 3D editing (e.g., stylization, interactive drawing, and content extraction) without re-optimization.
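The lookup described above can be sketched directly: a projection field maps 3D points to 2D canonical coordinates, and color comes from sampling the editable canonical image at those coordinates. Shapes and the MLP design are assumptions, not the AGAP code.
```python
# Sketch of canonical-image texture lookup via a learned projection field.
import torch
import torch.nn as nn
import torch.nn.functional as F

canonical = torch.rand(1, 3, 256, 256)  # editable 2D appearance aggregation

proj_field = nn.Sequential(             # 3D point -> 2D uv in [-1, 1]
    nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 2), nn.Tanh()
)

def color_of(points: torch.Tensor) -> torch.Tensor:
    uv = proj_field(points)                                   # (N, 2)
    grid = uv.view(1, -1, 1, 2)                               # grid_sample layout
    rgb = F.grid_sample(canonical, grid, align_corners=True)  # (1, 3, N, 1)
    return rgb.squeeze(-1).squeeze(0).t()                     # (N, 3)

pts = torch.rand(8, 3)
print(color_of(pts).shape)  # editing `canonical` changes colors with no re-optimization
```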
arXiv Detail & Related papers (2023-12-11T18:59:31Z)
- Editing 3D Scenes via Text Prompts without Retraining [80.57814031701744]
DN2N is a text-driven editing method that allows for the direct acquisition of a NeRF model with universal editing capabilities.
Our method employs off-the-shelf text-based editing models of 2D images to modify the 3D scene images.
Our method achieves multiple editing types, including but not limited to appearance editing, weather transition, material changing, and style transfer.
arXiv Detail & Related papers (2023-09-10T02:31:50Z)
- RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models [36.236190350126826]
We propose a novel framework that can take RGB images as input and alter the 3D content in neural scenes.
Specifically, we semantically select the target object, and a pre-trained diffusion model guides the NeRF model to generate new 3D objects.
Experimental results show that our algorithm is effective for editing 3D objects in NeRF under different text prompts.
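A hedged sketch of mask-gated guidance: deviation from a diffusion-derived target is penalized only inside the semantically selected region. The loss shape is an assumption; the paper's actual guidance differs in detail.
```python
# Mask-gated guidance loss: only the selected object is steered toward the target.
import torch

def masked_guidance_loss(rendered: torch.Tensor,
                         diffusion_target: torch.Tensor,
                         semantic_mask: torch.Tensor) -> torch.Tensor:
    """Penalize deviation from the guidance image only inside the selected object."""
    diff = (rendered - diffusion_target) ** 2
    return (diff * semantic_mask).sum() / semantic_mask.sum().clamp(min=1.0)

rendered = torch.rand(3, 64, 64, requires_grad=True)
target = torch.rand(3, 64, 64)                     # stand-in for diffusion guidance
mask = (torch.rand(1, 64, 64) > 0.5).float()       # stand-in semantic mask
loss = masked_guidance_loss(rendered, target, mask)
loss.backward()
```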
arXiv Detail & Related papers (2023-06-09T04:49:31Z)
- SINE: Semantic-driven Image-based NeRF Editing with Prior-guided Editing Field [37.8162035179377]
We present a novel semantic-driven NeRF editing approach, which enables users to edit a neural radiance field with a single image.
To achieve this goal, we propose a prior-guided editing field to encode fine-grained geometric and texture editing in 3D space.
Our method achieves photo-realistic 3D editing using only a single edited image, pushing the bound of semantic-driven editing in 3D real-world scenes.
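A sketch of an editing field in this spirit: an MLP predicting a geometric offset and a color offset per 3D point, applied on top of a frozen base radiance field. The architecture is an assumption, not the paper's.
```python
# Illustrative editing field: per-point geometric and texture offsets.
import torch
import torch.nn as nn

class EditingField(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 6))  # 3 for dx, 3 for dc

    def forward(self, x: torch.Tensor):
        out = self.net(x)
        return out[..., :3], out[..., 3:]               # (delta_x, delta_color)

edit = EditingField()
x = torch.rand(16, 3)
dx, dc = edit(x)
x_warped = x + dx  # query the frozen base NeRF geometry at warped points;
# base_color(x_warped) + dc would give the edited appearance.
```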
arXiv Detail & Related papers (2023-03-23T13:58:11Z)
- SKED: Sketch-guided Text-based 3D Editing [49.019881133348775]
We present SKED, a technique for editing 3D shapes represented by NeRFs.
Our technique utilizes as few as two guiding sketches from different views to alter an existing neural field.
We propose novel loss functions to generate the desired edits while preserving the density and radiance of the base instance.
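One way to sketch such a preservation objective: keep the edited field's density and radiance close to the base field's outside the sketched region. The exact losses in the paper differ; this only illustrates the constraint.
```python
# Preservation loss sketch: penalize changes to density and radiance away
# from the edited region.
import torch

def preservation_loss(base_sigma, edit_sigma, base_rgb, edit_rgb, edit_mask):
    """Keep the edited field close to the base field where no edit was requested."""
    keep = 1.0 - edit_mask  # 1 where the edit must not change the scene
    l_sigma = (keep * (edit_sigma - base_sigma) ** 2).mean()
    l_rgb = (keep.unsqueeze(-1) * (edit_rgb - base_rgb) ** 2).mean()
    return l_sigma + l_rgb

n = 1024
loss = preservation_loss(torch.rand(n), torch.rand(n),
                         torch.rand(n, 3), torch.rand(n, 3),
                         (torch.rand(n) > 0.8).float())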
arXiv Detail & Related papers (2023-03-19T18:40:44Z)
- PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields [60.66412075837952]
We present PaletteNeRF, a novel method for appearance editing of neural radiance fields (NeRF) based on 3D color decomposition.
Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases.
We extend our framework with compressed semantic features for semantic-aware appearance editing.
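The decomposition can be written down directly: each point's color is a convex combination of a small global palette, so editing one palette entry recolors everything that uses it. Shapes below are assumed.
```python
# Palette-based color decomposition: colors = per-point weights @ global palette.
import torch

num_points, num_bases = 1024, 4
palette = torch.rand(num_bases, 3)                                   # global palette bases
weights = torch.softmax(torch.rand(num_points, num_bases), dim=-1)   # per-point, sum to 1

colors = weights @ palette                                           # (num_points, 3)

palette_edited = palette.clone()
palette_edited[0] = torch.tensor([0.0, 0.0, 1.0])  # recolor one base to blue
colors_edited = weights @ palette_edited           # edit propagates to all points
```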
arXiv Detail & Related papers (2022-12-21T00:20:01Z)
- Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene-agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
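A sketch of scene mixing with feature volumes: two scene-specific volumes are sampled at the same points, composited with a spatial mask, and decoded by one shared rendering MLP. The volume layout and decoder are assumptions, not the paper's implementation.
```python
# Scene mixing sketch: blend two feature volumes, decode with a shared network.
import torch
import torch.nn as nn
import torch.nn.functional as F

C = 8
vol_a = torch.rand(1, C, 16, 16, 16)  # scene A feature volume
vol_b = torch.rand(1, C, 16, 16, 16)  # scene B feature volume
decoder = nn.Sequential(nn.Linear(C, 64), nn.ReLU(), nn.Linear(64, 4))  # shared: RGB + sigma

def sample(volume, pts):                                      # pts in [-1, 1]^3, shape (N, 3)
    grid = pts.view(1, -1, 1, 1, 3)
    feat = F.grid_sample(volume, grid, align_corners=True)    # (1, C, N, 1, 1)
    return feat.view(C, -1).t()                               # (N, C)

pts = torch.rand(32, 3) * 2 - 1
mask = (pts[:, 0] > 0).float().unsqueeze(-1)  # left half from A, right half from B
feats = mask * sample(vol_a, pts) + (1 - mask) * sample(vol_b, pts)
rgb_sigma = decoder(feats)                    # (N, 4) per-point color and density
```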
arXiv Detail & Related papers (2022-04-22T17:57:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.