Editing Conditional Radiance Fields
- URL: http://arxiv.org/abs/2105.06466v1
- Date: Thu, 13 May 2021 17:59:48 GMT
- Title: Editing Conditional Radiance Fields
- Authors: Steven Liu, Xiuming Zhang, Zhoutong Zhang, Richard Zhang, Jun-Yan Zhu,
Bryan Russell
- Abstract summary: A neural radiance field (NeRF) is a scene model supporting high-quality view synthesis, optimized per scene.
In this paper, we explore enabling user editing of a category-level NeRF trained on a shape category.
We introduce a method for propagating coarse 2D user scribbles to the 3D space, to modify the color or shape of a local region.
- Score: 40.685602081728554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A neural radiance field (NeRF) is a scene model supporting high-quality view
synthesis, optimized per scene. In this paper, we explore enabling user editing
of a category-level NeRF - also known as a conditional radiance field - trained
on a shape category. Specifically, we introduce a method for propagating coarse
2D user scribbles to the 3D space, to modify the color or shape of a local
region. First, we propose a conditional radiance field that incorporates new
modular network components, including a shape branch that is shared across
object instances. Observing multiple instances of the same category, our model
learns underlying part semantics without any supervision, thereby allowing the
propagation of coarse 2D user scribbles to the entire 3D region (e.g., chair
seat). Next, we propose a hybrid network update strategy that targets specific
network components, which balances efficiency and accuracy. During user
interaction, we formulate an optimization problem that both satisfies the
user's constraints and preserves the original object structure. We demonstrate
our approach on various editing tasks over three shape datasets and show that
it outperforms prior neural editing approaches. Finally, we edit the appearance
and shape of a real photograph and show that the edit propagates to
extrapolated novel views.
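To make the modular design above concrete, here is a minimal sketch, in PyTorch, of a conditional radiance field with a shape branch shared across object instances and per-instance shape and color codes. All names and layer sizes are assumptions for illustration, not the authors' released code.
```python
# Minimal sketch of a conditional radiance field (hypothetical names and
# sizes; not the authors' released code). A shape branch shared across
# instances consumes a per-instance shape code; a separate color branch
# consumes the shape features, the view direction, and a color code.
import torch
import torch.nn as nn

class ConditionalRadianceField(nn.Module):
    def __init__(self, num_instances, code_dim=64, hidden=128):
        super().__init__()
        self.shape_codes = nn.Embedding(num_instances, code_dim)
        self.color_codes = nn.Embedding(num_instances, code_dim)
        # Shared shape branch: (3D point, shape code) -> density + features.
        self.shape_branch = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.density_head = nn.Linear(hidden, 1)
        # Color branch: (shape features, view dir, color code) -> RGB.
        self.color_branch = nn.Sequential(
            nn.Linear(hidden + 3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, x, view_dir, instance_id):
        zs = self.shape_codes(instance_id)   # (N, code_dim)
        zc = self.color_codes(instance_id)
        h = self.shape_branch(torch.cat([x, zs], dim=-1))
        sigma = torch.relu(self.density_head(h))
        rgb = self.color_branch(torch.cat([h, view_dir, zc], dim=-1))
        return rgb, sigma
```
Under such a factorization, a hybrid update strategy of the kind the abstract describes could handle a color edit by optimizing only the color code and part of the color branch against the scribble constraints, with a penalty on deviation from the original parameters to preserve object structure.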
Related papers
- CNS-Edit: 3D Shape Editing via Coupled Neural Shape Optimization [56.47175002368553]
This paper introduces a new approach based on a coupled representation and a neural volume optimization to implicitly perform 3D shape editing in latent space.
First, we design the coupled neural shape representation for supporting 3D shape editing.
Second, we formulate the coupled neural shape optimization procedure to co-optimize the two coupled components in the representation subject to the editing operation.
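As a rough sketch of what co-optimizing two coupled latent components subject to an editing operation could look like (the names, optimizer, and regularizer here are assumptions, not the paper's formulation):
```python
# Hypothetical sketch of coupled latent co-optimization (assumed interfaces).
import torch

def coupled_edit(z_a, z_b, decode, edit_loss, reg_weight=0.1, steps=200):
    """Co-optimize coupled latents so the decoded shape satisfies the edit
    while staying close to the original latents."""
    z_a0, z_b0 = z_a.detach().clone(), z_b.detach().clone()
    za = z_a0.clone().requires_grad_(True)
    zb = z_b0.clone().requires_grad_(True)
    opt = torch.optim.Adam([za, zb], lr=1e-2)
    for _ in range(steps):
        shape = decode(za, zb)       # e.g., decode to an SDF/occupancy volume
        loss = edit_loss(shape)      # penalty for violating the user's edit
        loss = loss + reg_weight * ((za - z_a0).pow(2).mean()
                                    + (zb - z_b0).pow(2).mean())
        opt.zero_grad(); loss.backward(); opt.step()
    return za.detach(), zb.detach()
```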
arXiv Detail & Related papers (2024-02-04T01:52:56Z)
- SERF: Fine-Grained Interactive 3D Segmentation and Editing with Radiance Fields [92.14328581392633]
We introduce a novel fine-grained interactive 3D segmentation and editing algorithm with radiance fields, which we refer to as SERF.
Our method entails creating a neural mesh representation by integrating multi-view algorithms with pre-trained 2D models.
Building upon this representation, we introduce a novel surface rendering technique that preserves local information and is robust to deformation.
arXiv Detail & Related papers (2023-12-26T02:50:42Z)
- Learning Naturally Aggregated Appearance for Efficient 3D Editing [94.47518916521065]
We propose to replace the color field with an explicit 2D appearance aggregation, also called a canonical image.
To avoid the distortion effect and facilitate convenient editing, we complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture lookup.
Our representation, dubbed AGAP, supports various kinds of 3D editing (e.g., stylization, interactive drawing, and content extraction) without requiring re-optimization.
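A minimal sketch of the texture-lookup idea, assuming a projection-field MLP and a learnable canonical image (all names are hypothetical):
```python
# Sketch of projection-field texture lookup (assumed interfaces).
import torch
import torch.nn.functional as F

def lookup_color(points, projection_field, canonical_image):
    """points: (N, 3); projection_field: MLP mapping 3D points to 2D
    coordinates; canonical_image: (1, 3, H, W) learnable texture."""
    uv = torch.tanh(projection_field(points))      # (N, 2) in [-1, 1]
    grid = uv.view(1, -1, 1, 2)                    # layout for grid_sample
    rgb = F.grid_sample(canonical_image, grid, align_corners=True)
    return rgb.view(3, -1).t()                     # (N, 3)
```
Because all appearance is aggregated into one 2D image, editing the scene reduces to editing that image (e.g., painting or stylizing it), which is what makes re-optimization unnecessary.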
arXiv Detail & Related papers (2023-12-11T18:59:31Z)
- Mesh-Guided Neural Implicit Field Editing [42.78979161815414]
We propose a new approach that employs a mesh as a guiding mechanism in editing the neural field.
We first introduce a differentiable method using marching tetrahedra for polygonal mesh extraction from the neural implicit field.
We then design a differentiable color extractor to assign colors obtained from the volume renderings to this extracted mesh.
This differentiable colored mesh allows gradient back-propagation from the explicit mesh to the implicit fields, empowering users to easily manipulate the geometry and color of neural implicit fields.
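The gradient path can be sketched as follows (assumed interfaces, not the paper's code): colors assigned to extracted vertices stay in the autograd graph, so a loss on the colored mesh updates the implicit field.
```python
# Sketch of a differentiable colored mesh (assumed interfaces).
import torch

def color_mesh_vertices(vertices, color_field):
    """vertices: (V, 3) extracted by marching tetrahedra; color_field: MLP
    mapping 3D points to RGB. The result stays differentiable w.r.t. the
    implicit field's parameters."""
    return color_field(vertices)                    # (V, 3)

def scribble_loss(vertex_colors, selected, target_rgb):
    """Pull colors of user-selected vertices toward a target color; the
    gradient flows back through color_field into the implicit field."""
    return ((vertex_colors[selected] - target_rgb) ** 2).mean()
```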
arXiv Detail & Related papers (2023-12-04T18:59:58Z)
- Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation [49.852533321916844]
We introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field.
Our framework bridges the explicit shape manipulation and the geometric editing of implicit fields by utilizing multigrid barycentric coordinate encoding.
We show the robustness and adaptability of our system through diverse examples and experiments, including the editing of both synthetic objects and real captured data.
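A minimal sketch of barycentric coordinate encoding for a single tetrahedron (an assumed formulation for illustration):
```python
# Sketch of barycentric coordinates inside one tetrahedron (assumed form).
import torch

def barycentric_coords(p, tet_verts):
    """p: (3,) query point; tet_verts: (4, 3) tetrahedron vertices.
    Returns w: (4,) with w @ tet_verts == p and w.sum() == 1."""
    A = torch.cat([tet_verts.t(), torch.ones(1, 4)], dim=0)  # (4, 4)
    b = torch.cat([p, torch.ones(1)])                        # (4,)
    return torch.linalg.solve(A, b)
```
This is one plausible reading of the bridge the summary describes: when the explicit tetrahedra are deformed, querying the implicit field at fixed barycentric coordinates carries the content along with the deformation.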
arXiv Detail & Related papers (2023-10-09T04:07:00Z)
- PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields [60.66412075837952]
We present PaletteNeRF, a novel method for appearance editing of neural radiance fields (NeRF) based on 3D color decomposition.
Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases.
We extend our framework with compressed semantic features for semantic-aware appearance editing.
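The decomposition can be sketched as follows (the shapes and the softmax weighting are assumptions; the paper's exact parameterization may differ):
```python
# Sketch of palette-based color decomposition (assumed parameterization).
import torch

def palette_color(weight_logits, palette):
    """weight_logits: (N, K) per-point blending logits from the network;
    palette: (K, 3) base colors shared by the whole scene."""
    w = torch.softmax(weight_logits, dim=-1)    # convex weights per point
    return w @ palette                          # (N, 3) blended RGB
```
An appearance edit then reduces to changing a palette entry; every point that blends that base shifts consistently across all views.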
arXiv Detail & Related papers (2022-12-21T00:20:01Z)
- Decomposing NeRF for Editing via Feature Field Distillation [14.628761232614762]
Editing a scene represented by a NeRF is challenging, as the underlying connectionist representations are not object-centric or compositional.
In this work, we tackle the problem of semantic scene decomposition of NeRFs to enable query-based local editing.
We propose to distill the knowledge of off-the-shelf, self-supervised 2D image feature extractors into a 3D feature field optimized in parallel to the radiance field.
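Such a distillation loss might look as follows, assuming the feature field is composited along rays with the same weights as color (interfaces are hypothetical):
```python
# Sketch of 2D-to-3D feature distillation (assumed interfaces).
import torch

def distill_loss(render_weights, point_features, teacher_features):
    """render_weights: (R, S) alpha-compositing weights per ray sample;
    point_features: (R, S, D) from the 3D feature field;
    teacher_features: (R, D) from a frozen 2D extractor (e.g., DINO)."""
    rendered = (render_weights.unsqueeze(-1) * point_features).sum(dim=1)
    return ((rendered - teacher_features) ** 2).mean()
```
At edit time, a user query (e.g., a text or patch embedding) can be matched against the rendered features to select the 3D region to modify.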
arXiv Detail & Related papers (2022-05-31T07:56:09Z)
- NeRF-Editing: Geometry Editing of Neural Radiance Fields [43.256317094173795]
Implicit neural rendering has shown great potential in novel view synthesis of a scene.
We propose a method that allows users to perform controllable shape deformation on the implicit representation of the scene.
Our framework achieves the desired editing results not only on synthetic data, but also on real scenes captured by users.
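One common way to realize such deformation, sketched here under assumed interfaces, is to bend sample points back into the undeformed space before querying the unchanged radiance field:
```python
# Sketch of deformation-based editing via bent queries (assumed interfaces).
import torch

def render_deformed(points, view_dirs, deform, nerf):
    """points: (N, 3) ray samples in the edited (deformed) space; deform:
    maps them back to the original space (e.g., driven by a deformed cage
    or extracted mesh); nerf: the unchanged radiance field."""
    canonical = deform(points)          # undo the user's deformation
    return nerf(canonical, view_dirs)   # reuse original geometry/appearance
```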
arXiv Detail & Related papers (2022-05-10T15:35:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.