NeRF-Editing: Geometry Editing of Neural Radiance Fields
- URL: http://arxiv.org/abs/2205.04978v1
- Date: Tue, 10 May 2022 15:35:52 GMT
- Title: NeRF-Editing: Geometry Editing of Neural Radiance Fields
- Authors: Yu-Jie Yuan, Yang-Tian Sun, Yu-Kun Lai, Yuewen Ma, Rongfei Jia, Lin Gao
- Abstract summary: Implicit neural rendering has shown great potential in novel view synthesis of a scene.
We propose a method that allows users to perform controllable shape deformation on the implicit representation of the scene.
Our framework can achieve ideal editing results not only on synthetic data, but also on real scenes captured by users.
- Score: 43.256317094173795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural rendering, especially Neural Radiance Field (NeRF), has shown
great potential in novel view synthesis of a scene. However, current NeRF-based
methods do not let users perform controlled shape deformation in the
scene. While existing works have proposed approaches to modify the
radiance field according to the user's constraints, the modification is limited
to color editing or object translation and rotation. In this paper, we propose
a method that allows users to perform controllable shape deformation on the
implicit representation of the scene, and synthesizes the novel view images of
the edited scene without re-training the network. Specifically, we establish a
correspondence between the extracted explicit mesh representation and the
implicit neural representation of the target scene. Users can first utilize
well-developed mesh-based deformation methods to deform the mesh representation
of the scene. Our method then utilizes user edits from the mesh representation
to bend the camera rays by introducing a tetrahedral mesh as a proxy, obtaining
the rendering results of the edited scene. Extensive experiments demonstrate
that our framework can achieve ideal editing results not only on synthetic
data, but also on real scenes captured by users.
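To make the ray-bending idea concrete, here is a minimal sketch of the tetrahedral-proxy mapping described above: each sample on a straight ray cast in the edited (deformed) space is located inside a tetrahedron of the proxy, and its barycentric coordinates transfer it back to the canonical space in which the NeRF was trained. All names and interfaces here (`verts_deformed`, `verts_canonical`, the brute-force tetrahedron search) are illustrative assumptions, not the authors' code.

```python
# Sketch of tetrahedral-proxy ray bending (assumed interfaces):
# `verts_deformed` / `verts_canonical` are (V, 3) proxy vertex arrays
# after and before the user's edit; `tets` is a (T, 4) integer array
# of vertex indices, one row per tetrahedron.
import numpy as np

def barycentric_coords(p, tet):
    """Barycentric coordinates of point p w.r.t. a (4, 3) tetrahedron."""
    # Solve [v1-v0 | v2-v0 | v3-v0] w = p - v0 for the last three
    # weights; the first weight is whatever remains.
    T = (tet[1:] - tet[0]).T
    w123 = np.linalg.solve(T, p - tet[0])
    return np.concatenate(([1.0 - w123.sum()], w123))

def bend_sample(p_deformed, tets, verts_deformed, verts_canonical):
    """Map one ray sample from the deformed (edited) space back to the
    canonical space in which the NeRF was trained."""
    for tet_idx in tets:  # a real system would use a spatial index
        w = barycentric_coords(p_deformed, verts_deformed[tet_idx])
        if np.all(w >= -1e-8):  # sample lies inside this tetrahedron
            # The same barycentric weights, applied to the canonical
            # vertices, give the pre-edit query position for the NeRF.
            return w @ verts_canonical[tet_idx]
    return p_deformed  # outside the proxy: geometry is unchanged
```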
Related papers
- Mesh-Guided Neural Implicit Field Editing [42.78979161815414]
We propose a new approach that employs a mesh as a guiding mechanism in editing the neural field.
We first introduce a differentiable method using marching tetrahedra for polygonal mesh extraction from the neural implicit field.
We then design a differentiable color extractor to assign colors obtained from the volume renderings to this extracted mesh.
This differentiable colored mesh allows gradient back-propagation from the explicit mesh to the implicit fields, empowering users to easily manipulate the geometry and color of neural implicit fields.
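As a hedged illustration of why marching tetrahedra lends itself to differentiation, the sketch below shows the step that places a surface vertex at the zero crossing of the signed distance along a tetrahedron edge: the crossing position is a smooth function of the field values, so gradients on the extracted mesh flow back into the implicit field. Names and shapes are assumptions, not the paper's API.

```python
import torch

def edge_crossing(x0, x1, s0, s1):
    """Surface vertex on the edge x0 -> x1, given SDF values s0, s1 of
    opposite sign at its endpoints; positions have shape (..., 3),
    field values shape (...)."""
    # Linear zero crossing: t is differentiable w.r.t. s0 and s1, which
    # is what lets a loss on the mesh back-propagate into the field.
    t = s0 / (s0 - s1)
    return x0 + t.unsqueeze(-1) * (x1 - x0)
```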
arXiv Detail & Related papers (2023-12-04T18:59:58Z)
- Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation [49.852533321916844]
We introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field.
Our framework bridges the explicit shape manipulation and the geometric editing of implicit fields by utilizing multigrid barycentric coordinate encoding.
We show the robustness and adaptability of our system through diverse examples and experiments, including the editing of both synthetic objects and real captured data.
arXiv Detail & Related papers (2023-10-09T04:07:00Z)
- SKED: Sketch-guided Text-based 3D Editing [49.019881133348775]
We present SKED, a technique for editing 3D shapes represented by NeRFs.
Our technique utilizes as few as two guiding sketches from different views to alter an existing neural field.
We propose novel loss functions to generate the desired edits while preserving the density and radiance of the base instance.
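One plausible form such a preservation term could take is sketched below (an assumption for illustration, not SKED's actual losses): penalize changes in density and radiance outside the region indicated by the sketches. `base` and `edited` stand for NeRF-like callables returning (rgb, sigma).

```python
import torch

def preservation_loss(base, edited, pts, dirs, edit_mask):
    """Keep the edited field close to the base field away from the
    edit region. edit_mask: (N,) bool, True inside the edited region."""
    rgb_b, sigma_b = base(pts, dirs)      # (N, 3), (N,)
    rgb_e, sigma_e = edited(pts, dirs)
    keep = (~edit_mask).float()           # 1 where content is preserved
    rgb_term = (keep.unsqueeze(-1) * (rgb_e - rgb_b) ** 2).mean()
    sigma_term = (keep * (sigma_e - sigma_b) ** 2).mean()
    return rgb_term + sigma_term
```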
arXiv Detail & Related papers (2023-03-19T18:40:44Z)
- EditableNeRF: Editing Topologically Varying Neural Radiance Fields by Key Points [7.4100592531979625]
We propose editable neural radiance fields that enable end-users to easily edit dynamic scenes.
Our network is trained fully automatically and models topologically varying dynamics using our picked-out surface key points.
Our method supports intuitive multi-dimensional (up to 3D) editing and can generate novel scenes that are unseen in the input sequence.
arXiv Detail & Related papers (2022-12-07T06:08:03Z)
- Deforming Radiance Fields with Cages [65.57101724686527]
We propose a new type of deformation of the radiance field: free-form radiance field deformation.
We use a triangular mesh that encloses the foreground object, called a cage, as the deformation interface.
We propose a novel formulation that extends cage-based deformation to the radiance field, mapping the position and the view direction of the sampling points from the deformed space to the canonical space.
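Mapping the view direction along with the position can be sketched by deforming two nearby points on the ray and renormalizing their difference; `deform_to_canonical` below is a stand-in for the paper's cage-based coordinate mapping, which is not reproduced here.

```python
import numpy as np

def bent_direction(p, d, deform_to_canonical, eps=1e-4):
    """Approximate the canonical-space view direction for the sample at
    p with deformed-space unit direction d, by finite differences."""
    q0 = deform_to_canonical(p)
    q1 = deform_to_canonical(p + eps * d)
    v = q1 - q0
    return v / np.linalg.norm(v)
```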
arXiv Detail & Related papers (2022-07-25T16:08:55Z)
- NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing [39.71252429542249]
We present a novel mesh-based representation by encoding the neural implicit field with disentangled geometry and texture codes on mesh vertices.
We develop several techniques including learnable sign indicators to magnify spatial distinguishability of mesh-based representation.
Experiments and editing examples on both real and synthetic data demonstrate the superiority of our method on representation quality and editing ability.
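A speculative sketch of the query pattern such a mesh-anchored representation implies: a 3D sample gathers the codes stored on nearby mesh vertices, here weighted by inverse distance. Vertex selection, weighting, and code layout are simplifications for illustration, not NeuMesh's actual scheme.

```python
import numpy as np

def query_vertex_codes(p, verts, codes, k=8):
    """Interpolate per-vertex feature codes (geometry or texture) at a
    query point p. verts: (V, 3); codes: (V, C)."""
    d = np.linalg.norm(verts - p, axis=1)
    idx = np.argsort(d)[:k]          # k nearest mesh vertices
    w = 1.0 / (d[idx] + 1e-8)        # inverse-distance weights
    w /= w.sum()
    return w @ codes[idx]            # (C,) interpolated code
```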
arXiv Detail & Related papers (2022-07-25T05:30:50Z)
- Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
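The coupling described above can be sketched as a scene-specific feature volume sampled at each 3D point and decoded by a rendering network shared across scenes; swapping the volume while keeping the decoder is what enables manipulations such as scene mixing. Shapes and names are illustrative assumptions, not Control-NeRF's code.

```python
import torch
import torch.nn.functional as F

class SharedRenderer(torch.nn.Module):
    """Scene-agnostic decoder applied to scene-specific feature volumes."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(feat_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 4))              # RGB + density

    def forward(self, volume, pts):
        # volume: (1, C, D, H, W) learnable features for one scene;
        # pts: (N, 3) sample positions normalized to [-1, 1].
        grid = pts.view(1, 1, 1, -1, 3)
        feats = F.grid_sample(volume, grid, align_corners=True)
        feats = feats.view(volume.shape[1], -1).t()  # (N, C)
        return self.mlp(feats)                       # (N, 4)
```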
arXiv Detail & Related papers (2022-04-22T17:57:00Z)
- Editing Conditional Radiance Fields [40.685602081728554]
A neural radiance field (NeRF) is a scene model supporting high-quality view synthesis, optimized per scene.
In this paper, we explore enabling user editing of a category-level NeRF trained on a shape category.
We introduce a method for propagating coarse 2D user scribbles to the 3D space, to modify the color or shape of a local region.
arXiv Detail & Related papers (2021-05-13T17:59:48Z)
- Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)