Mesh-Guided Neural Implicit Field Editing
- URL: http://arxiv.org/abs/2312.02157v1
- Date: Mon, 4 Dec 2023 18:59:58 GMT
- Title: Mesh-Guided Neural Implicit Field Editing
- Authors: Can Wang and Mingming He and Menglei Chai and Dongdong Chen and Jing Liao
- Abstract summary: We propose a new approach that employs a mesh as a guiding mechanism in editing the neural field.
We first introduce a differentiable method using marching tetrahedra for polygonal mesh extraction from the neural implicit field.
We then design a differentiable color extractor to assign colors obtained from the volume renderings to this extracted mesh.
This differentiable colored mesh allows gradient back-propagation from the explicit mesh to the implicit fields, empowering users to easily manipulate the geometry and color of neural implicit fields.
- Score: 42.78979161815414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural implicit fields have emerged as a powerful 3D representation for
reconstructing and rendering photo-realistic views, yet they possess limited
editability. Conversely, explicit 3D representations, such as polygonal meshes,
offer ease of editing but may not be as suitable for rendering high-quality
novel views. To harness the strengths of both representations, we propose a new
approach that employs a mesh as a guiding mechanism in editing the neural
radiance field. We first introduce a differentiable method using marching
tetrahedra for polygonal mesh extraction from the neural implicit field and
then design a differentiable color extractor to assign colors obtained from the
volume renderings to this extracted mesh. This differentiable colored mesh
allows gradient back-propagation from the explicit mesh to the implicit fields,
empowering users to easily manipulate the geometry and color of neural implicit
fields. To enhance user control from coarse-grained to fine-grained levels, we
introduce an octree-based structure into its optimization. This structure
prioritizes the edited regions and the surface part, making our method achieve
fine-grained edits to the neural implicit field and accommodate various user
modifications, including object additions, component removals, specific area
deformations, and adjustments to local and global colors. Through extensive
experiments involving diverse scenes and editing operations, we have
demonstrated the capabilities and effectiveness of our method. Our project page
is: https://cassiepython.github.io/MNeuEdit/
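A minimal sketch of the core mechanism, under our own assumptions (a toy SDF-plus-color MLP standing in for the implicit field, a single edge in place of a full marching-tetrahedra pass, and simple L2 edit targets instead of volume-rendered colors), illustrating how a surface vertex interpolated from the field's signed-distance values, together with a color queried at that vertex, lets gradients from explicit mesh edits flow back into the field's parameters:

import torch
import torch.nn as nn

class ToyImplicitField(nn.Module):
    """Toy neural implicit field mapping a 3D point to (signed distance, RGB color)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),            # 1 SDF value + 3 color channels
        )

    def forward(self, x):
        out = self.net(x)
        return out[..., :1], torch.sigmoid(out[..., 1:])

def surface_vertex_and_color(field, a, b):
    """Differentiable zero-crossing along edge (a, b), as in one edge step of
    marching tetrahedra, followed by a color query at the interpolated vertex."""
    sdf_a, _ = field(a)
    sdf_b, _ = field(b)
    t = sdf_a / (sdf_a - sdf_b + 1e-8)       # interpolation weight depends on the field
    v = a + t * (b - a)                       # gradients flow from v back into the field
    _, color = field(v)                       # differentiable "color extractor" (simplified)
    return v, color

field = ToyImplicitField()
optimizer = torch.optim.Adam(field.parameters(), lr=1e-3)

# One edge straddling the surface, plus user-edited targets for the extracted vertex.
a = torch.tensor([[0.0, 0.0, -0.1]])
b = torch.tensor([[0.0, 0.0,  0.1]])
target_vertex = torch.tensor([[0.0, 0.0, 0.05]])   # edited vertex position
target_color  = torch.tensor([[1.0, 0.0, 0.0]])    # edited vertex color (red)

for _ in range(200):
    v, c = surface_vertex_and_color(field, a, b)
    loss = ((v - target_vertex) ** 2).mean() + ((c - target_color) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the actual method the mesh is extracted by marching tetrahedra over the whole field, vertex colors come from volume rendering, and an octree-based structure prioritizes edited regions and the surface; the toy loop above only shows the gradient path that makes mesh-guided edits differentiable.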
Related papers
- Learning Naturally Aggregated Appearance for Efficient 3D Editing [94.47518916521065]
We propose to replace the color field with an explicit 2D appearance aggregation, also called a canonical image.
To avoid the distortion effect and facilitate convenient editing, we complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture lookup.
Our representation, dubbed AGAP, supports various ways of 3D editing (e.g., stylization, interactive drawing, and content extraction) without re-optimization.
arXiv Detail & Related papers (2023-12-11T18:59:31Z) - Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation [49.852533321916844]
We introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field.
Our framework bridges the explicit shape manipulation and the geometric editing of implicit fields by utilizing multigrid barycentric coordinate encoding.
We show the robustness and adaptability of our system through diverse examples and experiments, including the editing of both synthetic objects and real captured data.
arXiv Detail & Related papers (2023-10-09T04:07:00Z) - UVA: Towards Unified Volumetric Avatar for View Synthesis, Pose rendering, Geometry and Texture Editing [83.0396740127043]
We propose a new approach named Unified Volumetric Avatar (UVA) that enables local editing of both geometry and texture.
UVA transforms each observation point to a canonical space using a skinning motion field and represents geometry and texture in separate neural fields.
Experiments on multiple human avatars demonstrate that our UVA achieves novel view synthesis and novel pose rendering.
arXiv Detail & Related papers (2023-04-14T07:39:49Z) - SKED: Sketch-guided Text-based 3D Editing [49.019881133348775]
We present SKED, a technique for editing 3D shapes represented by NeRFs.
Our technique utilizes as few as two guiding sketches from different views to alter an existing neural field.
We propose novel loss functions to generate the desired edits while preserving the density and radiance of the base instance.
arXiv Detail & Related papers (2023-03-19T18:40:44Z) - PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields [60.66412075837952]
We present PaletteNeRF, a novel method for appearance editing of neural radiance fields (NeRF) based on 3D color decomposition.
Our method decomposes the appearance of each 3D point into a linear combination of palette-based bases (a minimal sketch of this decomposition appears after this list).
We extend our framework with compressed semantic features for semantic-aware appearance editing.
arXiv Detail & Related papers (2022-12-21T00:20:01Z) - NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing [39.71252429542249]
We present a novel mesh-based representation by encoding the neural implicit field with disentangled geometry and texture codes on mesh vertices.
We develop several techniques, including learnable sign indicators, to magnify the spatial distinguishability of the mesh-based representation.
Experiments and editing examples on both real and synthetic data demonstrate the superiority of our method on representation quality and editing ability.
arXiv Detail & Related papers (2022-07-25T05:30:50Z) - NeRF-Editing: Geometry Editing of Neural Radiance Fields [43.256317094173795]
Implicit neural rendering has shown great potential in novel view synthesis of a scene.
We propose a method that allows users to perform controllable shape deformation on the implicit representation of the scene.
Our framework can achieve ideal editing results not only on synthetic data, but also on real scenes captured by users.
arXiv Detail & Related papers (2022-05-10T15:35:52Z) - Editing Conditional Radiance Fields [40.685602081728554]
A neural radiance field (NeRF) is a scene model supporting high-quality view synthesis, optimized per scene.
In this paper, we explore enabling user editing of a category-level NeRF trained on a shape category.
We introduce a method for propagating coarse 2D user scribbles to the 3D space, to modify the color or shape of a local region.
arXiv Detail & Related papers (2021-05-13T17:59:48Z)
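Following up on the PaletteNeRF entry above, here is a minimal sketch, under our own assumptions (a small global palette and a toy weight MLP, not the authors' code), of the palette-based decomposition in which each point's color is a weighted combination of palette bases, so editing one palette entry recolors every point that uses it:

import torch
import torch.nn as nn

class PaletteAppearance(nn.Module):
    """Toy palette decomposition: color(x) = sum_i w_i(x) * palette_i."""
    def __init__(self, num_palette=4, hidden=64):
        super().__init__()
        self.palette = nn.Parameter(torch.rand(num_palette, 3))   # global base colors
        self.weight_net = nn.Sequential(                          # per-point blend weights
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, num_palette),
        )

    def forward(self, x):
        w = torch.softmax(self.weight_net(x), dim=-1)   # (N, num_palette), sums to 1
        return w @ self.palette                          # (N, 3) blended colors

model = PaletteAppearance()
points = torch.rand(8, 3)
colors_before = model(points)

# Global recoloring: edit one palette entry; every point weighted toward it changes consistently.
with torch.no_grad():
    model.palette[0] = torch.tensor([1.0, 0.0, 0.0])
colors_after = model(points)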