CNS-Edit: 3D Shape Editing via Coupled Neural Shape Optimization
- URL: http://arxiv.org/abs/2402.02313v1
- Date: Sun, 4 Feb 2024 01:52:56 GMT
- Title: CNS-Edit: 3D Shape Editing via Coupled Neural Shape Optimization
- Authors: Jingyu Hu, Ka-Hei Hui, Zhengzhe Liu, Hao Zhang, Chi-Wing Fu
- Abstract summary: This paper introduces a new approach based on a coupled representation and a neural volume optimization to implicitly perform 3D shape editing in latent space.
First, we design the coupled neural shape representation for supporting 3D shape editing.
Second, we formulate the coupled neural shape optimization procedure to co-optimize the two coupled components in the representation subject to the editing operation.
- Score: 56.47175002368553
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces a new approach based on a coupled representation and a
neural volume optimization to implicitly perform 3D shape editing in latent
space. This work has three innovations. First, we design the coupled neural
shape (CNS) representation for supporting 3D shape editing. This representation
includes a latent code, which captures high-level global semantics of the
shape, and a 3D neural feature volume, which provides a spatial context to
associate with the local shape changes given by the editing. Second, we
formulate the coupled neural shape optimization procedure to co-optimize the
two coupled components in the representation subject to the editing operation.
Last, we offer various 3D shape editing operators, i.e., copy, resize, delete,
and drag, and derive each into an objective for guiding the CNS optimization,
such that we can iteratively co-optimize the latent code and neural feature
volume to match the editing target. With our approach, we can achieve a rich
variety of editing results that are not only aware of the shape semantics but
are also not easy to achieve by existing approaches. Both quantitative and
qualitative evaluations demonstrate the strong capabilities of our approach
over the state-of-the-art solutions.
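The abstract describes an iterative co-optimization of the latent code (global semantics) and the neural feature volume (local spatial context) against an editing objective. A minimal toy sketch of that loop, assuming a frozen linear stand-in "decoder" and a squared-error editing objective; all names, shapes, and the decoder itself are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, VOL_RES = 8, 4
W_z = rng.normal(size=(VOL_RES**3, LATENT_DIM))  # frozen stand-in decoder weights

def decode(z, F):
    """Map (latent code, feature volume) to a flat occupancy-like field."""
    return W_z @ z + F.ravel()

def edit_objective(pred, target):
    """Squared error against the field implied by the editing operation."""
    return 0.5 * np.sum((pred - target) ** 2)

def cns_edit(target, steps=500, lr=1e-2):
    """Co-optimize latent code z (global) and feature volume F (local)."""
    z = rng.normal(size=LATENT_DIM) * 0.1
    F = np.zeros((VOL_RES, VOL_RES, VOL_RES))
    for _ in range(steps):
        residual = decode(z, F) - target     # d(objective)/d(prediction)
        z -= lr * (W_z.T @ residual)         # gradient step on the latent code
        F -= lr * residual.reshape(F.shape)  # gradient step on the feature volume
    return z, F, edit_objective(decode(z, F), target)

target = rng.normal(size=VOL_RES**3)  # stand-in for e.g. a "drag" objective field
z, F, loss = cns_edit(target)
print(f"final editing loss: {loss:.6f}")
```

The point of the sketch is the coupling: both components receive gradients from the same editing objective, so global semantics and local spatial detail are updated together rather than editing either one in isolation.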
Related papers
- SERF: Fine-Grained Interactive 3D Segmentation and Editing with Radiance Fields [92.14328581392633]
We introduce a novel fine-grained interactive 3D segmentation and editing algorithm with radiance fields, which we refer to as SERF.
Our method entails creating a neural mesh representation by integrating multi-view algorithms with pre-trained 2D models.
Building upon this representation, we introduce a novel surface rendering technique that preserves local information and is robust to deformation.
arXiv Detail & Related papers (2023-12-26T02:50:42Z)
- Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation [49.852533321916844]
We introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field.
Our framework bridges the explicit shape manipulation and the geometric editing of implicit fields by utilizing multigrid barycentric coordinate encoding.
We show the robustness and adaptability of our system through diverse examples and experiments, including the editing of both synthetic objects and real captured data.
arXiv Detail & Related papers (2023-10-09T04:07:00Z)
- 3Deformer: A Common Framework for Image-Guided Mesh Deformation [27.732389685912214]
Given a source 3D mesh with semantic materials, and a user-specified semantic image, 3Deformer can accurately edit the source mesh.
Our 3Deformer is able to produce impressive results and reaches the state-of-the-art level.
arXiv Detail & Related papers (2023-07-19T10:44:44Z)
- Fast-SNARF: A Fast Deformer for Articulated Neural Fields [92.68788512596254]
We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space.
Fast-SNARF is a drop-in replacement for our previous work, SNARF, while significantly improving its computational efficiency.
Because learning of deformation maps is a crucial component in many 3D human avatar methods, we believe that this work represents a significant step towards the practical creation of 3D virtual humans.
arXiv Detail & Related papers (2022-11-28T17:55:34Z)
- Dual Octree Graph Networks for Learning Adaptive Volumetric Shape Representations [21.59311861556396]
Our method encodes the volumetric field of a 3D shape with an adaptive feature volume organized by an octree.
An encoder-decoder network is designed to learn the adaptive feature volume based on the graph convolutions over the dual graph of octree nodes.
Our method effectively encodes shape details, enables fast 3D shape reconstruction, and exhibits good generality for modeling 3D shapes out of training categories.
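The adaptive feature volume described above can be illustrated with a tiny octree builder: cells are refined only where the shape surface may pass, so capacity concentrates near geometry. This is a minimal sketch under assumed names and a simple SDF-based subdivision test, not the paper's actual encoder-decoder design:

```python
import numpy as np

class OctreeNode:
    """A cell of the adaptive feature volume; features live at the leaves."""
    def __init__(self, center, half, depth):
        self.center, self.half, self.depth = center, half, depth
        self.children = None
        self.feature = None

OFFSETS = np.array([[sx, sy, sz] for sx in (-1, 1)
                    for sy in (-1, 1) for sz in (-1, 1)], dtype=float)

def build(sdf, center, half, depth, max_depth):
    """Subdivide only cells that may intersect the surface (|sdf| small)."""
    node = OctreeNode(center, half, depth)
    # A cell of half-width h can contain the zero level set only if the
    # signed distance at its center is at most h * sqrt(3).
    if depth < max_depth and abs(sdf(center)) <= half * np.sqrt(3):
        node.children = [build(sdf, center + (half / 2) * o, half / 2,
                               depth + 1, max_depth) for o in OFFSETS]
    else:
        node.feature = np.zeros(4)  # would be predicted by the encoder network
    return node

def count_leaves(node):
    if node.children is None:
        return 1
    return sum(count_leaves(c) for c in node.children)

# A sphere of radius 0.5 in the cube [-1, 1]^3: refinement clusters at the surface.
sphere_sdf = lambda p: np.linalg.norm(p) - 0.5
root = build(sphere_sdf, np.zeros(3), 1.0, 0, max_depth=4)
print("adaptive leaves:", count_leaves(root), "vs dense grid:", 8**4)
```

Because only near-surface cells subdivide, the leaf count grows roughly with the surface area rather than the volume, which is what makes such adaptive representations cheaper than dense feature grids at the same effective resolution.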
arXiv Detail & Related papers (2022-05-05T17:56:34Z)
- Learning to generate shape from global-local spectra [0.0]
We build our method on top of recent advances in the so-called shape-from-spectrum paradigm.
We consider the spectrum a natural, ready-to-use representation for encoding shape variability.
Our results confirm the improvement of the proposed approach in comparison to existing and alternative methods.
arXiv Detail & Related papers (2021-08-04T16:39:56Z)
- Editing Conditional Radiance Fields [40.685602081728554]
A neural radiance field (NeRF) is a scene model supporting high-quality view synthesis, optimized per scene.
In this paper, we explore enabling user editing of a category-level NeRF trained on a shape category.
We introduce a method for propagating coarse 2D user scribbles to the 3D space, to modify the color or shape of a local region.
arXiv Detail & Related papers (2021-05-13T17:59:48Z)
- Training Data Generating Networks: Shape Reconstruction via Bi-level Optimization [52.17872739634213]
We propose a novel 3D shape representation for 3D shape reconstruction from a single image.
We train a network to generate a training set, which is then fed into another learning algorithm to define the shape.
arXiv Detail & Related papers (2020-10-16T09:52:13Z)
- Learning Local Neighboring Structure for Robust 3D Shape Representation [143.15904669246697]
Representation learning for 3D meshes is important in many computer vision and graphics applications.
We propose a local structure-aware anisotropic convolutional operation (LSA-Conv).
Our model produces significant improvement in 3D shape reconstruction compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-21T13:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.