3D Neural Sculpting (3DNS): Editing Neural Signed Distance Functions
- URL: http://arxiv.org/abs/2209.13971v1
- Date: Wed, 28 Sep 2022 10:05:16 GMT
- Title: 3D Neural Sculpting (3DNS): Editing Neural Signed Distance Functions
- Authors: Petros Tzathas, Petros Maragos, Anastasios Roussos
- Abstract summary: In this work, we propose the first method for efficient interactive editing of signed distance functions expressed through neural networks.
Inspired by 3D sculpting software for meshes, we use a brush-based framework that is intuitive and can in the future be used by sculptors and digital artists.
- Score: 34.39282814876276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, implicit surface representations through neural networks
that encode the signed distance have gained popularity and have achieved
state-of-the-art results in various tasks (e.g. shape representation, shape
reconstruction, and learning shape priors). However, in contrast to
conventional shape representations such as polygon meshes, the implicit
representations cannot be easily edited and existing works that attempt to
address this problem are extremely limited. In this work, we propose the first
method for efficient interactive editing of signed distance functions expressed
through neural networks, allowing free-form editing. Inspired by 3D sculpting
software for meshes, we use a brush-based framework that is intuitive and can
in the future be used by sculptors and digital artists. In order to localize
the desired surface deformations, we regulate the network by using a copy of it
to sample the previously expressed surface. We introduce a novel framework for
simulating sculpting-style surface edits, in conjunction with interactive
surface sampling and efficient adaptation of network weights. We qualitatively
and quantitatively evaluate our method on a variety of 3D objects and
under many different edits. The reported results clearly show that our method
yields high accuracy, in terms of achieving the desired edits, while at the
same time preserving the geometry outside the interaction areas.
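The abstract's core mechanism, regularizing the network with a frozen copy of itself so that edits stay local, can be illustrated as a fine-tuning step. Below is a minimal sketch, not the paper's implementation: it assumes a PyTorch MLP `sdf_net` mapping (N, 3) points to (N, 1) signed distances, and the smooth-bump brush target and uniform box sampling are simplified stand-ins for the paper's sculpting brushes and interactive surface sampling.

```python
import copy
import torch

def sample_brush_points(center, radius, n=2048):
    # Uniform samples in a ball around the brush center (hypothetical helper).
    v = torch.randn(n, 3)
    v = v / v.norm(dim=-1, keepdim=True)
    r = radius * torch.rand(n, 1) ** (1 / 3)
    return center + r * v

def sculpt_step(sdf_net, optimizer, center, radius, amplitude=0.05):
    # Freeze a copy of the network: it "remembers" the pre-edit surface,
    # mirroring the idea of regulating the network with a copy of itself.
    frozen = copy.deepcopy(sdf_net)
    for p in frozen.parameters():
        p.requires_grad_(False)

    # Inside the brush region: shift the old distances by a smooth bump so
    # the surface is pushed outward under the brush (assumed brush model).
    x_edit = sample_brush_points(center, radius)
    w = torch.clamp(1 - (x_edit - center).norm(dim=-1, keepdim=True) / radius, min=0)
    d_edit = frozen(x_edit) - amplitude * w ** 2

    # Away from the brush: supervise with the frozen copy so the geometry
    # outside the interaction area is preserved.
    x_keep = torch.rand(4096, 3) * 2 - 1
    d_keep = frozen(x_keep)

    loss = ((sdf_net(x_edit) - d_edit) ** 2).mean() + \
           ((sdf_net(x_keep) - d_keep) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
sculpt_step(net, opt, center=torch.tensor([0., 0., 0.5]), radius=0.2)
```

Repeating such steps while the brush moves gives interactive editing; the paper additionally samples the previously expressed surface itself, which this sketch approximates with uniform box samples.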
Related papers
- CNS-Edit: 3D Shape Editing via Coupled Neural Shape Optimization [56.47175002368553]
This paper introduces a new approach based on a coupled representation and a neural volume optimization to implicitly perform 3D shape editing in latent space.
First, we design the coupled neural shape representation for supporting 3D shape editing.
Second, we formulate the coupled neural shape optimization procedure to co-optimize the two coupled components in the representation subject to the editing operation.
arXiv Detail & Related papers (2024-02-04T01:52:56Z) - SERF: Fine-Grained Interactive 3D Segmentation and Editing with Radiance Fields [92.14328581392633]
We introduce a novel fine-grained interactive 3D segmentation and editing algorithm with radiance fields, which we refer to as SERF.
Our method entails creating a neural mesh representation by integrating multi-view algorithms with pre-trained 2D models.
Building upon this representation, we introduce a novel surface rendering technique that preserves local information and is robust to deformation.
arXiv Detail & Related papers (2023-12-26T02:50:42Z) - Neural Impostor: Editing Neural Radiance Fields with Explicit Shape
Manipulation [49.852533321916844]
We introduce Neural Impostor, a hybrid representation incorporating an explicit tetrahedral mesh alongside a multigrid implicit field.
Our framework bridges the explicit shape manipulation and the geometric editing of implicit fields by utilizing multigrid barycentric coordinate encoding.
We show the robustness and adaptability of our system through diverse examples and experiments, including the editing of both synthetic objects and real captured data.
arXiv Detail & Related papers (2023-10-09T04:07:00Z) - 3Deformer: A Common Framework for Image-Guided Mesh Deformation [27.732389685912214]
- 3Deformer: A Common Framework for Image-Guided Mesh Deformation [27.732389685912214]
Given a source 3D mesh with semantic materials and a user-specified semantic image, 3Deformer can accurately edit the source mesh.
3Deformer produces impressive results and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-19T10:44:44Z) - Learning Locally Editable Virtual Humans [37.95173373011365]
We propose a novel hybrid representation and end-to-end trainable network architecture to model fully editable neural avatars.
At the core of our work lies a representation that combines the modeling power of neural fields with the ease of use and inherent 3D consistency of skinned meshes.
Our method generates diverse detailed avatars and achieves better model fitting performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-04-28T23:06:17Z) - Delicate Textured Mesh Recovery from NeRF via Adaptive Surface
Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z) - Learning Neural Implicit Representations with Surface Signal
Parameterizations [14.835882967340968]
We present a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data.
Our model remains compatible with existing mesh-based digital content with appearance data.
arXiv Detail & Related papers (2022-11-01T15:10:58Z) - NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for
- NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing [39.71252429542249]
We present a novel mesh-based representation by encoding the neural implicit field with disentangled geometry and texture codes on mesh vertices.
We develop several techniques including learnable sign indicators to magnify spatial distinguishability of mesh-based representation.
Experiments and editing examples on both real and synthetic data demonstrate the superiority of our method on representation quality and editing ability.
arXiv Detail & Related papers (2022-07-25T05:30:50Z) - Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve shape quality by leveraging cross-view information with a graph convolutional network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z) - Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)