INST-Sculpt: Interactive Stroke-based Neural SDF Sculpting
- URL: http://arxiv.org/abs/2502.02891v1
- Date: Wed, 05 Feb 2025 05:25:15 GMT
- Title: INST-Sculpt: Interactive Stroke-based Neural SDF Sculpting
- Authors: Fizza Rubab, Yiying Tong
- Abstract summary: We introduce a framework that enables interactive surface sculpting edits directly on neural implicit representations.
Our approach allows users to perform stroke-based modifications on the fly, ensuring intuitive shape manipulation without switching representations.
- Score: 4.398358164188111
- Abstract: Recent advances in implicit neural representations have made them a popular choice for modeling 3D geometry, achieving impressive results in tasks such as shape representation, reconstruction, and learning priors. However, directly editing these representations poses challenges due to the complex relationship between model weights and surface regions they influence. Among such editing tools, sculpting, which allows users to interactively carve or extrude the surface, is a valuable editing operation to the graphics and modeling community. While traditional mesh-based tools like ZBrush facilitate fast and intuitive edits, a comparable toolkit for sculpting neural SDFs is currently lacking. We introduce a framework that enables interactive surface sculpting edits directly on neural implicit representations. Unlike previous works limited to spot edits, our approach allows users to perform stroke-based modifications on the fly, ensuring intuitive shape manipulation without switching representations. By employing tubular neighborhoods to sample strokes and custom brush profiles, we achieve smooth deformations along user-defined curves, providing precise control over the sculpting process. Our method demonstrates that intricate and versatile edits can be made while preserving the smooth nature of implicit representations.
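The abstract describes sculpting as offsetting a neural SDF inside a tubular neighborhood of a user-drawn stroke, with a brush profile controlling the falloff. The sketch below is an illustrative interpretation of that idea, not the paper's implementation: `smooth_brush`, `sculpted_sdf`, and all parameter names are hypothetical, and a plain analytic SDF stands in for a trained network.

```python
import numpy as np

def smooth_brush(d, radius):
    """Hypothetical brush profile: 1 on the stroke, 0 at the tube boundary."""
    t = np.clip(d / radius, 0.0, 1.0)
    return (1.0 - t**2) ** 2  # quartic bump, smooth at the boundary

def sculpted_sdf(query, base_sdf, stroke_pts, radius=0.1, strength=0.05):
    """Offset a base SDF within a tubular neighborhood of a stroke.

    query      : (N, 3) points at which to evaluate the edited SDF
    base_sdf   : callable mapping (N, 3) points to (N,) signed distances
    stroke_pts : (M, 3) samples along the user-drawn stroke
    strength   : brush depth; positive carves inward, negative extrudes
    """
    # Distance from each query point to the nearest stroke sample
    # approximates distance to the stroke curve (the tube's axis).
    diff = query[:, None, :] - stroke_pts[None, :, :]
    dist = np.linalg.norm(diff, axis=-1).min(axis=1)
    # Shift the level set inside the tube according to the brush profile.
    return base_sdf(query) + strength * smooth_brush(dist, radius)

# Example: carve a dent into a unit sphere along a short equatorial stroke.
sphere = lambda p: np.linalg.norm(p, axis=-1) - 1.0
angles = np.linspace(0.0, 0.5, 16)
stroke = np.stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=-1)
pts = np.array([[1.0, 0.0, 0.0],   # on the stroke: SDF shifted by `strength`
                [0.0, 0.0, 1.0]])  # far from the tube: unchanged
vals = sculpted_sdf(pts, sphere, stroke)
```

In a neural setting, `base_sdf` would be the network's forward pass, and the offset could instead be baked into the weights by fine-tuning against the edited field; the closed-form version above only shows the geometric effect of the tubular brush.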
Related papers
- ShapeFusion: A 3D diffusion model for localized shape editing [37.82690898932135]
We propose an effective diffusion masking training strategy that, by design, facilitates localized manipulation of any shape region.
Compared to the current state-of-the-art, our method leads to more interpretable shape manipulations than methods relying on latent codes.
arXiv Detail & Related papers (2024-03-28T18:50:19Z) - SERF: Fine-Grained Interactive 3D Segmentation and Editing with Radiance Fields [92.14328581392633]
We introduce a novel fine-grained interactive 3D segmentation and editing algorithm with radiance fields, which we refer to as SERF.
Our method entails creating a neural mesh representation by integrating multi-view algorithms with pre-trained 2D models.
Building upon this representation, we introduce a novel surface rendering technique that preserves local information and is robust to deformation.
arXiv Detail & Related papers (2023-12-26T02:50:42Z) - FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z) - Differentiable Blocks World: Qualitative 3D Decomposition by Rendering
Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z) - SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z) - Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z) - 3D Neural Sculpting (3DNS): Editing Neural Signed Distance Functions [34.39282814876276]
In this work, we propose the first method for efficient interactive editing of signed distance functions expressed through neural networks.
Inspired by 3D sculpting software for meshes, we use a brush-based framework that is intuitive and can in the future be used by sculptors and digital artists.
arXiv Detail & Related papers (2022-09-28T10:05:16Z) - NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing [39.71252429542249]
We present a novel mesh-based representation by encoding the neural implicit field with disentangled geometry and texture codes on mesh vertices.
We develop several techniques including learnable sign indicators to magnify spatial distinguishability of mesh-based representation.
Experiments and editing examples on both real and synthetic data demonstrate the superiority of our method on representation quality and editing ability.
arXiv Detail & Related papers (2022-07-25T05:30:50Z) - Neural Parameterization for Dynamic Human Head Editing [26.071370285285465]
We present Neural Parameterization (NeP), a hybrid representation that provides the advantages of both implicit and explicit methods.
NeP is capable of photo-realistic rendering while allowing fine-grained editing of the scene geometry and appearance.
The results show that NeP achieves almost the same level of rendering accuracy while maintaining high editability.
arXiv Detail & Related papers (2022-07-01T05:25:52Z) - DualSDF: Semantic Shape Manipulation using a Two-Level Representation [54.62411904952258]
We propose DualSDF, a representation expressing shapes at two levels of granularity, one capturing fine details and the other representing an abstracted proxy shape.
Our two-level model gives rise to a new shape manipulation technique in which a user can interactively manipulate the coarse proxy shape and see the changes instantly mirrored in the high-resolution shape.
arXiv Detail & Related papers (2020-04-06T17:59:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.