SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling
- URL: http://arxiv.org/abs/2306.06088v2
- Date: Wed, 21 Feb 2024 13:35:34 GMT
- Title: SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling
- Authors: Alexandre Binninger, Amir Hertz, Olga Sorkine-Hornung, Daniel
Cohen-Or, Raja Giryes
- Abstract summary: We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encoding.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
- Score: 124.3266213819203
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present SENS, a novel method for generating and editing 3D models from
hand-drawn sketches, including those of abstract nature. Our method allows
users to quickly and easily sketch a shape, and then maps the sketch into the
latent space of a part-aware neural implicit shape architecture. SENS analyzes
the sketch and encodes its parts into ViT patch encoding, subsequently feeding
them into a transformer decoder that converts them to shape embeddings suitable
for editing 3D neural implicit shapes. SENS provides intuitive sketch-based
generation and editing, and also succeeds in capturing the intent of the user's
sketch to generate a variety of novel and expressive 3D shapes, even from
abstract and imprecise sketches. Additionally, SENS supports refinement via
part reconstruction, allowing for nuanced adjustments and artifact removal. It
also offers part-based modeling capabilities, enabling the combination of
features from multiple sketches to create more complex and customized 3D
shapes. We demonstrate the effectiveness of our model compared to the
state-of-the-art using objective metric evaluation criteria and a user study,
both indicating strong performance on sketches with a medium level of
abstraction. Furthermore, we showcase our method's intuitive sketch-based shape
editing capabilities, and validate it through a usability study.
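Below is a minimal, hypothetical sketch (PyTorch-style) of the pipeline described above: a ViT-style patch encoder turns the sketch into patch tokens, and a transformer decoder maps learned part queries plus those tokens into per-part shape embeddings for a part-aware neural implicit decoder. All module names, sizes, and the number of parts are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch-to-part-embedding pipeline; sizes and names are assumptions.
import torch
import torch.nn as nn

class SketchToPartEmbeddings(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, n_parts=8, emb_dim=128):
        super().__init__()
        # ViT-style patch embedding: non-overlapping patches projected to tokens
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4)
        # One learned query per shape part
        self.part_queries = nn.Parameter(torch.randn(1, n_parts, dim))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4)
        # Maps each decoded part token to a shape embedding
        self.to_shape_emb = nn.Linear(dim, emb_dim)

    def forward(self, sketch):  # sketch: (B, 1, H, W) greyscale drawing
        tokens = self.patchify(sketch).flatten(2).transpose(1, 2) + self.pos
        memory = self.encoder(tokens)                 # ViT patch encoding of the sketch
        parts = self.decoder(self.part_queries.expand(sketch.size(0), -1, -1), memory)
        return self.to_shape_emb(parts)               # (B, n_parts, emb_dim)

# The resulting embeddings would condition a pretrained part-aware neural implicit
# shape decoder (e.g. an occupancy/SDF network); that decoder is not shown here.
model = SketchToPartEmbeddings()
print(model(torch.rand(2, 1, 224, 224)).shape)        # torch.Size([2, 8, 128])
```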
Related papers
- Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes a novel generation paradigm, Sketch3D, to generate realistic 3D assets whose shape aligns with the input sketch and whose color matches the textual description.
Three strategies are designed to optimize 3D Gaussians: structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch-similarity optimization with a CLIP-based geometric similarity loss (a hypothetical combined-objective sketch appears after this list).
arXiv Detail & Related papers (2024-04-02T11:03:24Z)
- Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes [118.406721663244]
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
arXiv Detail & Related papers (2023-12-07T05:04:33Z)
- Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation [13.47191379827792]
We investigate how large pre-trained models can be used to generate 3D shapes from sketches.
We find that conditioning a 3D generative model on the features of synthetic renderings during training enables us to effectively generate 3D shapes from sketches at inference time.
This suggests that features from large pre-trained vision models carry semantic signals that are resilient to domain shifts.
arXiv Detail & Related papers (2023-07-08T00:45:01Z)
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
arXiv Detail & Related papers (2023-02-14T06:28:42Z)
- Sketch2Model: View-Aware 3D Modeling from Single Free-Hand Sketches [4.781615891172263]
We investigate the problem of generating 3D meshes from single free-hand sketches, aiming at fast 3D modeling for novice users.
We address the importance of viewpoint specification for overcoming ambiguities, and propose a novel view-aware generation approach.
arXiv Detail & Related papers (2021-05-14T06:27:48Z)
- Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for sketch-to-mesh translation.
We will show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
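As referenced in the Sketch3D entry above, the following is a loose, hypothetical illustration of how a three-term objective over 3D Gaussians (structural, color, and sketch-similarity terms) could be combined. The weights, function names, and the simple stand-ins for the distribution-transfer and CLIP-based terms are assumptions for illustration, not the Sketch3D authors' code.

```python
# Hypothetical combination of the three Sketch3D-style loss terms; the
# stand-in implementations and weights below are illustrative assumptions.
import torch
import torch.nn.functional as F

def structural_loss(gaussian_means, target_means):
    # Stand-in for the distribution-transfer term: a simple Chamfer-style
    # nearest-neighbour distance between current and target point distributions.
    d = torch.cdist(gaussian_means, target_means)      # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def color_loss(rendered_rgb, target_rgb):
    # The entry describes a straightforward MSE loss on colours.
    return F.mse_loss(rendered_rgb, target_rgb)

def sketch_similarity_loss(render_feat, sketch_feat):
    # Stand-in for the CLIP-based geometric similarity term: cosine distance
    # between image features of the rendered view and the input sketch.
    return 1.0 - F.cosine_similarity(render_feat, sketch_feat, dim=-1).mean()

def total_loss(means, tgt_means, rgb, tgt_rgb, rfeat, sfeat,
               w_struct=1.0, w_color=1.0, w_sketch=0.1):  # weights are made up
    return (w_struct * structural_loss(means, tgt_means)
            + w_color * color_loss(rgb, tgt_rgb)
            + w_sketch * sketch_similarity_loss(rfeat, sfeat))

# Toy call with random tensors, just to show the shapes this sketch assumes.
print(total_loss(torch.rand(1024, 3), torch.rand(2048, 3),
                 torch.rand(3, 64, 64), torch.rand(3, 64, 64),
                 torch.rand(1, 512), torch.rand(1, 512)))
```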
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.