SingleSketch2Mesh : Generating 3D Mesh model from Sketch
- URL: http://arxiv.org/abs/2203.03157v2
- Date: Thu, 10 Mar 2022 07:15:13 GMT
- Title: SingleSketch2Mesh : Generating 3D Mesh model from Sketch
- Authors: Nitish Bhardwaj, Dhornala Bharadwaj, Alpana Dubey
- Abstract summary: Current methods to generate 3D models from sketches are either manual or tightly coupled with 3D modeling platforms.
We propose a novel AI-based ensemble approach, SingleSketch2Mesh, for generating 3D models from hand-drawn sketches.
- Score: 1.6973426830397942
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Sketching is an important activity in any design process. Designers and
stakeholders share their ideas through hand-drawn sketches. These sketches are
further used to create 3D models. Current methods to generate 3D models from
sketches are either manual or tightly coupled with 3D modeling platforms.
Therefore, they require users to have experience sketching on such
platforms. Moreover, most of the existing approaches are based on geometric
manipulation and thus cannot be generalized. We propose a novel AI-based
ensemble approach, SingleSketch2Mesh, for generating 3D models from hand-drawn
sketches. Our approach is based on Generative Networks and an Encoder-Decoder
Architecture to generate a 3D mesh model from a hand-drawn sketch. We evaluate
our solution against existing solutions. Our approach outperforms existing
approaches on both quantitative and qualitative evaluation criteria.
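The abstract only names the building blocks (generative networks plus an encoder-decoder); as a rough, hypothetical illustration of that kind of pipeline, the PyTorch sketch below encodes a sketch image into a latent code and decodes it into vertex offsets of a fixed template mesh. The layer sizes, the template mesh, and the offset-based decoding are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical encoder-decoder sketch-to-mesh pipeline (illustrative only).
import torch
import torch.nn as nn

class SketchEncoder(nn.Module):
    """CNN that maps a 1-channel sketch image to a latent code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, sketch: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(sketch).flatten(1))

class MeshDecoder(nn.Module):
    """MLP that predicts per-vertex offsets for a fixed template mesh."""
    def __init__(self, template_vertices: torch.Tensor, latent_dim: int = 256):
        super().__init__()
        self.register_buffer("template", template_vertices)  # (V, 3)
        v = template_vertices.shape[0]
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, v * 3),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        offsets = self.mlp(z).view(-1, self.template.shape[0], 3)
        return self.template.unsqueeze(0) + offsets  # deformed vertices

# Usage: a random 642-vertex "template" stands in for a real sphere template.
encoder, decoder = SketchEncoder(), MeshDecoder(torch.randn(642, 3))
sketch = torch.randn(1, 1, 128, 128)        # batch of one sketch image
vertices = decoder(encoder(sketch))          # (1, 642, 3) mesh vertices
```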
Related papers
- Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594] (arXiv 2024-04-02)
This paper proposes a novel generation paradigm Sketch3D to generate realistic 3D assets with shape aligned with the input sketch and color matching the textual description.
Three strategies are designed to optimize 3D Gaussians, i.e., structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss and sketch similarity optimization with a CLIP-based geometric similarity loss.
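A hedged illustration only: the summary lists three loss terms, so a plausible training objective would combine them as a weighted sum. The helper arguments, weights, and embedding inputs below are placeholders, not the actual Sketch3D formulation.

```python
# Hypothetical composition of the three Sketch3D-style objectives; the real
# distribution-transfer term, rendering, and weights are not specified here.
import torch
import torch.nn.functional as F

def combined_loss(rendered_rgb, target_rgb,
                  rendered_emb, sketch_emb,
                  structural_term,
                  w_struct=1.0, w_color=1.0, w_clip=0.1):
    # Color optimization: plain MSE between renders and the color targets.
    color_loss = F.mse_loss(rendered_rgb, target_rgb)
    # Sketch similarity: 1 - cosine similarity between CLIP-style embeddings
    # of the rendered geometry and of the input sketch.
    clip_loss = 1.0 - F.cosine_similarity(rendered_emb, sketch_emb, dim=-1).mean()
    # Structural optimization: stand-in for the distribution-transfer term.
    return w_struct * structural_term + w_color * color_loss + w_clip * clip_loss
```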
- Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes [118.406721663244] (arXiv 2023-12-07)
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
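For intuition only, cross-modal correspondence of this kind can be approximated by embedding sketch-part edgemaps and projected 3D part regions with a shared image encoder and pairing them by cosine similarity; the function below is a hypothetical stand-in, not the paper's mechanism.

```python
# Illustrative-only part matching between sketch-part embeddings and
# projected 3D-part-region embeddings (greedy cosine-similarity pairing).
import torch
import torch.nn.functional as F

def match_parts(sketch_part_emb: torch.Tensor,   # (P, D) sketch-part embeddings
                shape_part_emb: torch.Tensor     # (Q, D) 3D-part-region embeddings
                ) -> list[tuple[int, int]]:
    sim = F.cosine_similarity(sketch_part_emb.unsqueeze(1),
                              shape_part_emb.unsqueeze(0), dim=-1)  # (P, Q)
    return [(p, int(sim[p].argmax())) for p in range(sim.shape[0])]
```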
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling [69.28254439393298] (arXiv 2023-07-03)
SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM), which fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
- 3D VR Sketch Guided 3D Shape Prototyping and Exploration [108.6809158245037] (arXiv 2023-06-19)
We propose a 3D shape generation network that takes a 3D VR sketch as a condition.
We assume that sketches are created by novices without art training.
Our method creates multiple 3D shapes that align with the original sketch's structure.
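A minimal sketch of the conditional-generation idea described above, assuming a latent-variable generator: several random latents are decoded under one shared VR-sketch condition, yielding several shape variants. The layer sizes and the shape-code output are assumptions.

```python
# Hypothetical conditional generator: one sketch condition, many samples.
import torch
import torch.nn as nn

class ConditionalShapeGenerator(nn.Module):
    def __init__(self, cond_dim=256, z_dim=64, shape_dim=512):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Linear(cond_dim + z_dim, 512), nn.ReLU(),
            nn.Linear(512, shape_dim),
        )

    def forward(self, sketch_feat, n_samples=5):
        # Several random latents share one VR-sketch condition, giving
        # several shape codes that keep the sketch's overall structure.
        z = torch.randn(n_samples, self.z_dim, device=sketch_feat.device)
        cond = sketch_feat.expand(n_samples, -1)
        return self.net(torch.cat([cond, z], dim=-1))

gen = ConditionalShapeGenerator()
variants = gen(torch.randn(1, 256))   # five shape codes for one sketch feature
```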
- Sketch2Cloth: Sketch-based 3D Garment Generation with Unsigned Distance Fields [12.013968508918634] (arXiv 2023-03-01)
We propose Sketch2Cloth, a sketch-based 3D garment generation system using the unsigned distance fields from the user's sketch input.
Sketch2Cloth first estimates the unsigned distance function of the target 3D model from the sketch input, and extracts the mesh from the estimated field with Marching Cubes.
We also provide a model editing function to modify the generated mesh.
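The UDF-to-mesh step can be illustrated as follows, assuming the field is sampled on a regular grid: because an unsigned field never changes sign, Marching Cubes is run at a small positive level, which extracts a thin shell around the surface. The synthetic sphere UDF here is for demonstration only, not the paper's network output.

```python
# Sample an unsigned distance field on a grid and extract a mesh with
# Marching Cubes at a small positive level (illustrative UDF, not Sketch2Cloth's).
import numpy as np
from skimage import measure

def udf_grid_to_mesh(udf_values: np.ndarray, level: float = 0.01,
                     voxel_size: float = 1.0):
    """udf_values: (N, N, N) non-negative distances sampled on a grid."""
    verts, faces, normals, _ = measure.marching_cubes(
        udf_values, level=level, spacing=(voxel_size,) * 3)
    return verts, faces, normals

# Synthetic UDF of a sphere of radius 20 voxels inside a 64^3 grid.
dist = np.linalg.norm(np.stack(np.meshgrid(*([np.arange(64) - 32] * 3),
                                           indexing="ij")), axis=0)
udf = np.abs(dist - 20.0)
verts, faces, _ = udf_grid_to_mesh(udf, level=0.5)
```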
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation [51.64832538714455] (arXiv 2023-02-14)
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
- SimpModeling: Sketching Implicit Field to Guide Mesh Modeling for 3D Animalmorphic Head Design [40.821865912127635] (arXiv 2021-08-05)
We propose SimpModeling, a novel sketch-based system for helping users, especially amateur users, easily model 3D animalmorphic heads.
We use advanced implicit-based shape inference methods, which have a strong ability to handle the domain gap between freehand sketches and the synthetic ones used for training.
We also contribute a dataset of high-quality 3D animal heads, manually created by artists.
- Sketch2Model: View-Aware 3D Modeling from Single Free-Hand Sketches [4.781615891172263] (arXiv 2021-05-14)
We investigate the problem of generating 3D meshes from single free-hand sketches, aiming at fast 3D modeling for novice users.
We address the importance of viewpoint specification for overcoming ambiguities, and propose a novel view-aware generation approach.
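As a hypothetical illustration of view-aware conditioning, the decoder below receives the sketch embedding together with an explicit (azimuth, elevation) viewpoint, the kind of disambiguation signal the summary refers to; the sizes and the vertex output are assumptions, not Sketch2Model's architecture.

```python
# Hypothetical view-aware decoder: sketch features plus an explicit viewpoint.
import torch
import torch.nn as nn

class ViewAwareDecoder(nn.Module):
    def __init__(self, feat_dim=256, n_vertices=642):
        super().__init__()
        self.n_vertices = n_vertices
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 2, 512), nn.ReLU(),   # +2 for (azimuth, elevation)
            nn.Linear(512, n_vertices * 3),
        )

    def forward(self, sketch_feat, azimuth, elevation):
        view = torch.stack([azimuth, elevation], dim=-1)    # explicit viewpoint
        out = self.net(torch.cat([sketch_feat, view], dim=-1))
        return out.view(-1, self.n_vertices, 3)             # predicted vertices

dec = ViewAwareDecoder()
verts = dec(torch.randn(1, 256), torch.tensor([0.5]), torch.tensor([0.1]))
```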
- Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039] (arXiv 2021-04-01)
We use an encoder/decoder architecture for sketch-to-mesh translation.
We will show that this approach is easy to deploy, robust to style changes, and effective.
- 3D Shape Reconstruction from Free-Hand Sketches [42.15888734492648] (arXiv 2020-06-17)
Despite great progress achieved in 3D reconstruction from distortion-free line drawings, little effort has been made to reconstruct 3D shapes from free-hand sketches.
We aim to enhance the power of sketches in 3D-related applications such as interactive design and VR/AR games.
A major challenge for free-hand sketch 3D reconstruction comes from the insufficient training data and free-hand sketch diversity.
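The summary points at scarce and highly diverse free-hand sketch data; one common mitigation (an assumption here, not necessarily what the paper does) is aggressive geometric augmentation of the available sketches, e.g. with torchvision transforms as sketched below.

```python
# Example geometric augmentation pipeline for sketch images (assumed
# mitigation for data scarcity, not the paper's own recipe).
import torchvision.transforms as T

sketch_augment = T.Compose([
    T.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1),
                   shear=5, fill=255),                 # jitter stroke placement
    T.RandomPerspective(distortion_scale=0.2, p=0.5, fill=255),
    T.ToTensor(),
])
# `sketch_augment` is applied to each PIL sketch image before training.
```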