Sketch2PQ: Freeform Planar Quadrilateral Mesh Design via a Single Sketch
- URL: http://arxiv.org/abs/2201.09367v1
- Date: Sun, 23 Jan 2022 21:09:59 GMT
- Title: Sketch2PQ: Freeform Planar Quadrilateral Mesh Design via a Single Sketch
- Authors: Zhi Deng, Yang Liu, Hao Pan, Wassim Jabi, Juyong Zhang, Bailin Deng
- Abstract summary: We present a novel sketch-based system to bridge the concept design and digital modeling of freeform roof-like shapes.
Our system allows the user to sketch the surface boundary and contour lines under axonometric projection.
We propose a deep neural network to infer in real time the underlying surface shape along with a dense conjugate direction field.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The freeform architectural modeling process often involves two important
stages: concept design and digital modeling. In the first stage, architects
usually make a quick sketch of the overall 3D shape and the panel layout on
physical or digital paper. In the second stage, a digital 3D model is created
using the sketch as a reference. The digital model needs to incorporate
geometric requirements for its components, such as planarity of panels to
control construction costs, which can make the modeling process more
challenging. In this work, we present a novel sketch-based system to bridge the
concept design and digital modeling of freeform roof-like shapes represented as
planar quadrilateral (PQ) meshes. Our system allows the user to sketch the
surface boundary and contour lines under axonometric projection and supports
the sketching of occluded regions. In addition, the user can sketch feature
lines to provide directional guidance to the PQ mesh layout. Given the 2D
sketch input, we propose a deep neural network to infer in real time the
underlying surface shape along with a dense conjugate direction field, both of
which are used to extract the final PQ mesh. To train and validate our network,
we generate a large synthetic dataset that mimics architect sketching of
freeform quadrilateral patches. The effectiveness and usability of our system
are demonstrated with quantitative and qualitative evaluation as well as user
studies.
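Since panel planarity drives the whole pipeline, it helps to see how it is typically quantified: a quad is planar exactly when its two diagonals intersect, so the distance between the diagonal lines, normalized by their mean length, is a standard planarity measure for PQ meshes. Below is a minimal NumPy sketch of this standard measure; it is an illustration, not code from the paper.

```python
import numpy as np

def quad_planarity(v0, v1, v2, v3):
    """Planarity of the quad (v0, v1, v2, v3): distance between its two
    diagonal lines, normalized by their mean length. A planar quad has
    intersecting diagonals, so the measure is 0 exactly for planar faces."""
    d1 = v2 - v0                      # diagonal v0 -> v2
    d2 = v3 - v1                      # diagonal v1 -> v3
    n = np.cross(d1, d2)              # common perpendicular direction
    n_len = np.linalg.norm(n)
    if n_len < 1e-12:                 # parallel diagonals: degenerate quad
        return 0.0
    dist = abs(np.dot(v1 - v0, n)) / n_len   # skew-line distance
    return dist / (0.5 * (np.linalg.norm(d1) + np.linalg.norm(d2)))

# A quad lifted slightly out of plane at one corner:
quad = [np.array(p, dtype=float)
        for p in [(0, 0, 0), (1, 0, 0), (1, 1, 0.05), (0, 1, 0)]]
print(quad_planarity(*quad))  # ~0.018; 0.0 would be perfectly planar
```

In a PQ mesh optimization, a measure of this kind is driven toward zero on every face.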
Related papers
- Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
arXiv Detail & Related papers (2023-12-07T05:04:33Z)
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling
SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM).
It fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
arXiv Detail & Related papers (2023-07-03T07:41:07Z)
- SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS also supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z)
- Sketch2Cloth: Sketch-based 3D Garment Generation with Unsigned Distance Fields
We propose Sketch2Cloth, a sketch-based 3D garment generation system that uses unsigned distance fields estimated from the user's sketch input.
Sketch2Cloth first estimates the unsigned distance function of the target 3D model from the sketch input, and then extracts a mesh from the estimated field with Marching Cubes (see the sketch after this entry).
It also provides a model editing function for modifying the generated mesh.
arXiv Detail & Related papers (2023-03-01T01:45:28Z)
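As a concrete picture of the field-to-mesh step described in the Sketch2Cloth entry, here is a minimal scikit-image sketch, assuming an analytic sphere UDF as a stand-in for the network-predicted field; the small positive iso-level eps is this illustration's choice, since an unsigned field has no sign change to contour at zero.

```python
import numpy as np
from skimage import measure  # scikit-image

# Stand-in for the network-predicted unsigned distance field (UDF):
# the UDF of a sphere of radius 0.5, sampled on a 64^3 grid over [-1, 1]^3.
n = 64
xs = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
udf = np.abs(np.sqrt(X**2 + Y**2 + Z**2) - 0.5)

# An unsigned field never changes sign, so instead of the zero level set
# we contour a thin shell at a small positive iso-level (about one voxel).
eps = 2.0 / n
verts, faces, normals, values = measure.marching_cubes(udf, level=eps)
# Note: verts are in voxel coordinates, and the result is a closed
# double-layered shell around the true surface; UDF-specific meshing
# methods exist to recover a single open sheet (e.g., for garments).
print(verts.shape, faces.shape)
```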
- ExtrudeNet: Unsupervised Inverse Sketch-and-Extrude for Shape Parsing
This paper studies the problem of recovering shapes, given as point clouds, by inverse sketch-and-extrude.
We present ExtrudeNet, an unsupervised end-to-end network for discovering sketch-and-extrude primitives from point clouds (a toy extrusion follows this entry).
arXiv Detail & Related papers (2022-09-30T17:58:11Z)
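For context, the sketch-and-extrude primitive itself is simple to state: sweep a closed 2D profile along an axis. The toy NumPy construction below builds the side wall of a straight extrusion; it is an illustration of the primitive, not ExtrudeNet's learned representation.

```python
import numpy as np

def extrude_profile(profile_2d, height):
    """Straight extrusion of a closed 2D profile along +z.

    profile_2d: (n, 2) vertices of a simple polygon, in order.
    Returns stacked bottom/top vertex rings and one side quad per
    profile edge (the planar caps are omitted: triangulating an
    arbitrary polygon is a separate step)."""
    p = np.asarray(profile_2d, dtype=float)
    n = len(p)
    bottom = np.column_stack([p, np.zeros(n)])       # z = 0 ring
    top = np.column_stack([p, np.full(n, height)])   # z = height ring
    vertices = np.vstack([bottom, top])
    # Side wall: one quad (i, i+1, top i+1, top i) per profile edge.
    quads = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, quads

# Extrude a unit-square profile to height 2:
verts, quads = extrude_profile([(0, 0), (1, 0), (1, 1), (0, 1)], 2.0)
print(verts.shape, len(quads))  # (8, 3) 4
```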
- TreeSketchNet: From Sketch To 3D Tree Parameters Generation
3D modeling of non-linear objects from stylized sketches is a challenge even for experts in computer graphics.
We propose a broker system that mediates between the modeler and the 3D modelling software.
arXiv Detail & Related papers (2022-07-25T16:08:05Z)
- Interactive 3D Character Modeling from 2D Orthogonal Drawings with Annotations
We propose an interactive 3D character modeling approach from orthographic drawings based on 2D-space annotations.
The system builds partial correspondences between the input drawings and generates a base mesh with sweeping splines according to edge information in 2D images.
By repeating the 2D-space operations (i.e., revising and modifying the annotations), users can design a desired character model.
arXiv Detail & Related papers (2022-01-27T02:34:32Z)
- SimpModeling: Sketching Implicit Field to Guide Mesh Modeling for 3D Animalmorphic Head Design
We propose SimpModeling, a novel sketch-based system for helping users, especially amateur users, easily model 3D animalmorphic heads.
We use advanced implicit-based shape inference methods, which can handle the domain gap between freehand sketches and the synthetic ones used for training.
We also contribute a dataset of high-quality 3D animal heads, manually created by artists.
arXiv Detail & Related papers (2021-08-05T12:17:36Z)
- Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches
We use an encoder/decoder architecture for sketch-to-mesh translation.
We show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z)