Interactive 3D Character Modeling from 2D Orthogonal Drawings with
Annotations
- URL: http://arxiv.org/abs/2201.11284v1
- Date: Thu, 27 Jan 2022 02:34:32 GMT
- Title: Interactive 3D Character Modeling from 2D Orthogonal Drawings with
Annotations
- Authors: Zhengyu Huang, Haoran Xie, Tsukasa Fukusato
- Abstract summary: We propose an interactive 3D character modeling approach from orthographic drawings based on 2D-space annotations.
The system builds partial correspondences between the input drawings and generates a base mesh with sweeping splines according to edge information in 2D images.
By repeating the 2D-space operations (i.e., revising and modifying the annotations), users can design a desired character model.
- Score: 9.83187539596669
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an interactive 3D character modeling approach from orthographic
drawings (e.g., front and side views) based on 2D-space annotations. First, the
system builds partial correspondences between the input drawings and generates
a base mesh with sweeping splines according to edge information in 2D images.
Next, users annotates the desired parts on the input drawings (e.g., the eyes
and mouth) by using two type of strokes, called addition and erosion, and the
system re-optimizes the shape of the base mesh. By repeating the 2D-space
operations (i.e., revising and modifying the annotations), users can design a
desired character model. To validate the efficiency and quality of our system,
we compared the generated results against those of state-of-the-art methods.
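The base-mesh construction described above can be illustrated with a minimal sketch (hypothetical, in Python/NumPy; not the authors' implementation): sample cross-sections along a centerline traced from the drawings, scale each section by a radius read off the silhouette width in the orthographic views, and connect the sections into a tube mesh.

```python
import numpy as np

def sweep_base_mesh(centerline, radii, n_sides=12):
    """Sweep circular cross-sections along a polyline centerline.

    centerline : (N, 3) points along a character part's axis, e.g. traced
                 from edge information in the front/side drawings.
    radii      : (N,) per-section radius, e.g. half the silhouette width
                 measured in the orthographic views.
    Returns (vertices, faces) of a quad tube mesh.
    """
    centerline = np.asarray(centerline, dtype=float)
    radii = np.asarray(radii, dtype=float)
    n = len(centerline)
    angles = np.linspace(0.0, 2.0 * np.pi, n_sides, endpoint=False)
    circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (S, 2)

    vertices = []
    for i in range(n):
        # Local frame: tangent along the centerline plus two normals.
        t = centerline[min(i + 1, n - 1)] - centerline[max(i - 1, 0)]
        t /= np.linalg.norm(t)
        u = np.cross(t, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-8:          # tangent parallel to z axis
            u = np.cross(t, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(t, u)
        ring = centerline[i] + radii[i] * (circle[:, :1] * u + circle[:, 1:] * v)
        vertices.append(ring)
    vertices = np.concatenate(vertices)        # (N * S, 3)

    # Quad faces stitching consecutive rings together.
    faces = []
    for i in range(n - 1):
        for j in range(n_sides):
            a = i * n_sides + j
            b = i * n_sides + (j + 1) % n_sides
            faces.append([a, b, b + n_sides, a + n_sides])
    return vertices, np.array(faces)

# Example: a straight "limb" whose radius tapers, as if read off a drawing.
center = np.stack([np.zeros(5), np.zeros(5), np.linspace(0.0, 1.0, 5)], axis=1)
verts, faces = sweep_base_mesh(center, radii=np.linspace(0.3, 0.1, 5))
print(verts.shape, faces.shape)  # (60, 3) (48, 4)
```

The annotation loop would then adjust the per-section radii (addition strokes growing them, erosion strokes shrinking them) and re-run the sweep; the paper's actual re-optimization of the mesh shape is more involved than this sketch.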
Related papers
- Alignment of 3D woodblock geometrical models and 2D orthographic projection image [0.0]
This paper proposes a unified image processing algorithm to align 3D woodblock geometrical models with their 2D orthographic projection images.
The method includes determining the plane of the 3D character model, establishing a transformation matrix, and creating a parallel-projected depth map.
Experimental results highlight the importance of structure-based comparisons to optimize alignment for large-scale Han-Nom character datasets.
arXiv Detail & Related papers (2024-11-08T12:30:41Z) - Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes [118.406721663244]
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
arXiv Detail & Related papers (2023-12-07T05:04:33Z) - PlankAssembly: Robust 3D Reconstruction from Three Orthographic Views
with Learnt Shape Programs [24.09764733540401]
We develop a new method to automatically convert 2D line drawings from three orthographic views into 3D CAD models.
We leverage the attention mechanism in a Transformer-based sequence generation model to learn flexible mappings between the input and output.
Our method significantly outperforms existing ones when the inputs are noisy or incomplete.
arXiv Detail & Related papers (2023-08-10T17:59:34Z) - Deep-MDS Framework for Recovering the 3D Shape of 2D Landmarks from a
Single Image [8.368476827165114]
This paper proposes a framework to recover the 3D shape of 2D landmarks on a human face, in a single input image.
A deep neural network learns the pairwise dissimilarities among the 2D landmarks, which are then used by a non-metric multidimensional scaling (NMDS) approach.
arXiv Detail & Related papers (2022-10-27T06:20:10Z) - ISS: Image as Stepping Stone for Text-Guided 3D Shape Generation [91.37036638939622]
This paper presents a new framework called Image as Stepping Stone (ISS) for the task by introducing a 2D image as a stepping stone to connect the two modalities.
Our key contribution is a two-stage feature-space-alignment approach that maps CLIP features to shapes.
We formulate a text-guided shape stylization module to dress up the output shapes with novel textures.
arXiv Detail & Related papers (2022-09-09T06:54:21Z) - MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D
Segmentation [91.6658845016214]
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks.
We render a 3D shape from multiple views, and set up a dense correspondence learning task within the contrastive learning framework.
As a result, the learned 2D representations are view-invariant and geometrically consistent.
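The dense correspondence objective can be illustrated with a minimal InfoNCE-style sketch (hypothetical, in Python/NumPy; MvDeCor's actual rendering and loss pipeline are more involved): pixel features from two rendered views that project to the same 3D surface point form positive pairs, while all other pixels in the batch serve as negatives.

```python
import numpy as np

def info_nce(feat_a, feat_b, temperature=0.07):
    """InfoNCE loss over corresponding pixel features from two views.

    feat_a, feat_b : (N, D) feature vectors; row i of each view is assumed
                     to project to the same 3D surface point (a positive
                     pair), and the other N - 1 rows act as negatives.
    Returns the mean cross-entropy of matching each row of view A to its
    corresponding row in view B.
    """
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                 # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positives on the diagonal

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
aligned = anchor + 0.01 * rng.normal(size=(8, 16))   # nearly matching views
shuffled = rng.normal(size=(8, 16))                  # unrelated features
print(info_nce(anchor, aligned) < info_nce(anchor, shuffled))
```

Minimizing this loss pulls corresponding pixel features together across views while pushing non-corresponding ones apart, which is what makes the learned 2D representations view-invariant and geometrically consistent.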
arXiv Detail & Related papers (2022-08-18T00:48:15Z) - Cross-Modal 3D Shape Generation and Manipulation [62.50628361920725]
We propose a generic multi-modal generative model that couples the 2D modalities and implicit 3D representations through shared latent spaces.
We evaluate our framework on two representative 2D modalities of grayscale line sketches and rendered color images.
arXiv Detail & Related papers (2022-07-24T19:22:57Z) - RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects as a 2D self-occlusion map.
We show that our representation map not only allows us to enhance the image quality but also to model temporally coherent complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z) - Neural Face Identification in a 2D Wireframe Projection of a Manifold
Object [8.697806983058035]
In computer-aided design (CAD) systems, 2D line drawings are commonly used to illustrate 3D object designs.
In this paper, we approach the classical problem of face identification from a novel data-driven point of view.
We adopt a variant of the popular Transformer model to predict the edges associated with the same face in a natural order.
arXiv Detail & Related papers (2022-03-08T17:47:51Z) - Neural Strokes: Stylized Line Drawing of 3D Shapes [36.88356061690497]
This paper introduces a model for producing stylized line drawings from 3D shapes.
The model takes a 3D shape and a viewpoint as input, and outputs a drawing with textured strokes.
arXiv Detail & Related papers (2021-10-08T05:40:57Z) - Joint Deep Multi-Graph Matching and 3D Geometry Learning from
Inhomogeneous 2D Image Collections [57.60094385551773]
We propose a trainable framework for learning a deformable 3D geometry model from inhomogeneous image collections.
In addition, we obtain the underlying 3D geometry of the objects depicted in the 2D images.
arXiv Detail & Related papers (2021-03-31T17:25:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.