Sketch-based Normal Map Generation with Geometric Sampling
- URL: http://arxiv.org/abs/2104.11554v1
- Date: Fri, 23 Apr 2021 12:30:22 GMT
- Title: Sketch-based Normal Map Generation with Geometric Sampling
- Authors: Yi He, Haoran Xie, Chao Zhang, Xi Yang, Kazunori Miyata
- Abstract summary: A designer may benefit from the auto-generation of high-quality, accurate normal maps from freehand sketches in 3D content creation.
This paper proposes a deep generative model for generating normal maps from a user's sketch with geometric sampling.
- Score: 14.323902770651289
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A normal map is an important and efficient way to represent complex 3D models.
A designer may benefit from the auto-generation of high-quality, accurate
normal maps from freehand sketches in 3D content creation. This paper proposes
a deep generative model for generating normal maps from a user's sketch with
geometric sampling. Our generative model is based on a Conditional Generative
Adversarial Network (cGAN) with curvature-sensitive point sampling of the
conditional masks. This sampling process helps eliminate ambiguity in the
network input. In addition, we adopt a U-Net-structured discriminator to help
train the generator. Experiments verify that the proposed framework generates
more accurate normal maps.
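The curvature-sensitive sampling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name is hypothetical, and the discrete Laplacian magnitude is used here only as a cheap stand-in for a true curvature measure, so that points concentrate on high-curvature regions of the conditional mask.

```python
import numpy as np

def curvature_sensitive_sample(mask, n_points, rng=None):
    """Sample pixel coordinates from a grayscale mask, weighting
    high-curvature regions more heavily (Laplacian magnitude proxy)."""
    rng = np.random.default_rng(rng)
    m = mask.astype(np.float64)
    # Discrete 4-neighbour Laplacian as a cheap curvature proxy.
    lap = (np.roll(m, 1, 0) + np.roll(m, -1, 0)
           + np.roll(m, 1, 1) + np.roll(m, -1, 1) - 4.0 * m)
    weights = np.abs(lap).ravel()
    if weights.sum() == 0:
        weights = np.ones_like(weights)  # flat mask: fall back to uniform
    probs = weights / weights.sum()
    idx = rng.choice(m.size, size=n_points, replace=False, p=probs)
    rows, cols = np.unravel_index(idx, m.shape)
    return np.stack([rows, cols], axis=1)  # (n_points, 2) pixel coords
```

On a binary mask, the Laplacian is nonzero only along the silhouette, so the sampled points cluster on the shape boundary, which is where a sketch carries the most geometric information.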
Related papers
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach that predicts high-quality assets with 512k Gaussians from 21 input images in only 11 GB of GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z) - MeshXL: Neural Coordinate Field for Generative 3D Foundation Models [51.1972329762843]
We present a family of generative pre-trained auto-regressive models that address 3D mesh generation with modern large language model approaches.
MeshXL is able to generate high-quality 3D meshes, and can also serve as foundation models for various down-stream applications.
arXiv Detail & Related papers (2024-05-31T14:35:35Z) - PivotMesh: Generic 3D Mesh Generation via Pivot Vertices Guidance [66.40153183581894]
We introduce PivotMesh, a generic and scalable mesh generation framework.
PivotMesh makes an initial attempt to extend the native mesh generation to large-scale datasets.
We show that PivotMesh can generate compact and sharp 3D meshes across various categories.
arXiv Detail & Related papers (2024-05-27T07:13:13Z) - Using Intermediate Forward Iterates for Intermediate Generator Optimization [14.987013151525368]
Intermediate Generator Optimization (IGO) can be incorporated into any standard autoencoder pipeline for the generative task.
We show applications of IGO on two dense predictive tasks, namely image extrapolation and point cloud denoising.
arXiv Detail & Related papers (2023-02-05T08:46:15Z) - SketchSampler: Sketch-based 3D Reconstruction via View-dependent Depth Sampling [75.957103837167]
Reconstructing a 3D shape based on a single sketch image is challenging due to the large domain gap between a sparse, irregular sketch and a regular, dense 3D shape.
Existing works try to employ the global feature extracted from the sketch to directly predict the 3D coordinates, but they usually lose fine details and produce results that are not faithful to the input sketch.
arXiv Detail & Related papers (2022-08-14T16:37:51Z) - 3DILG: Irregular Latent Grids for 3D Generative Modeling [44.16807313707137]
We propose a new representation for encoding 3D shapes as neural fields.
The representation is designed to be compatible with the transformer architecture and to benefit both shape reconstruction and shape generation.
arXiv Detail & Related papers (2022-05-27T11:29:52Z) - Gaussian map predictions for 3D surface feature localisation and counting [5.634825161148484]
We propose to employ a Gaussian map representation to estimate precise location and count of 3D surface features.
We apply this method to the 3D spheroidal class of objects which can be projected into 2D shape representation.
We demonstrate a practical use of this technique for counting strawberry achenes which is used as a fruit quality measure in phenotyping applications.
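The counting idea above can be illustrated with a short sketch (not the paper's code; the function name and parameters are illustrative): render one unit-integral 2D Gaussian per feature centre, and the sum over the resulting density map estimates the feature count.

```python
import numpy as np

def gaussian_map(centers, shape, sigma=2.0):
    """Render a density map with one unit-integral 2D Gaussian per
    feature centre; the map's sum then estimates the feature count."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    density = np.zeros(shape, dtype=np.float64)
    norm = 1.0 / (2.0 * np.pi * sigma ** 2)  # unit-integral normalisation
    for cy, cx in centers:
        d2 = (ys - cy) ** 2 + (xs - cx) ** 2
        density += norm * np.exp(-d2 / (2.0 * sigma ** 2))
    return density
```

In practice a network would be trained to regress such a map from an input image, and counting reduces to integrating the predicted density.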
arXiv Detail & Related papers (2021-12-07T14:43:14Z) - Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z) - PUGeo-Net: A Geometry-centric Network for 3D Point Cloud Upsampling [103.09504572409449]
We propose PUGeo-Net, a novel deep neural network-based method that generates uniform, dense point clouds.
Thanks to its geometry-centric nature, PUGeo-Net works well for both CAD models with sharp features and scanned models with rich geometric details.
arXiv Detail & Related papers (2020-02-24T14:13:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.