Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch
to Portrait Generation
- URL: http://arxiv.org/abs/2302.06857v2
- Date: Sun, 1 Oct 2023 08:08:02 GMT
- Title: Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch
to Portrait Generation
- Authors: Yasheng Sun, Qianyi Wu, Hang Zhou, Kaisiyuan Wang, Tianshu Hu,
Chen-Chieh Liao, Shio Miyafuji, Ziwei Liu, Hideki Koike
- Abstract summary: Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
- Score: 51.64832538714455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Creating photo-realistic versions of people's sketched portraits is useful
for various entertainment purposes. Existing studies only generate portraits in
the 2D plane with fixed views, making the results less vivid. In this paper, we
present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the
possibility of creating Stereoscopic 3D-aware portraits from simple contour
sketches by involving 3D generative models. Our key insight is to design
sketch-aware constraints that can fully exploit the prior knowledge of a
tri-plane-based 3D-aware generative model. Specifically, our designed
region-aware volume rendering strategy and global consistency constraint
further enhance detail correspondences during sketch encoding. Moreover, to
facilitate use by novice users, we propose a Contour-to-Sketch
module with vector quantized representations, so that easily drawn contours can
directly guide the generation of 3D portraits. Extensive comparisons show that
our method generates high-quality results that match the sketch. Our usability
study verifies that our system is greatly preferred by users.
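As a rough illustration of the pipeline the abstract describes, the minimal PyTorch sketch below wires an easily drawn contour through a vector-quantized Contour-to-Sketch step, a sketch encoder, and a stand-in tri-plane generator. Every module name, layer size, and the nearest-neighbour quantization shown here is an illustrative assumption rather than the authors' implementation; the region-aware volume rendering strategy and global consistency constraint are training-time losses that are not reproduced here.

```python
# Hypothetical, heavily simplified SSSP-style pipeline (not the authors' code).
import torch
import torch.nn as nn


class ContourToSketch(nn.Module):
    """Lifts a rough contour image to sketch features via a small VQ codebook."""

    def __init__(self, codebook_size=512, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, dim, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=4, stride=4),
        )
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, contour):                       # contour: (B, 1, 256, 256)
        z = self.encoder(contour)                     # (B, dim, 16, 16)
        b, d, h, w = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, d)
        # Nearest-neighbour codebook lookup (straight-through trick omitted).
        idx = torch.cdist(flat, self.codebook.weight).argmin(dim=1)
        return self.codebook(idx).view(b, h, w, d).permute(0, 3, 1, 2)


class SketchEncoder(nn.Module):
    """Projects quantized sketch features into the generator's latent space."""

    def __init__(self, dim=64, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(dim * 16 * 16, latent_dim))

    def forward(self, sketch_feat):                   # (B, dim, 16, 16)
        return self.net(sketch_feat)                  # (B, latent_dim)


class TriPlaneGenerator(nn.Module):
    """Stand-in for a pretrained tri-plane 3D-aware generator (EG3D-style).

    A real model would sample the three feature planes along camera rays and
    integrate densities; the placeholder below only shapes the tensors so the
    pipeline runs end to end.
    """

    def __init__(self, latent_dim=512, plane_dim=8, res=32):
        super().__init__()
        self.plane_dim, self.res = plane_dim, res
        self.to_planes = nn.Linear(latent_dim, 3 * plane_dim * res * res)

    def forward(self, w, camera_pose):
        planes = self.to_planes(w).view(-1, 3, self.plane_dim, self.res, self.res)
        return planes.mean(dim=(1, 2))                # placeholder "render": (B, res, res)


if __name__ == "__main__":
    contour = torch.rand(1, 1, 256, 256)              # user-drawn contour
    pose = torch.eye(4).unsqueeze(0)                  # camera extrinsics for the target view
    feats = ContourToSketch()(contour)
    w = SketchEncoder()(feats)
    image = TriPlaneGenerator()(w, pose)
    print(image.shape)                                # torch.Size([1, 32, 32])
```

In the described system the tri-plane generator would be a frozen, pretrained 3D-aware GAN, with only the sketch-conditioning modules trained against it; changing the camera pose at inference is what yields the "stereoscopic" free-view portraits.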
Related papers
- Diff3DS: Generating View-Consistent 3D Sketch via Differentiable Curve Rendering [17.918603435615335]
3D sketches are widely used for visually representing the 3D shape and structure of objects or scenes.
We propose Diff3DS, a novel differentiable framework for generating view-consistent 3D sketches.
Our framework bridges the domains of 3D sketches and customized images, achieving end-to-end optimization of 3D sketches.
arXiv Detail & Related papers (2024-05-24T07:48:14Z)
- Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes a novel generation paradigm Sketch3D to generate realistic 3D assets with shape aligned with the input sketch and color matching the textual description.
Three strategies are designed to optimize the 3D Gaussians: structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss, and sketch similarity optimization with a CLIP-based geometric similarity loss.
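A minimal illustration of how the three terms just listed might combine into one objective (the weights, the precomputed-CLIP-embedding interface, and the externally supplied structural term below are assumptions, not the paper's code):

```python
# Hypothetical weighted combination of Sketch3D-style losses.
import torch
import torch.nn.functional as F


def combined_objective(rendered, target_color,
                       rendered_clip, sketch_clip,
                       structural_term,
                       w_color=1.0, w_sketch=0.5, w_struct=0.1):
    """Sums a color MSE term, a CLIP-space sketch-similarity term, and a
    structural term assumed to come from the distribution-transfer step."""
    color_loss = F.mse_loss(rendered, target_color)
    # Sketch similarity as (1 - cosine similarity) between CLIP embeddings.
    sketch_loss = 1.0 - F.cosine_similarity(rendered_clip, sketch_clip, dim=-1).mean()
    return w_color * color_loss + w_sketch * sketch_loss + w_struct * structural_term


if __name__ == "__main__":
    rendered = torch.rand(2, 3, 64, 64)               # rendered Gaussian-splatting views
    target = torch.rand(2, 3, 64, 64)                 # color targets from the text description
    r_clip, s_clip = torch.randn(2, 512), torch.randn(2, 512)
    print(combined_objective(rendered, target, r_clip, s_clip,
                             structural_term=torch.tensor(0.0)))
```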
arXiv Detail & Related papers (2024-04-02T11:03:24Z)
- Control3D: Towards Controllable Text-to-3D Generation [107.81136630589263]
We present Control3D, a text-to-3D generation framework conditioned on an additional hand-drawn sketch.
A 2D conditioned diffusion model (ControlNet) is remoulded to guide the learning of a 3D scene parameterized as a NeRF.
We exploit a pre-trained differentiable photo-to-sketch model to directly estimate the sketch of the image rendered from the synthetic 3D scene.
arXiv Detail & Related papers (2023-11-09T15:50:32Z)
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling [69.28254439393298]
SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM),
which fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
arXiv Detail & Related papers (2023-07-03T07:41:07Z)
- Freestyle 3D-Aware Portrait Synthesis Based on Compositional Generative Priors [12.663585627797863]
We propose a novel text-driven 3D-aware portrait synthesis framework.
Specifically, for a given portrait style prompt, we first composite two generative priors: a 3D-aware GAN generator and a text-guided image editor.
Then we map the style domain of this set to our proposed 3D latent feature generator and obtain a 3D representation containing the given style information.
arXiv Detail & Related papers (2023-06-27T12:23:04Z)
- SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z)
- SingleSketch2Mesh: Generating 3D Mesh model from Sketch [1.6973426830397942]
Current methods to generate 3D models from sketches are either manual or tightly coupled with 3D modeling platforms.
We propose a novel AI-based ensemble approach, SingleSketch2Mesh, for generating 3D models from hand-drawn sketches.
arXiv Detail & Related papers (2022-03-07T06:30:36Z)
- Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for sketch-to-mesh translation.
We will show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z)
- 3D Shape Reconstruction from Free-Hand Sketches [42.15888734492648]
Despite great progress achieved in 3D reconstruction from distortion-free line drawings, little effort has been made to reconstruct 3D shapes from free-hand sketches.
We aim to enhance the power of sketches in 3D-related applications such as interactive design and VR/AR games.
A major challenge for free-hand sketch 3D reconstruction comes from insufficient training data and the diversity of free-hand sketches.
arXiv Detail & Related papers (2020-06-17T07:43:10Z)