Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation
- URL: http://arxiv.org/abs/2309.00216v1
- Date: Fri, 1 Sep 2023 02:27:05 GMT
- Title: Human-Inspired Facial Sketch Synthesis with Dynamic Adaptation
- Authors: Fei Gao, Yifan Zhu, Chang Jiang, Nannan Wang
- Abstract summary: Facial sketch synthesis (FSS) aims to generate a vivid sketch portrait from a given facial photo.
In this paper, we propose a novel Human-Inspired Dynamic Adaptation (HIDA) method.
We show that HIDA can generate high-quality sketches in multiple styles, and significantly outperforms previous methods.
- Score: 25.293899668984018
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Facial sketch synthesis (FSS) aims to generate a vivid sketch portrait from a
given facial photo. Existing FSS methods rely solely on 2D representations of
facial semantics or appearance. However, professional human artists usually use
outlines or shading to convey 3D geometry, so facial 3D geometry (e.g., a depth
map) is extremely important for FSS. Moreover, different artists may use diverse
drawing techniques and create sketches in multiple styles, yet the style is
globally consistent within a sketch. Inspired by these observations, in this paper
we propose a novel Human-Inspired Dynamic Adaptation (HIDA) method. Specifically,
we propose to dynamically modulate neuron activations based on a joint
consideration of both facial 3D geometry and 2D appearance, as well as globally
consistent style control. In addition, we use deformable convolutions at coarse
scales to align deep features, generating abstract and distinct outlines.
Experiments show that HIDA can generate high-quality sketches in multiple
styles, and significantly outperforms previous methods across a wide range of
challenging faces. HIDA also allows precise style control of the synthesized
sketch, and generalizes well to natural scenes and other artistic styles. Our
code and results are available online at:
https://github.com/AiArt-HDU/HIDA.
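The core idea in the abstract — dynamically modulating neuron activations from a joint code of facial 3D depth, 2D appearance, and a globally consistent style vector — can be illustrated with a minimal FiLM-style feature-modulation sketch. This is a hedged NumPy illustration, not the authors' implementation; all function names, shapes, and the linear predictor are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamic_modulation(features, depth_code, appearance_code, style_code, W, b):
    """Sketch of dynamic adaptation: predict per-channel scale (gamma)
    and shift (beta) from a joint condition vector, then modulate the
    feature map channel-wise (FiLM-style)."""
    # Joint consideration of 3D geometry, 2D appearance, and global style.
    cond = np.concatenate([depth_code, appearance_code, style_code])
    params = W @ cond + b                 # hypothetical linear predictor
    c = features.shape[0]
    gamma, beta = params[:c], params[c:]  # per-channel scale and shift
    return gamma[:, None, None] * features + beta[:, None, None]

# Toy shapes: C=8 feature channels on a 4x4 map; each condition code has length 3.
C, H, Wd = 8, 4, 4
feats = rng.standard_normal((C, H, Wd))
depth, app, style = (rng.standard_normal(3) for _ in range(3))
W = rng.standard_normal((2 * C, 9))   # predicts 2*C parameters from a 9-dim code
b = np.zeros(2 * C)
out = dynamic_modulation(feats, depth, app, style, W, b)
print(out.shape)  # (8, 4, 4)
```

Because the style code enters the same joint condition vector at every layer, the predicted modulation stays globally consistent across the sketch, matching the observation the abstract draws from human artists.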
Related papers
- S2TD-Face: Reconstruct a Detailed 3D Face with Controllable Texture from a Single Sketch [29.068915907911432]
3D textured face reconstruction from sketches is applicable in many scenarios, such as animation, 3D avatars, artistic design, and missing-person search.
This paper proposes a novel method for reconstructing controllable textured and detailed 3D faces from sketches, named S2TD-Face.
arXiv Detail & Related papers (2024-08-02T12:16:07Z)
- Sketch2Human: Deep Human Generation with Disentangled Geometry and Appearance Control [27.23770287587972]
This work presents Sketch2Human, the first system for controllable full-body human image generation guided by a semantic sketch.
We present a sketch encoder trained with a large synthetic dataset sampled from StyleGAN-Human's latent space.
Although our method is trained with synthetic data, it can handle hand-drawn sketches as well.
arXiv Detail & Related papers (2024-04-24T14:24:57Z)
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling [69.28254439393298]
SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM), which fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
arXiv Detail & Related papers (2023-07-03T07:41:07Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating Stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
arXiv Detail & Related papers (2023-02-14T06:28:42Z)
- 3D Cartoon Face Generation with Controllable Expressions from a Single GAN Image [142.047662926209]
We generate 3D cartoon face shapes from single 2D GAN generated human faces.
We manipulate latent codes to generate images with different poses and lighting, such that we can reconstruct the 3D cartoon face shapes.
arXiv Detail & Related papers (2022-07-29T01:06:21Z)
- DeepPortraitDrawing: Generating Human Body Images from Freehand Sketches [75.4318318890065]
We present DeepPortraitDrawing, a framework for converting roughly drawn sketches into realistic human body images.
To encode complicated body shapes under various poses, we take a local-to-global approach.
Our method produces more realistic images than the state-of-the-art sketch-to-image synthesis techniques.
arXiv Detail & Related papers (2022-05-04T14:02:45Z)
- SimpModeling: Sketching Implicit Field to Guide Mesh Modeling for 3D Animalmorphic Head Design [40.821865912127635]
We propose SimpModeling, a novel sketch-based system for helping users, especially amateur users, easily model 3D animalmorphic heads.
We use advanced implicit shape-inference methods, which can robustly handle the domain gap between freehand sketches and the synthetic ones used for training.
We also contribute a dataset of high-quality 3D animal heads, manually created by artists.
arXiv Detail & Related papers (2021-08-05T12:17:36Z)
- Exemplar-Based 3D Portrait Stylization [23.585334925548064]
We present the first framework for one-shot 3D portrait style transfer.
It can generate 3D face models with both the geometry exaggerated and the texture stylized.
Our method achieves robustly good results on different artistic styles and outperforms existing methods.
arXiv Detail & Related papers (2021-04-29T17:59:54Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.