SketchHairSalon: Deep Sketch-based Hair Image Synthesis
- URL: http://arxiv.org/abs/2109.07874v1
- Date: Thu, 16 Sep 2021 11:14:01 GMT
- Title: SketchHairSalon: Deep Sketch-based Hair Image Synthesis
- Authors: Chufeng Xiao, Deng Yu, Xiaoguang Han, Youyi Zheng, Hongbo Fu
- Abstract summary: We present a framework for generating realistic hair images directly from freehand sketches depicting desired hair structure and appearance.
Based on the trained networks and the two sketch completion strategies, we build an intuitive interface to allow even novice users to design visually pleasing hair images.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent deep generative models allow real-time generation of hair images from
sketch inputs. Existing solutions often require a user-provided binary mask to
specify a target hair shape. This not only costs users extra labor but also
fails to capture complicated hair boundaries. Those solutions usually encode
hair structures via orientation maps, which, however, are not very effective at
encoding complex structures. We observe that colored hair sketches already
implicitly define target hair shapes as well as hair appearance, and are more
flexible than orientation maps for depicting hair structures. Based on these
observations, we present SketchHairSalon, a two-stage framework for generating
realistic hair images directly from freehand sketches depicting desired hair
structure and appearance. At the first stage, we train a network to predict a
hair matte from an input hair sketch, with an optional set of non-hair strokes.
At the second stage, another network is trained to synthesize the structure and
appearance of hair images from the input sketch and the generated matte. To
make the networks in the two stages aware of the long-range dependencies among strokes,
we apply self-attention modules to them. To train these networks, we present a
new dataset containing thousands of annotated hair sketch-image pairs and
corresponding hair mattes. Two efficient methods for sketch completion are
proposed to automatically complete repetitive braided parts and hair strokes,
respectively, thus reducing the workload of users. Based on the trained
networks and the two sketch completion strategies, we build an intuitive
interface to allow even novice users to design visually pleasing hair images
exhibiting various hair structures and appearances via freehand sketches. The
qualitative and quantitative evaluations show the advantages of the proposed
system over the existing or alternative solutions.
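To make the two-stage design concrete, below is a minimal, illustrative PyTorch sketch of the pipeline described in the abstract. It is not the authors' implementation: the module names (SelfAttention2d, SketchToMatte, SketchToImage), channel sizes, and layer choices are all assumptions, and the real networks are substantially deeper. It shows stage 1 mapping a colored sketch (plus optional non-hair strokes) to a soft matte, stage 2 mapping the sketch and matte to an RGB image, and a SAGAN-style self-attention block that gives each stage long-range context across strokes.

```python
# Minimal sketch of the two-stage idea, NOT the authors' released code.
# All module names and sizes below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over spatial positions, so distant
    strokes in the sketch can influence each other (long-range context).
    In practice this would be applied at a downsampled feature resolution."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        v = self.value(x).flatten(2)                   # (b, c, hw)
        attn = F.softmax(q @ k, dim=-1)                # (b, hw, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.InstanceNorm2d(c_out), nn.ReLU(inplace=True))


class SketchToMatte(nn.Module):
    """Stage 1: colored hair sketch (+ optional non-hair strokes) -> soft matte."""

    def __init__(self, in_ch=4):  # RGB sketch + 1 non-hair-stroke channel
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 64), conv_block(64, 64),
                                 SelfAttention2d(64),
                                 nn.Conv2d(64, 1, 1), nn.Sigmoid())

    def forward(self, sketch):
        return self.net(sketch)  # (b, 1, h, w) alpha matte in [0, 1]


class SketchToImage(nn.Module):
    """Stage 2: sketch + predicted matte -> realistic hair image."""

    def __init__(self, in_ch=4):  # RGB sketch + 1 matte channel
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, 64), conv_block(64, 64),
                                 SelfAttention2d(64),
                                 nn.Conv2d(64, 3, 1), nn.Tanh())

    def forward(self, sketch_rgb, matte):
        return self.net(torch.cat([sketch_rgb, matte], dim=1))


if __name__ == "__main__":
    sketch = torch.randn(1, 3, 64, 64)     # freehand colored hair sketch
    non_hair = torch.zeros(1, 1, 64, 64)   # optional non-hair strokes
    matte = SketchToMatte()(torch.cat([sketch, non_hair], dim=1))
    image = SketchToImage()(sketch, matte)
    print(matte.shape, image.shape)        # (1,1,64,64) (1,3,64,64)
```

Feeding the stage-1 matte into stage 2 is the key structural point: the matte replaces the user-provided binary mask that prior systems required, so the hair shape comes from the sketch itself.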
Related papers
- HairStep: Transfer Synthetic to Real Using Strand and Depth Maps for Single-View 3D Hair Modeling (arXiv, 2023-03-05)
  We tackle the challenging problem of learning-based single-view 3D hair modeling.
  We first propose a novel intermediate representation, termed HairStep, which consists of a strand map and a depth map.
  HairStep not only provides sufficient information for accurate 3D hair modeling, but can also be feasibly inferred from real images.
- HairFIT: Pose-Invariant Hairstyle Transfer via Flow-based Hair Alignment and Semantic-Region-Aware Inpainting (arXiv, 2022-06-17)
  We propose HairFIT, a novel framework for pose-invariant hairstyle transfer.
  Our model consists of two stages: 1) flow-based hair alignment and 2) hair synthesis.
  Our SIM estimator divides the occluded regions in the source image into different semantic regions to reflect their distinct features during inpainting.
- Hair Color Digitization through Imaging and Deep Inverse Graphics (arXiv, 2022-02-08)
  We introduce a novel method for hair color digitization based on inverse graphics and deep neural networks.
  Our proposed pipeline captures the color appearance of a physical hair sample and renders synthetic images of hair with a similar appearance.
  Our method combines a controlled imaging device, a path-tracing renderer, and an inverse graphics model based on self-supervised machine learning.
- HairCLIP: Design Your Hair by Text and Reference Image (arXiv, 2021-12-09)
  This paper proposes a new hair editing interaction mode, which enables manipulating hair attributes individually or jointly.
  We encode the image and text conditions in a shared embedding space and propose a unified hair editing framework.
  With carefully designed network structures and loss functions, our framework can perform high-quality hair editing.
- MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing (arXiv, 2020-10-30)
  MichiGAN is a novel conditional image generation method for interactive portrait hair manipulation.
  We provide user control over every major hair visual factor, including shape, structure, appearance, and background.
  We also build an interactive portrait hair editing system that enables straightforward manipulation of hair by projecting intuitive, high-level user inputs.
- DeepFacePencil: Creating Face Images from Freehand Sketches (arXiv, 2020-08-31)
  Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
  We propose DeepFacePencil, an effective tool that can generate photo-realistic face images from hand-drawn sketches.
- Sketch-Guided Scenery Image Outpainting (arXiv, 2020-06-17)
  We propose an encoder-decoder network to conduct sketch-guided outpainting.
  First, we apply a holistic alignment module to make the synthesized part similar to the real one from a global view.
  Second, we reversely produce sketches from the synthesized part and encourage them to be consistent with the ground-truth ones.
- Intuitive, Interactive Beard and Hair Synthesis with Generative Models (arXiv, 2020-04-15)
  We present an interactive approach to synthesizing realistic variations in facial hair in images.
  We employ a neural network pipeline that synthesizes realistic and detailed images of facial hair directly in the target image in under one second.
  We show compelling interactive editing results with a prototype user interface that allows novice users to progressively refine the generated image to match their desired hairstyle.