AniFaceDrawing: Anime Portrait Exploration during Your Sketching
- URL: http://arxiv.org/abs/2306.07476v1
- Date: Tue, 13 Jun 2023 00:43:47 GMT
- Title: AniFaceDrawing: Anime Portrait Exploration during Your Sketching
- Authors: Zhengyu Huang, Haoran Xie, Tsukasa Fukusato, Kazunori Miyata
- Abstract summary: This paper focuses on how artificial intelligence can be used to assist users in the creation of anime portraits.
The input is a sequence of incomplete freehand sketches that are gradually refined stroke by stroke.
The output is a sequence of high-quality anime portraits that correspond to the input sketches as guidance.
- Score: 9.933240729830151
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we focus on how artificial intelligence (AI) can be used to
assist users in the creation of anime portraits, that is, converting rough
sketches into anime portraits during their sketching process. The input is a
sequence of incomplete freehand sketches that are gradually refined stroke by
stroke, while the output is a sequence of high-quality anime portraits that
correspond to the input sketches as guidance. Although recent GANs can generate
high-quality images, it is challenging to maintain the quality of images
generated from sketches with a low degree of completion, due to the ill-posed
nature of conditional image generation. Even with the latest sketch-to-image
(S2I) technology, it is still difficult to create high-quality images from
incomplete rough sketches for anime portraits, since the anime style tends to
be more abstract than the realistic style. To address this issue, we
adopt a latent space exploration of StyleGAN with a two-stage training
strategy. We consider the input strokes of a freehand sketch to correspond to
edge information-related attributes in the latent structural code of StyleGAN,
and term the matching between strokes and these attributes stroke-level
disentanglement. In the first stage, we trained an image encoder with the
pre-trained StyleGAN model as a teacher encoder. In the second stage, we
simulated the drawing process of the generated images without any additional
data (labels) and trained the sketch encoder for incomplete progressive
sketches to generate high-quality portrait images with feature alignment to the
disentangled representations in the teacher encoder. We verified the proposed
progressive S2I system with both qualitative and quantitative evaluations and
achieved high-quality anime portraits from incomplete progressive sketches. Our
user study proved its effectiveness in art creation assistance for the anime
style.
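The two-stage strategy described in the abstract can be sketched in a toy form. Everything below is an illustrative assumption rather than the authors' architecture: linear maps stand in for the encoders, a fixed random "teacher" stands in for the stage-one image encoder trained against StyleGAN, and stage two simulates progressively incomplete sketches by masking trailing input dimensions and trains a "sketch encoder" so its features align with the teacher's features of the complete input.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy latent/feature dimension (illustrative, not from the paper)

def simulate_progressive_sketch(full, completion):
    """Zero out the trailing 'strokes' to mimic an unfinished drawing."""
    keep = int(round(completion * full.shape[-1]))
    partial = full.copy()
    partial[..., keep:] = 0.0
    return partial

def alignment_loss(student_feats, teacher_feats):
    """Mean squared feature-alignment loss between student and teacher."""
    return float(np.mean((student_feats - teacher_feats) ** 2))

# Stage 1 stand-in: a fixed "teacher" image encoder (in the paper this is an
# image encoder trained with the pre-trained StyleGAN; here, a random map).
W_teacher = rng.normal(size=(D, D))

# Stage 2: train a linear "sketch encoder" so that features of simulated
# partial sketches align with the teacher's features of the complete input.
W_sketch = rng.normal(size=(D, D))
eval_x = rng.normal(size=(64, D))
eval_partial = simulate_progressive_sketch(eval_x, completion=0.7)
loss_before = alignment_loss(eval_partial @ W_sketch, eval_x @ W_teacher)

lr = 0.05
for step in range(2000):
    x = rng.normal(size=(16, D))                  # stand-in complete inputs
    partial = simulate_progressive_sketch(x, rng.uniform(0.3, 1.0))
    pred = partial @ W_sketch                     # student features
    target = x @ W_teacher                        # teacher features
    grad = 2.0 * partial.T @ (pred - target) / pred.size
    W_sketch -= lr * grad

loss_after = alignment_loss(eval_partial @ W_sketch, eval_x @ W_teacher)
print(loss_before, loss_after)
```

The alignment loss falls but cannot reach zero: masking destroys information, which is exactly why the paper's disentangled latent structure matters for keeping quality high at low sketch completion.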
Related papers
- Stylized Face Sketch Extraction via Generative Prior with Limited Data [6.727433982111717]
StyleSketch is a method for extracting high-resolution stylized sketches from a face image.
Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with 16 pairs of face and the corresponding sketch images.
arXiv Detail & Related papers (2024-03-17T16:25:25Z)
- Bridging the Gap: Fine-to-Coarse Sketch Interpolation Network for High-Quality Animation Sketch Inbetweening [62.33071223229861]
Fine-to-Coarse Sketch Interpolation Network (FC-SIN) is proposed to overcome sketch inbetweening issues.
FC-SIN incorporates multi-level guidance that formulates region-level correspondence, sketch-level correspondence and pixel-level dynamics.
We constructed a large-scale dataset - STD-12K, comprising 30 sketch animation series in diverse artistic styles.
arXiv Detail & Related papers (2023-08-25T09:51:03Z)
- Towards Interactive Image Inpainting via Sketch Refinement [13.34066589008464]
We propose a two-stage image inpainting method termed SketchRefiner.
In the first stage, we propose using a cross-correlation loss function to robustly calibrate and refine the user-provided sketches.
In the second stage, we learn to extract informative features from the abstracted sketches in the feature space and modulate the inpainting process.
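The cross-correlation loss mentioned above is defined in the SketchRefiner paper itself; as a generic illustration, such losses are typically built on zero-mean normalized cross-correlation between two maps, sketched here with hypothetical toy data:

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Zero-mean normalized cross-correlation of two equally sized maps."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

# A toy "sketch" map: a plus sign on a 3x3 grid (illustrative data).
sketch = np.array([[0., 1., 0.],
                   [1., 1., 1.],
                   [0., 1., 0.]])
noisy = sketch + 0.1 * np.arange(9).reshape(3, 3)  # a perturbed user sketch

score_self = ncc(sketch, sketch)   # perfect alignment scores highest
score_noisy = ncc(sketch, noisy)   # perturbation lowers the score
print(score_self, score_noisy)
```

A calibration objective can then penalize `1 - ncc(refined, reference)`, driving the refined sketch toward structural agreement with the reference.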
arXiv Detail & Related papers (2023-06-01T07:15:54Z)
- AgileGAN3D: Few-Shot 3D Portrait Stylization by Augmented Transfer Learning [80.67196184480754]
We propose a novel framework, AgileGAN3D, that can produce 3D artistically appealing portraits with detailed geometry.
New stylization can be obtained with just a few (around 20) unpaired 2D exemplars.
Our pipeline demonstrates strong capability in turning user photos into a diverse range of 3D artistic portraits.
arXiv Detail & Related papers (2023-03-24T23:04:20Z)
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context [112.07988211268612]
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO.
Our dataset comprises 10,000 freehand scene vector sketches with per point space-time information by 100 non-expert individuals.
We study for the first time the problem of the fine-grained image retrieval from freehand scene sketches and sketch captions.
arXiv Detail & Related papers (2022-03-04T03:00:51Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Making Robots Draw A Vivid Portrait In Two Minutes [11.148458054454407]
We present a drawing robot that automatically transfers a facial picture into a vivid portrait and then draws it on paper within two minutes on average.
At the heart of our system is a novel portrait synthesis algorithm based on deep learning.
The whole portrait drawing robotic system is named AiSketcher.
arXiv Detail & Related papers (2020-05-12T03:02:24Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.