Quality Guided Sketch-to-Photo Image Synthesis
- URL: http://arxiv.org/abs/2005.02133v1
- Date: Mon, 20 Apr 2020 16:00:01 GMT
- Title: Quality Guided Sketch-to-Photo Image Synthesis
- Authors: Uche Osahor, Hadi Kazemi, Ali Dabouei, Nasser Nasrabadi
- Abstract summary: We propose a generative adversarial network that synthesizes a single sketch into multiple synthetic images with unique attributes like hair color, sex, etc.
Our approach is aimed at improving the visual appeal of the synthesised images while incorporating multiple attribute assignment to the generator without compromising the identity of the synthesised image.
- Score: 12.617078020344618
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Facial sketches drawn by artists are widely used for visual identification
applications and mostly by law enforcement agencies, but the quality of these
sketches depends on the ability of the artist to clearly replicate all the key
facial features that could aid in capturing the true identity of a subject.
Recent works have attempted to synthesize these sketches into plausible visual
images to improve visual recognition and identification. However, synthesizing
photo-realistic images from sketches proves to be an even more challenging
task, especially for sensitive applications such as suspect identification. In
this work, we propose a novel approach that adopts a generative adversarial
network that synthesizes a single sketch into multiple synthetic images with
unique attributes like hair color, sex, etc. We incorporate a hybrid
discriminator which performs attribute classification of multiple target
attributes, a quality guided encoder that minimizes the perceptual
dissimilarity of the latent-space embeddings of the synthesized and real images
at different layers in the network, and an identity preserving network that
maintains the identity of the synthesised image throughout the training
process. Our approach is aimed at improving the visual appeal of the
synthesised images while incorporating multiple attribute assignment to the
generator without compromising the identity of the synthesised image. We
synthesised sketches using XDOG filter for the CelebA, WVU Multi-modal and
CelebA-HQ datasets and from an auxiliary generator trained on sketches from
CUHK, IIT-D and FERET datasets. Our results are impressive compared to current
state of the art.
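As a concrete illustration of the sketch-generation step mentioned above, the snippet below is a minimal XDoG (extended difference-of-Gaussians) filter in the standard Winnemoeller et al. formulation; the parameter values are illustrative assumptions, not the settings used in the paper.
```python
# Minimal XDoG sketch generation (standard formulation); parameters are
# illustrative assumptions, not the paper's settings.
import numpy as np
from scipy.ndimage import gaussian_filter

def xdog(gray, sigma=0.8, k=1.6, p=20.0, eps=0.01, phi=10.0):
    """Return an XDoG 'sketch' of a grayscale image with values in [0, 1]."""
    g1 = gaussian_filter(gray, sigma)        # fine-scale Gaussian blur
    g2 = gaussian_filter(gray, sigma * k)    # coarse-scale Gaussian blur
    sharpened = (1.0 + p) * g1 - p * g2      # extended difference of Gaussians
    # Soft thresholding: white above eps, smooth tanh ramp below it.
    out = np.where(sharpened >= eps, 1.0, 1.0 + np.tanh(phi * (sharpened - eps)))
    return np.clip(out, 0.0, 1.0)
```
The abstract's quality guided encoder and identity preserving network can be read as auxiliary loss terms added to the GAN objective: a multi-layer feature-matching loss between synthesized and real images, and an embedding distance from a face-recognition network. The sketch below renders that reading in PyTorch; the class and function names, layer choices, and weights are assumptions rather than the authors' implementation.
```python
# Hedged sketch of the auxiliary losses suggested by the abstract; names and
# hyperparameters are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityGuidedLoss(nn.Module):
    """Sum of L1 distances between encoder activations at chosen layers."""
    def __init__(self, encoder_layers, weights=None):
        super().__init__()
        self.layers = nn.ModuleList(encoder_layers)        # frozen feature extractor stages
        self.weights = weights or [1.0] * len(encoder_layers)
        for p in self.layers.parameters():
            p.requires_grad = False

    def forward(self, fake, real):
        loss, f, r = 0.0, fake, real
        for w, layer in zip(self.weights, self.layers):
            f, r = layer(f), layer(r)                      # embeddings at successive layers
            loss = loss + w * F.l1_loss(f, r)              # perceptual dissimilarity term
        return loss

def identity_loss(id_net, fake, real):
    """Cosine distance between face embeddings of synthesized and real images."""
    return 1.0 - F.cosine_similarity(id_net(fake), id_net(real)).mean()
```
In training, such terms would typically be weighted and summed with the adversarial and attribute-classification losses; the weights used in the paper are not reproduced here.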
Related papers
- Personalized Face Inpainting with Diffusion Models by Parallel Visual Attention [55.33017432880408]
This paper proposes the use of Parallel Visual Attention (PVA) in conjunction with diffusion models to improve inpainting results.
We train the added attention modules and identity encoder on CelebAHQ-IDI, a dataset proposed for identity-preserving face inpainting.
Experiments demonstrate that PVA attains unparalleled identity resemblance in both face inpainting and face inpainting with language guidance tasks.
arXiv Detail & Related papers (2023-12-06T15:39:03Z)
- FaceStudio: Put Your Face Everywhere in Seconds [23.381791316305332]
Identity-preserving image synthesis seeks to maintain a subject's identity while adding a personalized, stylistic touch.
Traditional methods, such as Textual Inversion and DreamBooth, have made strides in custom image creation.
Our research introduces a novel approach to identity-preserving synthesis, with a particular focus on human images.
arXiv Detail & Related papers (2023-12-05T11:02:45Z)
- SynFace: Face Recognition with Synthetic Data [83.15838126703719]
We devise the SynFace with identity mixup (IM) and domain mixup (DM) to mitigate the performance gap.
We also perform a systematic empirical analysis on synthetic face images to provide some insights on how to effectively utilize synthetic data for face recognition.
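For context, identity mixup and domain mixup build on the standard mixup recipe of convex-combining paired samples and labels; the sketch below shows only that generic recipe, and how SynFace applies it to identity coefficients and to the synthetic-to-real domain gap is not reproduced here.
```python
# Generic mixup (convex combination of two samples and their labels), the
# formulation that identity mixup (IM) and domain mixup (DM) build on.
# SynFace-specific details are intentionally not reproduced here.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Return a convex combination of two samples and their (one-hot) labels."""
    lam = np.random.beta(alpha, alpha)   # mixing coefficient sampled per pair
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```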
arXiv Detail & Related papers (2021-08-18T03:41:54Z)
- Multimodal Face Synthesis from Visual Attributes [85.87796260802223]
We propose a novel generative adversarial network that simultaneously synthesizes identity preserving multimodal face images.
Multimodal stretch-in modules are introduced in the discriminator, which discriminates between real and fake images.
arXiv Detail & Related papers (2021-04-09T13:47:23Z)
- Identity-Aware CycleGAN for Face Photo-Sketch Synthesis and Recognition [61.87842307164351]
We first propose an Identity-Aware CycleGAN (IACycleGAN) model that applies a new perceptual loss to supervise the image generation network.
It improves CycleGAN on photo-sketch synthesis by paying more attention to the synthesis of key facial regions, such as eyes and nose.
We develop a mutual optimization procedure between the synthesis model and the recognition model, in which IACycleGAN iteratively synthesizes better images.
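A minimal sketch of an identity-oriented perceptual loss of this kind follows, assuming a frozen face-recognition feature extractor named face_net; the emphasis on key facial regions and the mutual-optimization schedule are described in the paper but not reproduced here.
```python
# Minimal identity-oriented perceptual loss for photo-sketch synthesis,
# assuming `face_net` is a frozen, pretrained face-recognition network.
import torch
import torch.nn.functional as F

def identity_perceptual_loss(face_net, generated, target):
    """L2 distance between face-recognition features of generated and target images."""
    with torch.no_grad():
        target_feat = face_net(target)       # features of the ground-truth image
    return F.mse_loss(face_net(generated), target_feat)
```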
arXiv Detail & Related papers (2021-03-30T01:30:08Z)
- An Assessment of GANs for Identity-related Applications [3.088045900462408]
We apply a state-of-the-art biometric network to various datasets of synthetic images and perform a thorough assessment of their identity-related characteristics.
We conclude that GANs can indeed be used to generate new, imagined identities, meaning that applications such as anonymisation of image sets and augmentation of training datasets with distractor images are viable.
arXiv Detail & Related papers (2020-12-18T23:41:13Z)
- Self-Supervised Sketch-to-Image Synthesis [21.40315235087551]
We study the exemplar-based sketch-to-image (s2i) synthesis task in a self-supervised learning manner.
We first propose an unsupervised method to efficiently synthesize line-sketches for general RGB-only datasets.
We then present a self-supervised Auto-Encoder (AE) to decouple the content/style features from sketches and RGB-images, and synthesize images that are both content-faithful to the sketches and style-consistent to the RGB-images.
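A content/style-decoupling auto-encoder along these lines might look like the minimal sketch below; the encoder/decoder sizes and the way the global style code is broadcast over the spatial content code are illustrative assumptions, not the paper's architecture.
```python
# Minimal content/style auto-encoder sketch: one encoder for sketch content,
# one for RGB style, and a decoder that combines the two codes. Sizes and
# layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class ContentStyleAE(nn.Module):
    def __init__(self, content_dim=256, style_dim=64):
        super().__init__()
        self.content_enc = nn.Sequential(              # spatial content code from a 1-channel sketch
            nn.Conv2d(1, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, content_dim, 4, 2, 1))
        self.style_enc = nn.Sequential(                # global style code from a 3-channel RGB exemplar
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, style_dim, 4, 2, 1),
            nn.AdaptiveAvgPool2d(1))
        self.dec = nn.Sequential(                      # decode the concatenated codes back to an RGB image
            nn.ConvTranspose2d(content_dim + style_dim, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())

    def forward(self, sketch, rgb):
        c = self.content_enc(sketch)                   # (B, content_dim, H/4, W/4)
        s = self.style_enc(rgb)                        # (B, style_dim, 1, 1)
        s = s.expand(-1, -1, c.size(2), c.size(3))     # broadcast style over spatial positions
        return self.dec(torch.cat([c, s], dim=1))      # content-faithful, style-consistent output
```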
arXiv Detail & Related papers (2020-12-16T22:14:06Z)
- Cross-Modal Hierarchical Modelling for Fine-Grained Sketch Based Image Retrieval [147.24102408745247]
We study a further trait of sketches that has been overlooked to date, namely that they are hierarchical in terms of their levels of detail.
In this paper, we design a novel network that is capable of cultivating sketch-specific hierarchies and exploiting them to match sketch with photo at corresponding hierarchical levels.
arXiv Detail & Related papers (2020-07-29T20:50:25Z)
- Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-based Image Retrieval [55.29233996427243]
Low-shot sketch-based image retrieval is an emerging task in computer vision.
In this paper, we address any-shot, i.e. zero-shot and few-shot, sketch-based image retrieval (SBIR) tasks.
For solving these tasks, we propose a semantically aligned cycle-consistent generative adversarial network (SEM-PCYC).
Our results demonstrate a significant boost in any-shot performance over the state-of-the-art on the extended version of the Sketchy, TU-Berlin and QuickDraw datasets.
arXiv Detail & Related papers (2020-06-20T22:43:53Z)