Semantics-Preserving Sketch Embedding for Face Generation
- URL: http://arxiv.org/abs/2211.13015v1
- Date: Wed, 23 Nov 2022 15:14:49 GMT
- Title: Semantics-Preserving Sketch Embedding for Face Generation
- Authors: Binxin Yang, Xuejin Chen, Chaoqun Wang, Chi Zhang, Zihan Chen and
Xiaoyan Sun
- Abstract summary: We introduce a novel W-W+ encoder architecture to take advantage of the high expressive power of W+ space.
We also introduce an explicit intermediate representation for sketch semantic embedding.
A novel sketch semantic interpretation approach is designed to automatically extract semantics from vectorized sketches.
- Score: 26.15479367792076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With recent advances in image-to-image translation tasks, remarkable progress
has been witnessed in generating face images from sketches. However, existing
methods frequently fail to generate images with details that are semantically
and geometrically consistent with the input sketch, especially when various
decoration strokes are drawn. To address this issue, we introduce a novel W-W+
encoder architecture to take advantage of the high expressive power of W+ space
and semantic controllability of W space. We introduce an explicit intermediate
representation for sketch semantic embedding. With a semantic feature matching
loss for effective semantic supervision, our sketch embedding precisely conveys
the semantics in the input sketches to the synthesized images. Moreover, a
novel sketch semantic interpretation approach is designed to automatically
extract semantics from vectorized sketches. We conduct extensive experiments on
both synthesized sketches and hand-drawn sketches, and the results demonstrate
the superiority of our method over existing approaches in both semantics
preservation and generalization ability.
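The abstract above names two concrete ingredients: a W-W+ encoder that combines a shared, semantically controllable W code with more expressive per-layer W+ codes, and a semantic feature matching loss for supervising the sketch embedding. The snippet below is a minimal, illustrative sketch of how such components could be wired up in PyTorch; it is not the authors' implementation, and all module names, dimensions (e.g. 18 W+ layers as in a 1024x1024 StyleGAN2 generator), and the hypothetical `parser.extract_features` hook are assumptions made for illustration only.

```python
# Illustrative sketch only -- not the paper's code. Assumes a StyleGAN-like
# generator consuming W (B, 512) or W+ (B, 18, 512) latents, and a frozen face
# parsing network exposing a hypothetical `extract_features` method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WWPlusEncoder(nn.Module):
    """Maps a sketch embedding to a base W code plus per-layer W+ residuals."""

    def __init__(self, in_dim=512, w_dim=512, num_ws=18):
        super().__init__()
        self.num_ws = num_ws
        # Shared W code: semantically controllable part of the embedding.
        self.to_w = nn.Sequential(
            nn.Linear(in_dim, w_dim), nn.LeakyReLU(0.2), nn.Linear(w_dim, w_dim)
        )
        # Per-layer offsets: the extra expressive power of W+ space.
        self.to_deltas = nn.Linear(in_dim, num_ws * w_dim)

    def forward(self, sketch_feat):
        w = self.to_w(sketch_feat)                                    # (B, 512)
        deltas = self.to_deltas(sketch_feat).view(-1, self.num_ws, w.shape[-1])
        w_plus = w.unsqueeze(1) + deltas                              # (B, 18, 512)
        return w, w_plus


def semantic_feature_matching_loss(parser, generated, reference, layers=(0, 1, 2)):
    """L1 distance between intermediate features of a frozen parsing network.

    `parser.extract_features` is a hypothetical hook returning a list of
    feature maps; the reference branch is detached so only the generator
    receives gradients.
    """
    with torch.no_grad():
        ref_feats = parser.extract_features(reference)
    gen_feats = parser.extract_features(generated)
    return sum(F.l1_loss(gen_feats[i], ref_feats[i]) for i in layers)
```

In this reading, the shared W code would carry the coarse, editable semantics while the W+ offsets recover sketch-specific detail, mirroring the trade-off between controllability and expressiveness described in the abstract.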
Related papers
- Stylized Face Sketch Extraction via Generative Prior with Limited Data [6.727433982111717]
StyleSketch is a method for extracting high-resolution stylized sketches from a face image.
Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with 16 pairs of face images and their corresponding sketches.
arXiv Detail & Related papers (2024-03-17T16:25:25Z)
- CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z)
- SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation [111.2195741547517]
We present a method to generate controlled sketches using a text-conditioned diffusion model trained on pixel representations of images.
Our objective is to empower non-professional users to create sketches and, through a series of optimisation processes, transform a narrative into a storyboard.
arXiv Detail & Related papers (2023-08-27T19:44:44Z)
- Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings [99.9788496281408]
We study how sketches can be used as a weak label to detect salient objects present in an image.
To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo.
Experiments validate our hypothesis and show that our sketch-based saliency detection model achieves competitive performance compared to the state-of-the-art.
arXiv Detail & Related papers (2023-03-20T23:46:46Z)
- Text-Guided Scene Sketch-to-Photo Synthesis [5.431298869139175]
We propose a method for scene-level sketch-to-photo synthesis with text guidance.
To train our model, we use self-supervised learning from a set of photographs.
Experiments show that the proposed method translates original sketch images that are not extracted from color images into photos with compelling visual quality.
arXiv Detail & Related papers (2023-02-14T08:13:36Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Cross-Modal Hierarchical Modelling for Fine-Grained Sketch Based Image Retrieval [147.24102408745247]
We study a further trait of sketches that has been overlooked to date, that is, they are hierarchical in terms of the levels of detail.
In this paper, we design a novel network that is capable of cultivating sketch-specific hierarchies and exploiting them to match sketch with photo at corresponding hierarchical levels.
arXiv Detail & Related papers (2020-07-29T20:50:25Z)
- On Learning Semantic Representations for Million-Scale Free-Hand Sketches [146.52892067335128]
We study learning semantic representations for million-scale free-hand sketches.
We propose a dual-branch CNN-RNN network architecture to represent sketches.
We explore learning the sketch-oriented semantic representations in hashing retrieval and zero-shot recognition.
arXiv Detail & Related papers (2020-07-07T15:23:22Z)
- Deep Generation of Face Images from Sketches [36.146494762987146]
Deep image-to-image translation techniques allow fast generation of face images from freehand sketches.
Existing solutions tend to overfit to sketches, thus requiring professional sketches or even edge maps as input.
We propose to implicitly model the shape space of plausible face images and synthesize a face image in this space to approximate an input sketch.
Our method essentially uses input sketches as soft constraints and is thus able to produce high-quality face images even from rough and/or incomplete sketches.
arXiv Detail & Related papers (2020-06-01T16:20:23Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
Key for the success of our self-supervised learning paradigm lies with our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)