Effect of Instance Normalization on Fine-Grained Control for
Sketch-Based Face Image Generation
- URL: http://arxiv.org/abs/2207.08072v1
- Date: Sun, 17 Jul 2022 04:05:17 GMT
- Title: Effect of Instance Normalization on Fine-Grained Control for
Sketch-Based Face Image Generation
- Authors: Zhihua Cheng, Xuejin Chen
- Abstract summary: We investigate the effect of instance normalization on generating photorealistic face images from hand-drawn sketches.
Based on the visual analysis, we modify the instance normalization layers in the baseline image translation model.
We construct a new set of hand-drawn sketches with 11 categories of specially designed changes and conduct extensive experimental analysis.
- Score: 17.31312721810532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sketching is an intuitive and effective way to create content. While
significant progress has been made in photorealistic image generation using
generative adversarial networks, it remains challenging to exert fine-grained
control over the synthesized content. The instance normalization layer, which is
widely adopted in existing image translation networks, washes away details in
the input sketch and leads to a loss of precise control over the desired shape of the
generated face images. In this paper, we comprehensively investigate the effect
of instance normalization on generating photorealistic face images from
hand-drawn sketches. We first introduce a visualization approach to analyze the
feature embedding for sketches with a group of specific changes. Based on the
visual analysis, we modify the instance normalization layers in the baseline
image translation model. We construct a new set of hand-drawn sketches with 11
categories of specially designed changes and conduct extensive experimental
analysis. The results and user studies demonstrate that our method markedly
improves the quality of the synthesized images and their conformance with user
intent.
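A minimal PyTorch sketch (illustrative only, not the authors' code) of the effect described above: instance normalization removes the per-channel, per-sample statistics of a feature map, so two sketch feature maps that carry the same strokes at different intensities become nearly indistinguishable after normalization.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # InstanceNorm2d normalizes each channel of each sample to zero mean
    # and unit variance over its spatial dimensions.
    norm = nn.InstanceNorm2d(num_features=1, affine=False)

    # Two feature maps carrying the same stroke pattern at different
    # intensities (a faint vs. a bold pencil line, an affine change).
    faint = torch.rand(1, 1, 64, 64)
    bold = 3.0 * faint + 0.5

    out_faint = norm(faint)
    out_bold = norm(bold)

    # The statistics that distinguished the inputs are gone after IN;
    # the outputs differ only by epsilon effects.
    print((out_faint - out_bold).abs().max().item())  # on the order of 1e-4

In an image translation network, those discarded statistics carry exactly the per-sketch stroke and shape cues a user would want to control, which is what motivates modifying the normalization layers.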
Related papers
- CustomSketching: Sketch Concept Extraction for Sketch-based Image
Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z)
- Reference-based Image Composition with Sketch via Structure-aware
Diffusion Model [38.1193912666578]
We introduce a multi-input-conditioned image composition model that incorporates a sketch as a novel modality, alongside a reference image.
Thanks to the edge-level controllability that sketches provide, our method enables a user to edit or complete an image sub-region.
Our framework fine-tunes a pre-trained diffusion model to complete missing regions using the reference image while maintaining sketch guidance.
arXiv Detail & Related papers (2023-03-31T06:12:58Z)
- Adaptively-Realistic Image Generation from Stroke and Sketch with
Diffusion Model [31.652827838300915]
We propose a unified framework supporting three-dimensional control over image synthesis from sketches and strokes based on diffusion models.
Our framework achieves state-of-the-art performance while providing flexibility in generating customized images with control over shape, color, and realism.
Our method unleashes applications such as editing on real images, generation with partial sketches and strokes, and multi-domain multi-modal synthesis.
arXiv Detail & Related papers (2022-08-26T13:59:26Z)
- Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space
Navigation [136.53288628437355]
Controllable semantic image editing enables a user to change entire image attributes with a few clicks.
Current approaches often suffer from entangled attribute edits, global changes to image identity, and diminished photo-realism.
We propose quantitative evaluation strategies for measuring controllable editing performance, unlike prior work that focuses primarily on qualitative evaluation (see the minimal latent-edit sketch after this list).
arXiv Detail & Related papers (2021-02-01T21:38:36Z)
- Unsupervised Contrastive Photo-to-Caricature Translation based on
Auto-distortion [49.93278173824292]
Photo-to-caricature translation aims to synthesize a caricature: a rendered image that exaggerates facial features through sketching, pencil strokes, or other artistic drawing styles.
Style rendering and geometry deformation are the most important aspects of the photo-to-caricature translation task.
We propose an unsupervised contrastive photo-to-caricature translation architecture.
arXiv Detail & Related papers (2020-11-10T08:14:36Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Learning to Caricature via Semantic Shape Transform [95.25116681761142]
We propose an algorithm based on a semantic shape transform to produce shape exaggerations.
We show that the proposed framework is able to render visually pleasing shape exaggerations while maintaining their facial structures.
arXiv Detail & Related papers (2020-08-12T03:41:49Z)
- Generating Person Images with Appearance-aware Pose Stylizer [66.44220388377596]
We present a novel end-to-end framework to generate realistic person images based on given person poses and appearances.
The core of our framework is a novel generator called Appearance-aware Pose Stylizer (APS) which generates human images by coupling the target pose with the conditioned person appearance progressively.
arXiv Detail & Related papers (2020-07-17T15:58:05Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with
Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
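The latent-edit sketch referenced in the Enjoy Your Editing entry above: a minimal, generator-agnostic illustration of latent-space navigation, in which an attribute edit is a step w' = w + alpha * d along a learned attribute direction d. The latent size, the direction vector, and the generator call are placeholders assumed for illustration, not details taken from the paper.

    import torch

    LATENT_DIM = 512  # assumed latent size (e.g., a StyleGAN-like w space)

    def edit(w: torch.Tensor, direction: torch.Tensor, alpha: float) -> torch.Tensor:
        """Step along an attribute direction; alpha sets the edit strength,
        and its sign flips the attribute (e.g., more/less smile)."""
        return w + alpha * direction

    # Placeholders: a random latent code and a random unit "direction".
    # In practice both come from a trained generator and a learned direction.
    w = torch.randn(1, LATENT_DIM)
    smile = torch.randn(1, LATENT_DIM)
    smile = smile / smile.norm()

    w_plus = edit(w, smile, alpha=2.0)    # strengthen the attribute
    w_minus = edit(w, smile, alpha=-2.0)  # weaken the attribute
    # image = generator(w_plus)  # decode with the (omitted) pretrained generator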