Unsupervised Scene Sketch to Photo Synthesis
- URL: http://arxiv.org/abs/2209.02834v1
- Date: Tue, 6 Sep 2022 22:25:06 GMT
- Title: Unsupervised Scene Sketch to Photo Synthesis
- Authors: Jiayun Wang, Sangryul Jeon, Stella X. Yu, Xi Zhang, Himanshu Arora, Yu Lou
- Abstract summary: We present a method for synthesizing realistic photos from scene sketches.
Our framework learns from readily available large-scale photo datasets in an unsupervised manner.
We also demonstrate that our framework supports controllable manipulation of photo synthesis by editing strokes of the corresponding sketches.
- Score: 40.044690369936184
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sketches are an intuitive and powerful form of visual expression, as they
are quickly executed freehand drawings. We present a method for synthesizing realistic
photos from scene sketches. Without the need for sketch and photo pairs, our
framework directly learns from readily available large-scale photo datasets in
an unsupervised manner. To this end, we introduce a standardization module that
provides pseudo sketch-photo pairs during training by converting photos and
sketches to a standardized domain, i.e., the edge map. The reduced domain gap
between sketch and photo also allows us to disentangle them into two
components: holistic scene structures and low-level visual styles such as color
and texture. Taking this advantage, we synthesize a photo-realistic image by
combining the structure of a sketch and the visual style of a reference photo.
Extensive experimental results on perceptual similarity metrics and human
perceptual studies show that the proposed method can generate realistic,
high-fidelity photos from scene sketches and outperforms state-of-the-art photo
synthesis baselines. We also demonstrate that our framework supports
controllable manipulation of photo synthesis by editing strokes of the
corresponding sketches, delivering more fine-grained details than previous
approaches that rely on region-level editing.
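To make the pipeline in the abstract concrete, below is a minimal, hedged sketch of how the standardization step and the structure/style disentanglement could be wired together in PyTorch. The Canny edge extraction stands in for the learned standardization module, and the module names, encoder/decoder architectures, and layer sizes are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of the described pipeline: photos and sketches are
# "standardized" to edge maps, the edge map is encoded into a holistic
# structure code, a reference photo is encoded into a low-level style code,
# and a decoder recombines the two into a photo. The Canny standardization,
# module names, and layer sizes are assumptions, not the authors' code.
import cv2
import numpy as np
import torch
import torch.nn as nn


def standardize_to_edge_map(image_bgr: np.ndarray) -> torch.Tensor:
    """Map a photo or a sketch into the shared edge-map domain.
    (The paper learns this standardization; Canny is a simple stand-in.)"""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0
    return torch.from_numpy(edges)[None, None]  # shape (1, 1, H, W)


class StructureEncoder(nn.Module):
    """Encodes holistic scene structure from an edge map (placeholder layers)."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 4, 2, 1), nn.ReLU(inplace=True),
        )

    def forward(self, edge_map):
        return self.net(edge_map)


class StyleEncoder(nn.Module):
    """Encodes low-level visual style (color, texture) from a reference photo."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # collapse to a global style vector
        )

    def forward(self, photo):
        return self.net(photo)


class Decoder(nn.Module):
    """Fuses a structure feature map with a style vector and synthesizes a photo."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(dim * 2, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(dim, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, structure, style):
        # Broadcast the style vector over spatial positions, then concatenate.
        style = style.expand(-1, -1, structure.shape[2], structure.shape[3])
        return self.net(torch.cat([structure, style], dim=1))


if __name__ == "__main__":
    sketch_image = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # placeholder sketch
    reference_photo = torch.rand(1, 3, 256, 256)                         # placeholder style reference
    edge = standardize_to_edge_map(sketch_image)
    structure = StructureEncoder()(edge)
    style = StyleEncoder()(reference_photo)
    synthesized = Decoder()(structure, style)
    print(synthesized.shape)  # torch.Size([1, 3, 256, 256])
```

In this toy setup the style code is a single global vector, so stroke-level edits to the sketch change only the structure branch, which is one simple way to picture the controllable editing behavior the paper reports.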
Related papers
- CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z)
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- Text-Guided Scene Sketch-to-Photo Synthesis [5.431298869139175]
We propose a method for scene-level sketch-to-photo synthesis with text guidance.
To train our model, we use self-supervised learning from a set of photographs.
Experiments show that the proposed method translates original sketch images that are not extracted from color images into photos with compelling visual quality.
arXiv Detail & Related papers (2023-02-14T08:13:36Z)
- Scene Designer: a Unified Model for Scene Search and Synthesis from Sketch [7.719705312172286]
Scene Designer is a novel method for searching and generating images using free-hand sketches of scene compositions.
Our core contribution is a single unified model to learn both a cross-modal search embedding for matching sketched compositions to images, and an object embedding for layout synthesis.
arXiv Detail & Related papers (2021-08-16T21:40:16Z)
- Adversarial Open Domain Adaption for Sketch-to-Photo Synthesis [42.83974176146334]
We explore the open-domain sketch-to-photo translation, which aims to synthesize a realistic photo from a freehand sketch with its class label.
It is challenging due to the lack of training supervision and the large geometry distortion between the freehand sketch and photo domains.
We propose a framework that jointly learns sketch-to-photo and photo-to-sketch generation.
arXiv Detail & Related papers (2021-04-12T17:58:46Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Cross-Modal Hierarchical Modelling for Fine-Grained Sketch Based Image Retrieval [147.24102408745247]
We study a further trait of sketches that has been overlooked to date, that is, they are hierarchical in terms of the levels of detail.
In this paper, we design a novel network that is capable of cultivating sketch-specific hierarchies and exploiting them to match sketch with photo at corresponding hierarchical levels.
arXiv Detail & Related papers (2020-07-29T20:50:25Z)
- Sketch-Guided Scenery Image Outpainting [83.6612152173028]
We propose an encoder-decoder based network to conduct sketch-guided outpainting.
First, we apply a holistic alignment module to make the synthesized part similar to the real one from a global view.
Second, we reversely produce sketches from the synthesized part and encourage them to be consistent with the ground-truth ones.
arXiv Detail & Related papers (2020-06-17T11:34:36Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)