Adversarial Open Domain Adaption for Sketch-to-Photo Synthesis
- URL: http://arxiv.org/abs/2104.05703v1
- Date: Mon, 12 Apr 2021 17:58:46 GMT
- Title: Adversarial Open Domain Adaption for Sketch-to-Photo Synthesis
- Authors: Xiaoyu Xiang, Ding Liu, Xiao Yang, Yiheng Zhu, Xiaohui Shen, Jan P. Allebach
- Abstract summary: We explore open-domain sketch-to-photo translation, which aims to synthesize a realistic photo from a freehand sketch given its class label.
It is challenging due to the lack of training supervision and the large geometry distortion between the freehand sketch and photo domains.
We propose a framework that jointly learns sketch-to-photo and photo-to-sketch generation.
- Score: 42.83974176146334
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we explore open-domain sketch-to-photo translation, which
aims to synthesize a realistic photo from a freehand sketch with its class
label, even if the sketches of that class are missing in the training data. It
is challenging due to the lack of training supervision and the large geometry
distortion between the freehand sketch and photo domains. To synthesize the
absent freehand sketches from photos, we propose a framework that jointly
learns sketch-to-photo and photo-to-sketch generation. However, the generator
trained from fake sketches might lead to unsatisfying results when dealing with
sketches of missing classes, due to the domain gap between synthesized sketches
and real ones. To alleviate this issue, we further propose a simple yet
effective open-domain sampling and optimization strategy to "fool" the
generator into treating fake sketches as real ones. Our method takes advantage
of the learned sketch-to-photo and photo-to-sketch mapping of in-domain data
and generalizes them to the open-domain classes. We validate our method on the
Scribble and SketchyCOCO datasets. Compared with recent competing methods,
our approach shows impressive results in synthesizing realistic color and texture
while maintaining the geometric composition for various categories of open-domain
sketches.
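The open-domain sampling idea described in the abstract can be illustrated with a minimal Python sketch. All names below (`photo_to_sketch`, `sample_training_batch`, the dataset fields) are illustrative assumptions, not the authors' implementation, which trains adversarial sketch-to-photo and photo-to-sketch generators jointly; the point shown is only the substitution step in which synthesized sketches for missing classes are treated as real ones:

```python
import random

# Hypothetical stand-in: in the real framework this would be the learned
# photo-to-sketch generator network.
def photo_to_sketch(photo):
    # Placeholder: "synthesize" a fake sketch from a photo.
    return {"kind": "fake_sketch", "source": photo}

def sample_training_batch(dataset, open_domain_classes, seed=0):
    """Build one training batch using the open-domain sampling strategy:
    for classes whose real freehand sketches are missing, substitute
    sketches synthesized from photos, but flag them as "real" so the
    sketch-to-photo generator is fooled into treating them like real ones."""
    batch = []
    for item in dataset:
        if item["class"] in open_domain_classes:
            # No real sketch exists for this class: synthesize one from
            # the photo and treat it as real during training.
            sketch = photo_to_sketch(item["photo"])
        else:
            # In-domain class: use the real paired freehand sketch.
            sketch = item["sketch"]
        batch.append({"sketch": sketch, "photo": item["photo"],
                      "class": item["class"], "treat_as_real": True})
    random.Random(seed).shuffle(batch)
    return batch

dataset = [
    {"class": "cat", "photo": "cat.jpg", "sketch": "cat_sketch.png"},
    {"class": "giraffe", "photo": "giraffe.jpg", "sketch": None},  # missing class
]
batch = sample_training_batch(dataset, open_domain_classes={"giraffe"})
```

In the actual method this substitution happens inside an adversarial training loop, so the generator's in-domain sketch-to-photo mapping transfers to the open-domain classes.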
Related papers
- Adapt and Align to Improve Zero-Shot Sketch-Based Image Retrieval [85.39613457282107]
Cross-domain nature of sketch-based image retrieval is challenging.
We present an effective "Adapt and Align" approach to address the key challenges.
Inspired by recent advances in image-text foundation models (e.g., CLIP) on zero-shot scenarios, we explicitly align the learned image embedding with a more semantic text embedding to achieve the desired knowledge transfer from seen to unseen classes.
arXiv Detail & Related papers (2023-05-09T03:10:15Z) - Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z) - Text-Guided Scene Sketch-to-Photo Synthesis [5.431298869139175]
We propose a method for scene-level sketch-to-photo synthesis with text guidance.
To train our model, we use self-supervised learning from a set of photographs.
Experiments show that the proposed method translates original sketch images that are not extracted from color images into photos with compelling visual quality.
arXiv Detail & Related papers (2023-02-14T08:13:36Z) - Unsupervised Scene Sketch to Photo Synthesis [40.044690369936184]
We present a method for synthesizing realistic photos from scene sketches.
Our framework learns from readily available large-scale photo datasets in an unsupervised manner.
We also demonstrate that our framework facilitates a controllable manipulation of photo synthesis by editing strokes of corresponding sketches.
arXiv Detail & Related papers (2022-09-06T22:25:06Z) - I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner which is easy to be implemented in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z) - Multi-granularity Association Learning Framework for on-the-fly Fine-Grained Sketch-based Image Retrieval [7.797006835701767]
Fine-grained sketch-based image retrieval (FG-SBIR) addresses the problem of retrieving a particular photo given a query sketch.
In this study, we aim to retrieve the target photo with the fewest strokes possible (an incomplete sketch).
We propose a multi-granularity association learning framework that further optimizes the embedding space of all incomplete sketches.
arXiv Detail & Related papers (2022-01-13T14:38:50Z) - Adversarial Open Domain Adaption Framework (AODA): Sketch-to-Photo Synthesis [0.0]
Unsupervised open-domain generation of realistic photos from hand-drawn sketches is challenging.
We present an approach that learns both sketch-to-photo and photo-to-sketch generation to synthesize the missing freehand drawings from pictures.
arXiv Detail & Related papers (2021-07-28T18:21:20Z) - DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z) - Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.