Adversarial Open Domain Adaption Framework (AODA): Sketch-to-Photo Synthesis
- URL: http://arxiv.org/abs/2108.04351v1
- Date: Wed, 28 Jul 2021 18:21:20 GMT
- Title: Adversarial Open Domain Adaption Framework (AODA): Sketch-to-Photo Synthesis
- Authors: Amey Thakur and Mega Satish
- Abstract summary: Unsupervised open domain adaption for generating realistic photos from a hand-drawn sketch is challenging.
We present an approach that learns both sketch-to-photo and photo-to-sketch generation to synthesise the missing freehand drawings from pictures.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to demonstrate the efficiency of the Adversarial Open
Domain Adaption framework for sketch-to-photo synthesis. Unsupervised open domain
adaption for generating realistic photos from a hand-drawn sketch is challenging
because no sketches of the target class are available as training data. The
absence of learning supervision and the large domain gap between the freehand
drawing and photo domains make the task hard. We present an approach that learns
both sketch-to-photo and photo-to-sketch generation to synthesise the missing
freehand drawings from pictures. Due to the domain gap between synthesised
sketches and genuine ones, a generator trained on these fake drawings may produce
unsatisfactory results when dealing with drawings of the missing classes. To
address this problem, we offer a simple but effective open-domain sampling and
optimization strategy that tricks the generator into treating the fake drawings
as genuine. Our approach generalises the learnt sketch-to-photo and
photo-to-sketch mappings from in-domain inputs to open-domain categories. We
compared our technique to the most recent competing methods on the Scribble and
SketchyCOCO datasets. For many types of open-domain drawings, our model produces
impressive results, synthesising accurate colour and texture while retaining the
structural layout.
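
The abstract above describes a cycle-style scheme that jointly learns sketch-to-photo and photo-to-sketch generation and, for classes with no real sketches, feeds synthesised sketches to the generator as if they were genuine. The snippet below is a minimal, hypothetical PyTorch sketch of one such generator update; the tiny placeholder networks, loss terms, label handling, and the exact form of the open-domain branch are illustrative assumptions, not the authors' implementation, and the discriminator update is omitted.

```python
# A minimal, hypothetical PyTorch sketch of an AODA-style generator update.
# Everything below (placeholder networks, loss terms, label handling, the exact
# form of the open-domain branch) is an illustrative assumption based on the
# abstract, not the authors' released implementation. The discriminator update
# is omitted for brevity.

import torch
import torch.nn as nn

N_CLASSES = 10  # assumed number of object classes

def tiny_generator():
    # Placeholder generator: image + broadcast one-hot class map -> image.
    return nn.Sequential(
        nn.Conv2d(3 + N_CLASSES, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
    )

def tiny_discriminator():
    # Placeholder PatchGAN-style photo discriminator.
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 1, 4, stride=2, padding=1),
    )

def with_label(img, labels):
    # Concatenate a one-hot class map to the image channels.
    b, _, h, w = img.shape
    onehot = torch.eye(N_CLASSES, device=img.device)[labels]
    return torch.cat(
        [img, onehot.view(b, N_CLASSES, 1, 1).expand(b, N_CLASSES, h, w)], dim=1)

G_s2p, G_p2s = tiny_generator(), tiny_generator()  # sketch->photo, photo->sketch
D_photo = tiny_discriminator()
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_G = torch.optim.Adam(
    list(G_s2p.parameters()) + list(G_p2s.parameters()), lr=2e-4)

def generator_step(real_sketch, sketch_label, real_photo, photo_label):
    """One joint generator update covering in-domain and open-domain branches."""
    opt_G.zero_grad()

    # In-domain branch: real sketch -> fake photo (adversarial loss), then back
    # to the sketch domain for a cycle-style reconstruction.
    fake_photo = G_s2p(with_label(real_sketch, sketch_label))
    pred = D_photo(fake_photo)
    loss = adv(pred, torch.ones_like(pred))
    loss = loss + l1(G_p2s(with_label(fake_photo, sketch_label)), real_sketch)

    # Open-domain branch (assumed form of the sampling trick): for a class that
    # has photos but no real sketches, synthesise a sketch from the photo and
    # feed it to the sketch->photo generator as if it were a genuine drawing,
    # so the learnt mapping also covers the missing classes.
    fake_sketch = G_p2s(with_label(real_photo, photo_label)).detach()
    open_photo = G_s2p(with_label(fake_sketch, photo_label))
    pred_open = D_photo(open_photo)
    loss = loss + adv(pred_open, torch.ones_like(pred_open))
    loss = loss + l1(open_photo, real_photo)  # photo -> sketch -> photo consistency

    loss.backward()
    opt_G.step()
    return loss.item()

# Toy usage with random tensors standing in for real data (batch of 2, 32x32);
# labels 7 and 8 are assumed open-domain classes with photos only.
sketches = torch.rand(2, 3, 32, 32) * 2 - 1
photos = torch.rand(2, 3, 32, 32) * 2 - 1
print(generator_step(sketches, torch.tensor([0, 1]), photos, torch.tensor([7, 8])))
```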
Related papers
- DiffSketching: Sketch Control Image Synthesis with Diffusion Models [10.172753521953386]
Deep learning models for sketch-to-image synthesis need to overcome distorted input sketches that lack visual details.
Our model matches sketches through cross-domain constraints and uses a classifier to guide the image synthesis more accurately.
Our model can beat GAN-based methods in terms of generation quality and human evaluation, and does not rely on massive sketch-image datasets.
arXiv Detail & Related papers (2023-05-30T07:59:23Z)
- Adapt and Align to Improve Zero-Shot Sketch-Based Image Retrieval [85.39613457282107]
The cross-domain nature of sketch-based image retrieval makes it challenging.
We present an effective "Adapt and Align" approach to address the key challenges.
Inspired by recent advances in image-text foundation models (e.g., CLIP) on zero-shot scenarios, we explicitly align the learned image embedding with a more semantic text embedding to achieve the desired knowledge transfer from seen to unseen classes (a minimal illustration of this alignment idea is sketched after this list).
arXiv Detail & Related papers (2023-05-09T03:10:15Z)
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- Unsupervised Scene Sketch to Photo Synthesis [40.044690369936184]
We present a method for synthesizing realistic photos from scene sketches.
Our framework learns from readily available large-scale photo datasets in an unsupervised manner.
We also demonstrate that our framework facilitates a controllable manipulation of photo synthesis by editing strokes of corresponding sketches.
arXiv Detail & Related papers (2022-09-06T22:25:06Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- Adversarial Open Domain Adaption for Sketch-to-Photo Synthesis [42.83974176146334]
We explore the open-domain sketch-to-photo translation, which aims to synthesize a realistic photo from a freehand sketch with its class label.
It is challenging due to the lack of training supervision and the large geometry distortion between the freehand sketch and photo domains.
We propose a framework that jointly learns sketch-to-photo and photo-to-sketch generation.
arXiv Detail & Related papers (2021-04-12T17:58:46Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
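
The "Adapt and Align" entry above refers to aligning learned image embeddings with more semantic text embeddings (in the spirit of CLIP) so that knowledge transfers from seen to unseen classes. Below is a minimal, hypothetical illustration of one common way such an alignment objective can be written; the random tensors stand in for real CLIP features, and nothing here is taken from that paper's code.

```python
# Hypothetical illustration of an image-text embedding alignment loss, in the
# spirit of the "Adapt and Align" summary above. The embeddings are random
# placeholders, not outputs of an actual CLIP model.

import torch
import torch.nn.functional as F

def alignment_loss(image_emb, text_emb, temperature=0.07):
    """Contrastive-style loss pulling each image embedding towards the text
    embedding of its own class and away from the other classes in the batch."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(image_emb.size(0))        # i-th image matches i-th text
    return F.cross_entropy(logits, targets)

# Toy usage: 4 sketch/photo embeddings vs. 4 class-name prompt embeddings (512-d).
img = torch.randn(4, 512)
txt = torch.randn(4, 512)
print(alignment_loss(img, txt).item())
```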