Sketch-Guided Scenery Image Outpainting
- URL: http://arxiv.org/abs/2006.09788v2
- Date: Tue, 26 Jan 2021 12:19:13 GMT
- Title: Sketch-Guided Scenery Image Outpainting
- Authors: Yaxiong Wang, Yunchao Wei, Xueming Qian, Li Zhu, Yi Yang
- Abstract summary: We propose an encoder-decoder based network to conduct sketch-guided outpainting.
First, we apply a holistic alignment module to make the synthesized part similar to the real one from a global view.
Second, we reversely produce sketches from the synthesized part and encourage them to be consistent with the ground-truth ones.
- Score: 83.6612152173028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The outpainting results produced by existing approaches are often too random to meet users' requirements. In this work, we take image outpainting one step forward by allowing users to harvest personal, customized outpainting results using sketches as guidance. To this end, we propose an encoder-decoder based network to conduct sketch-guided outpainting, where two alignment modules are adopted to constrain the generated content to be realistic and consistent with the provided sketches. First, we apply a holistic alignment module to make the synthesized part similar to the real one from a global view. Second, we reversely produce sketches from the synthesized part and encourage them to be consistent with the ground-truth ones using a sketch alignment module. In this way, the learned generator is pushed to pay more attention to fine details and to be sensitive to the guiding sketches. To our knowledge, this work is the first attempt to explore the challenging yet meaningful task of conditional scenery image outpainting. We conduct extensive experiments on two collected benchmarks to qualitatively and quantitatively validate the effectiveness of our approach compared with other state-of-the-art generative models.
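Read as a training objective, the two alignment modules amount to two loss terms on top of a plain encoder-decoder generator. The PyTorch sketch below is a minimal, hypothetical rendering of that objective, not the paper's implementation: the layer shapes, the discriminator-based holistic term, the sketch_extractor (e.g., an off-the-shelf edge detector standing in for the reverse sketch production), and the unit loss weights are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchGuidedOutpainter(nn.Module):
    """Toy encoder-decoder generator: takes the known image region plus a
    guiding sketch channel and returns a full RGB canvas."""
    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(   # 4-channel input: RGB + 1 sketch channel
            nn.Conv2d(4, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, known, sketch):
        return self.decoder(self.encoder(torch.cat([known, sketch], dim=1)))

def generator_loss(gen, disc, sketch_extractor, known, sketch, real, mask):
    """mask is 1 on the region to be synthesized, 0 on the known part."""
    fake = gen(known, sketch)
    # Holistic alignment: the whole canvas should look real from the global
    # view (rendered here as a non-saturating adversarial loss).
    holistic = F.softplus(-disc(fake)).mean()
    # Sketch alignment: reversely produce sketches from the synthesized part
    # and pull them toward the guiding (ground-truth) ones.
    sketch_align = F.l1_loss(sketch_extractor(fake) * mask, sketch * mask)
    # Pixel reconstruction on the outpainted region as a stabilizer (assumed).
    recon = F.l1_loss(fake * mask, real * mask)
    return holistic + sketch_align + recon  # unit weights are a placeholder
```

Under this reading, the sketch alignment term is what forces the generator to track the user's strokes, while the holistic term keeps the full canvas globally plausible.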
Related papers
- Sketch-guided Image Inpainting with Partial Discrete Diffusion Process [5.005162730122933]
We introduce a novel partial discrete diffusion process (PDDP) for sketch-guided inpainting.
PDDP corrupts the masked regions of the image and reconstructs these masked regions conditioned on hand-drawn sketches.
A transformer module models the reverse diffusion process from two inputs: the image containing the masked region to be inpainted and the query sketch (a toy rendering of one such corruption/denoising step appears after this list).
arXiv Detail & Related papers (2024-04-18T07:07:38Z)
- Towards Interactive Image Inpainting via Sketch Refinement [13.34066589008464]
We propose a two-stage image inpainting method termed SketchRefiner.
In the first stage, we propose using a cross-correlation loss function to robustly calibrate and refine the user-provided sketches.
In the second stage, we learn to extract informative features from the abstracted sketches in the feature space and modulate the inpainting process.
arXiv Detail & Related papers (2023-06-01T07:15:54Z)
- Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings [99.9788496281408]
We study how sketches can be used as a weak label to detect salient objects present in an image.
To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo.
Experiments support our hypothesis and show that our sketch-based saliency detection model achieves competitive performance compared to the state-of-the-art.
arXiv Detail & Related papers (2023-03-20T23:46:46Z)
- Unsupervised Scene Sketch to Photo Synthesis [40.044690369936184]
We present a method for synthesizing realistic photos from scene sketches.
Our framework learns from readily available large-scale photo datasets in an unsupervised manner.
We also demonstrate that our framework facilitates controllable manipulation of photo synthesis by editing strokes of the corresponding sketches.
arXiv Detail & Related papers (2022-09-06T22:25:06Z)
- Self-Supervised Sketch-to-Image Synthesis [21.40315235087551]
We study the exemplar-based sketch-to-image (s2i) synthesis task in a self-supervised learning manner.
We first propose an unsupervised method to efficiently synthesize line-sketches for general RGB-only datasets.
We then present a self-supervised Auto-Encoder (AE) to decouple content/style features from sketches and RGB images, and synthesize images that are both content-faithful to the sketches and style-consistent with the RGB images.
arXiv Detail & Related papers (2020-12-16T22:14:06Z)
- On Learning Semantic Representations for Million-Scale Free-Hand Sketches [146.52892067335128]
We study learning semantic representations for million-scale free-hand sketches.
We propose a dual-branch CNN-RNN network architecture to represent sketches.
We explore learning the sketch-oriented semantic representations in hashing retrieval and zero-shot recognition.
arXiv Detail & Related papers (2020-07-07T15:23:22Z)
- SketchyCOCO: Image Generation from Freehand Scene Sketches [71.85577739612579]
We introduce the first method for automatic image generation from scene-level freehand sketches.
The key contribution is an attribute vector bridged Generative Adversarial Network called EdgeGAN.
We have built a large-scale composite dataset called SketchyCOCO to support and evaluate the solution.
arXiv Detail & Related papers (2020-03-05T14:54:10Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
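As referenced in the partial discrete diffusion entry above, the following PyTorch sketch is a minimal, hypothetical rendering of one corruption/denoising step of such a process, not the PDDP authors' model. Everything here is an assumption for illustration: the VQ-style pre-tokenization, the absorbing MASK_TOKEN corruption, the binary per-position sketch conditioning, and all sizes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy rendering of a *partial* discrete diffusion step, assuming the image is
# pre-tokenized (e.g., by a VQ codebook) into V-way discrete tokens. Only
# tokens inside the inpainting region are corrupted; the known part is fixed.

V = 512          # codebook size (assumed)
MASK_TOKEN = V   # extra absorbing token id used for corruption (assumed)

def corrupt(tokens, region_mask, t, T):
    """Forward process: replace each token inside the region with MASK_TOKEN
    independently with probability t/T; tokens outside the region stay put."""
    hit = (torch.rand(tokens.shape, device=tokens.device) < t / T) & region_mask
    return torch.where(hit, torch.full_like(tokens, MASK_TOKEN), tokens)

class Denoiser(nn.Module):
    """Transformer that predicts the original tokens, conditioned on a
    (hypothetical) binary sketch value at every token position."""
    def __init__(self, d=256, n_layers=4):
        super().__init__()
        self.tok_emb = nn.Embedding(V + 1, d)   # +1 for MASK_TOKEN
        self.sketch_emb = nn.Embedding(2, d)    # binary sketch conditioning
        layer = nn.TransformerEncoderLayer(d, nhead=8, batch_first=True)
        self.body = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d, V)

    def forward(self, tokens, sketch):
        h = self.tok_emb(tokens) + self.sketch_emb(sketch)
        return self.head(self.body(h))          # (B, L, V) logits

def diffusion_loss(model, tokens, sketch, region_mask, T=100):
    """One training step of the reverse process: corrupt the region at a
    random timestep, then ask the model to recover the original tokens."""
    t = torch.randint(1, T + 1, (1,)).item()
    noisy = corrupt(tokens, region_mask, t, T)
    logits = model(noisy, sketch)
    # Supervise only the region being inpainted.
    return F.cross_entropy(logits[region_mask], tokens[region_mask])
```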