Deep Plastic Surgery: Robust and Controllable Image Editing with
Human-Drawn Sketches
- URL: http://arxiv.org/abs/2001.02890v1
- Date: Thu, 9 Jan 2020 08:57:50 GMT
- Title: Deep Plastic Surgery: Robust and Controllable Image Editing with
Human-Drawn Sketches
- Authors: Shuai Yang, Zhangyang Wang, Jiaying Liu, Zongming Guo
- Abstract summary: Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
- Score: 133.01690754567252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sketch-based image editing aims to synthesize and modify photos based on the
structural information provided by the human-drawn sketches. Since sketches are
difficult to collect, previous methods mainly use edge maps instead of sketches
to train models (referred to as edge-based models). However, sketches exhibit
large structural discrepancies from edge maps, causing edge-based models to fail.
Moreover, sketches often demonstrate huge variety among different users,
demanding even higher generalizability and robustness for the editing model to
work. In this paper, we propose Deep Plastic Surgery, a novel, robust and
controllable image editing framework that allows users to interactively edit
images using hand-drawn sketch inputs. We present a sketch refinement strategy,
inspired by the coarse-to-fine drawing process of artists, which we show
helps our model adapt well to casual and varied sketches without the need
for real sketch training data. Our model further provides a refinement level
control parameter that enables users to flexibly define how "reliable" the
input sketch should be considered for the final output, balancing between
sketch faithfulness and output verisimilitude (as the two goals might
contradict each other if the input sketch is drawn poorly). To achieve the multi-level
refinement, we introduce a style-based module for level conditioning, which
allows adaptive feature representations for different levels in a single
network. Extensive experimental results demonstrate the superiority of our
approach in improving the visual quality and user controllability of image
editing over the state-of-the-art methods.
Related papers
- It's All About Your Sketch: Democratising Sketch Control in Diffusion Models [114.73766136068357]
This paper unravels the potential of sketches for diffusion models, addressing the deceptive promise of direct sketch control in generative AI.
We importantly democratise the process, enabling amateur sketches to generate precise images, living up to the commitment of "what you sketch is what you get".
arXiv Detail & Related papers (2024-03-12T01:05:25Z)
- CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z)
- SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z)
- DiffSketching: Sketch Control Image Synthesis with Diffusion Models [10.172753521953386]
Deep learning models for sketch-to-image synthesis must cope with distorted input sketches that lack visual detail.
Our model matches sketches through cross-domain constraints and uses a classifier to guide the image synthesis more accurately.
Our model outperforms GAN-based methods in terms of generation quality and human evaluation, and does not rely on massive sketch-image datasets.
arXiv Detail & Related papers (2023-05-30T07:59:23Z)
- SketchFFusion: Sketch-guided image editing with diffusion model [25.63913085329606]
Sketch-guided image editing aims to achieve local fine-tuning of the image based on the sketch information provided by the user.
We propose a sketch generation scheme that can preserve the main contours of an image and closely adhere to the actual sketch style drawn by the user.
arXiv Detail & Related papers (2023-04-06T15:54:18Z)
- Reference-based Image Composition with Sketch via Structure-aware Diffusion Model [38.1193912666578]
We introduce a multi-input-conditioned image composition model that incorporates a sketch as a novel modality, alongside a reference image.
Thanks to the edge-level controllability using sketches, our method enables a user to edit or complete an image sub-part.
Our framework fine-tunes a pre-trained diffusion model to complete missing regions using the reference image while maintaining sketch guidance.
arXiv Detail & Related papers (2023-03-31T06:12:58Z)
- Unsupervised Scene Sketch to Photo Synthesis [40.044690369936184]
We present a method for synthesizing realistic photos from scene sketches.
Our framework learns from readily available large-scale photo datasets in an unsupervised manner.
We also demonstrate that our framework facilitates a controllable manipulation of photo synthesis by editing strokes of corresponding sketches.
arXiv Detail & Related papers (2022-09-06T22:25:06Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, which makes it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z)
- Sketch-Guided Scenery Image Outpainting [83.6612152173028]
We propose an encoder-decoder based network to conduct sketch-guided outpainting.
First, we apply a holistic alignment module to make the synthesized part similar to the real one from a global view.
Second, we reversely produce sketches from the synthesized part and encourage them to be consistent with the ground-truth ones.
arXiv Detail & Related papers (2020-06-17T11:34:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.