SketchEdit: Mask-Free Local Image Manipulation with Partial Sketches
- URL: http://arxiv.org/abs/2111.15078v1
- Date: Tue, 30 Nov 2021 02:42:31 GMT
- Title: SketchEdit: Mask-Free Local Image Manipulation with Partial Sketches
- Authors: Yu Zeng, Zhe Lin, Vishal M. Patel
- Abstract summary: We propose a new paradigm of sketch-based image manipulation: mask-free local image manipulation.
Our model automatically predicts the target modification region and encodes it into a structure-agnostic style vector.
A generator then synthesizes the new image content based on the style vector and sketch.
- Score: 95.45728042499836
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sketch-based image manipulation is an interactive image editing task to
modify an image based on input sketches from users. Existing methods typically
formulate this task as a conditional inpainting problem, which requires users
to draw an extra mask indicating the region to modify in addition to sketches.
The masked regions are regarded as holes and filled by an inpainting model
conditioned on the sketch. With this formulation, paired training data can be
easily obtained by randomly creating masks and extracting edges or contours.
Although this setup simplifies data preparation and model design, it
complicates user interaction and discards useful information in masked regions.
To this end, we investigate a new paradigm of sketch-based image manipulation:
mask-free local image manipulation, which only requires sketch inputs from
users and utilizes the entire original image. Given an image and sketch, our
model automatically predicts the target modification region and encodes it into
a structure-agnostic style vector. A generator then synthesizes the new image
content based on the style vector and sketch. The manipulated image is finally
produced by blending the generator output into the modification region of the
original image. Our model can be trained in a self-supervised fashion by
learning the reconstruction of an image region from the style vector and
sketch. The proposed method offers simpler and more intuitive user workflows
for sketch-based image manipulation and provides better results than previous
approaches. More results, code, and an interactive demo will be available at
\url{https://zengxianyu.github.io/sketchedit}.
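The abstract describes a three-step pipeline: predict the modification region, encode it into a structure-agnostic style vector, then synthesize new content and blend it back into the original image. Below is a minimal PyTorch sketch of that data flow; every module architecture, channel size, and name here is an illustrative assumption, not the paper's actual network.

```python
import torch
import torch.nn as nn

class MaskFreeEditor(nn.Module):
    """Conceptual sketch of the mask-free editing pipeline described in the
    abstract. Every sub-network here is a stand-in assumption."""

    def __init__(self, style_dim: int = 256):
        super().__init__()
        # Predicts a soft modification region from the image + sketch.
        self.region_predictor = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
        # Encodes the predicted region into a structure-agnostic style vector
        # (global pooling discards spatial structure, keeping only style).
        self.style_encoder = nn.Sequential(
            nn.Conv2d(3, style_dim, 3, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Synthesizes new content from the sketch and the style vector.
        self.generator = nn.Sequential(
            nn.Conv2d(1 + style_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, image: torch.Tensor, sketch: torch.Tensor) -> torch.Tensor:
        x = torch.cat([image, sketch], dim=1)            # (B, 4, H, W)
        mask = self.region_predictor(x)                  # soft region, (B, 1, H, W)
        style = self.style_encoder(image * mask)         # (B, style_dim)
        style_map = style[:, :, None, None].expand(-1, -1, *sketch.shape[2:])
        generated = self.generator(torch.cat([sketch, style_map], dim=1))
        # Blend the synthesized content back into the predicted region only.
        return mask * generated + (1 - mask) * image
```

Training can follow the self-supervised recipe from the abstract: take a real image, extract edges from a region to serve as the sketch, and supervise the blended output with a reconstruction loss against the original image.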
Related papers
- Sketch-guided Image Inpainting with Partial Discrete Diffusion Process [5.005162730122933]
We introduce a novel partial discrete diffusion process (PDDP) for sketch-guided inpainting.
PDDP corrupts the masked regions of the image and reconstructs these masked regions conditioned on hand-drawn sketches.
To model the reverse diffusion process, the proposed transformer module accepts two inputs: the image containing the masked region to be inpainted and the query sketch.
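As a rough illustration of the partial corruption idea, the sketch below implements a toy forward step of a discrete diffusion process that masks only tokens inside the user-specified region; the schedule, tokenization, and names are assumptions rather than the paper's actual PDDP formulation.

```python
import torch

def pddp_forward_step(tokens: torch.Tensor, region_mask: torch.Tensor,
                      t: int, num_steps: int, mask_token_id: int) -> torch.Tensor:
    """Hypothetical forward (corruption) step of a partial discrete
    diffusion process over a token grid of shape (B, N). Only tokens
    where region_mask == 1 may be replaced by the special [MASK] token;
    the rest of the image is left intact."""
    corrupt_prob = t / num_steps                       # toy linear schedule
    noise = torch.rand(tokens.shape, device=tokens.device)
    corrupt = (noise < corrupt_prob) & region_mask.bool()
    return torch.where(corrupt, torch.full_like(tokens, mask_token_id), tokens)
```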
arXiv Detail & Related papers (2024-04-18T07:07:38Z)
- Block and Detail: Scaffolding Sketch-to-Image Generation [65.56590359051634]
We introduce a novel sketch-to-image tool that aligns with the iterative refinement process of artists.
Our tool lets users sketch blocking strokes to coarsely represent the placement and form of objects and detail strokes to refine their shape and silhouettes.
We develop a two-pass algorithm for generating high-fidelity images from such sketches at any point in the iterative process.
arXiv Detail & Related papers (2024-02-28T07:09:31Z)
- Towards Interactive Image Inpainting via Sketch Refinement [13.34066589008464]
We propose a two-stage image inpainting method termed SketchRefiner.
In the first stage, we propose using a cross-correlation loss function to robustly calibrate and refine the user-provided sketches.
In the second stage, we learn to extract informative features from the abstracted sketches in the feature space and modulate the inpainting process.
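A normalized cross-correlation loss for aligning a predicted sketch with a reference can be written as below; this is a generic sketch of such a loss, not necessarily SketchRefiner's exact formulation.

```python
import torch

def ncc_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Normalized cross-correlation loss between two sketch maps of shape
    (B, 1, H, W); minimizing it maximizes their spatial correlation."""
    pred = pred - pred.mean(dim=(2, 3), keepdim=True)
    target = target - target.mean(dim=(2, 3), keepdim=True)
    num = (pred * target).sum(dim=(2, 3))
    den = pred.pow(2).sum(dim=(2, 3)).sqrt() * target.pow(2).sum(dim=(2, 3)).sqrt()
    ncc = num / (den + eps)            # in [-1, 1]; 1 means identical up to scale
    return (1.0 - ncc).mean()
```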
arXiv Detail & Related papers (2023-06-01T07:15:54Z)
- SketchFFusion: Sketch-guided image editing with diffusion model [25.63913085329606]
Sketch-guided image editing aims to achieve local fine-tuning of the image based on the sketch information provided by the user.
We propose a sketch generation scheme that can preserve the main contours of an image and closely adhere to the actual sketch style drawn by the user.
arXiv Detail & Related papers (2023-04-06T15:54:18Z)
- MaskSketch: Unpaired Structure-guided Masked Image Generation [56.88038469743742]
MaskSketch is an image generation method that spatially conditions the generated result, using a guiding sketch as an extra signal during sampling.
We show that intermediate self-attention maps of a masked generative transformer encode important structural information of the input image.
Our results show that MaskSketch achieves high image realism and fidelity to the guiding structure.
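Since the claim is that self-attention maps encode structure, a sampler can score candidates by how closely their attention maps match those of the guiding sketch. A minimal sketch of such a structure distance, with tensor shapes assumed for illustration:

```python
import torch

def structure_distance(attn_guide: torch.Tensor, attn_sample: torch.Tensor) -> torch.Tensor:
    """Per-sample L1 distance between self-attention maps of the guiding
    sketch and a candidate sample, each of shape (B, heads, N, N). Lower
    values indicate a closer structural match."""
    return (attn_guide - attn_sample).abs().mean(dim=(1, 2, 3))
```

In a guided sampler one would generate several candidates per step and keep those with the lowest distance; the details of MaskSketch's actual sampling procedure may differ.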
arXiv Detail & Related papers (2023-02-10T20:27:02Z)
- RePaint: Inpainting using Denoising Diffusion Probabilistic Models [161.74792336127345]
Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask.
We propose RePaint: a Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks.
We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
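The core idea, simplified: at each reverse step the known pixels are obtained by noising the original image with the forward process, the hole pixels by the learned reverse step, and the two are merged through the mask. A minimal sketch follows; RePaint's resampling/jump schedules are omitted, and denoise_step is an assumed callable for one learned reverse-diffusion step.

```python
import torch

def repaint_reverse_step(x_t, x0, mask, t, alphas_cumprod, denoise_step):
    """One simplified RePaint-style reverse step. mask == 1 marks the hole
    to be inpainted; alphas_cumprod holds the cumulative noise schedule."""
    # Known region: sample the original image noised to level t-1 (forward process).
    a_bar = alphas_cumprod[t - 1]
    x_known = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * torch.randn_like(x0)
    # Unknown region: one learned reverse step x_t -> x_{t-1}.
    x_unknown = denoise_step(x_t, t)
    return mask * x_unknown + (1.0 - mask) * x_known
```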
arXiv Detail & Related papers (2022-01-24T18:40:15Z)
- Semantic Image Manipulation Using Scene Graphs [105.03614132953285]
We introduce a semantic scene graph network that does not require direct supervision for constellation changes or image edits.
This makes it possible to train the system on existing real-world datasets with no additional annotation effort.
arXiv Detail & Related papers (2020-04-07T20:02:49Z)
- Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis [12.449076001538552]
This paper focuses on a recently emerged task, layout-to-image: learning generative models capable of synthesizing photo-realistic images from a spatial layout.
Style control at the image level works as in vanilla GANs, while style control at the object-mask level is realized by a novel feature normalization scheme.
In experiments, the proposed method is evaluated on the COCO-Stuff and Visual Genome datasets, achieving state-of-the-art performance.
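To make the object-mask-level style control concrete, below is an illustrative mask-wise feature normalization layer: features are instance-normalized, then re-scaled and shifted per object from that object's style code. This is an assumption-laden sketch in the spirit of the paper's scheme, not its actual normalization.

```python
import torch
import torch.nn as nn

class MaskedStyleNorm(nn.Module):
    """Illustrative object-level style normalization: each object's mask
    selects where its style code modulates the normalized features."""

    def __init__(self, channels: int, style_dim: int):
        super().__init__()
        self.to_gamma = nn.Linear(style_dim, channels)  # per-object scale
        self.to_beta = nn.Linear(style_dim, channels)   # per-object shift

    def forward(self, feat, masks, styles, eps: float = 1e-5):
        # feat: (B, C, H, W); masks: (B, K, H, W); styles: (B, K, style_dim)
        norm = (feat - feat.mean(dim=(2, 3), keepdim=True)) / (
            feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()
        out = feat.clone()
        for k in range(masks.shape[1]):
            m = masks[:, k:k + 1].bool()                        # (B, 1, H, W)
            gamma = self.to_gamma(styles[:, k])[:, :, None, None]
            beta = self.to_beta(styles[:, k])[:, :, None, None]
            out = torch.where(m, gamma * norm + beta, out)
        return out
```

Because each object carries its own latent code, restyling one object amounts to resampling only that object's code while leaving the layout and the other objects untouched.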
arXiv Detail & Related papers (2020-03-25T18:16:05Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.