SketchFFusion: Sketch-guided image editing with diffusion model
- URL: http://arxiv.org/abs/2304.03174v3
- Date: Tue, 14 Nov 2023 03:53:24 GMT
- Title: SketchFFusion: Sketch-guided image editing with diffusion model
- Authors: Weihang Mao, Bo Han, Zihao Wang
- Abstract summary: Sketch-guided image editing aims to achieve local fine-tuning of the image based on the sketch information provided by the user.
We propose a sketch generation scheme that can preserve the main contours of an image and closely adhere to the actual sketch style drawn by the user.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sketch-guided image editing aims to achieve local fine-tuning of the image
based on the sketch information provided by the user, while maintaining the
original status of the unedited areas. Due to the high cost of acquiring human
sketches, previous works mostly relied on edge maps as a substitute for
sketches, but sketches possess richer structural information. In this paper,
we propose a sketch generation scheme that can preserve the main contours of an
image and closely adhere to the actual sketch style drawn by the user.
Simultaneously, current image editing methods often face challenges such as
image distortion, training cost, and loss of fine details in the sketch. To
address these limitations, we propose a conditional diffusion model
(SketchFFusion) based on the sketch structure vector. We evaluate the
generative performance of our model and demonstrate that it outperforms
existing methods.
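The conditional setup the abstract describes (a diffusion model guided by a sketch structure signal) can be illustrated schematically. The following is a minimal, hypothetical sketch of DDPM-style epsilon-prediction with sketch conditioning by channel concatenation; the linear map `W`, the tiny flattened shapes, and the schedule value are illustrative stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "denoiser": a fixed linear map standing in for a U-Net.
# Real conditional diffusion models use a trained network; this only
# illustrates the data flow of conditioning on a sketch.
W = rng.normal(scale=0.01, size=(2 * 16, 16))

def denoise(noisy, sketch):
    """Predict the noise in a noisy image, conditioned on its sketch."""
    x = np.concatenate([noisy, sketch])  # condition by concatenation
    return x @ W

# One forward-diffusion + loss step (DDPM-style), with tiny shapes.
image = rng.normal(size=16)        # flattened "image"
sketch = rng.normal(size=16)       # flattened "sketch" condition
alpha_bar = 0.9                    # cumulative noise-schedule term
eps = rng.normal(size=16)          # true noise sample
noisy = np.sqrt(alpha_bar) * image + np.sqrt(1 - alpha_bar) * eps

pred = denoise(noisy, sketch)
loss = np.mean((pred - eps) ** 2)  # epsilon-prediction MSE objective
print(float(loss))
```

In a real system the denoiser would be trained to drive this loss down over many images and timesteps; at sampling time the same sketch condition steers each denoising step toward the user's drawn contours.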
Related papers
- SketchTriplet: Self-Supervised Scenarized Sketch-Text-Image Triplet Generation [6.39528707908268]
There continues to be a lack of large-scale paired datasets for scene sketches.
We propose a self-supervised method for scene sketch generation that does not rely on any existing scene sketch.
We contribute a large-scale dataset centered around scene sketches, comprising highly semantically consistent "text-sketch-image" triplets.
arXiv Detail & Related papers (2024-05-29T06:43:49Z) - It's All About Your Sketch: Democratising Sketch Control in Diffusion Models [114.73766136068357]
This paper unravels the potential of sketches for diffusion models, addressing the deceptive promise of direct sketch control in generative AI.
We importantly democratise the process, enabling amateur sketches to generate precise images, living up to the commitment of "what you sketch is what you get".
arXiv Detail & Related papers (2024-03-12T01:05:25Z) - CustomSketching: Sketch Concept Extraction for Sketch-based Image
Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z) - Picture that Sketch: Photorealistic Image Generation from Abstract
Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z) - SketchLattice: Latticed Representation for Sketch Manipulation [30.092468954557468]
A key challenge in designing a sketch representation lies in handling the abstract and iconic nature of sketches.
We propose a lattice structured sketch representation that not only removes the bottleneck of requiring vector data but also preserves the structural cues that vector data provides.
Our lattice representation can be effectively encoded using a graph model that uses significantly fewer model parameters (13.5 times fewer) than the existing state of the art.
arXiv Detail & Related papers (2021-08-26T08:02:21Z) - DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z) - Sketch-BERT: Learning Sketch Bidirectional Encoder Representation from
Transformers by Self-supervised Learning of Sketch Gestalt [125.17887147597567]
We present a model for learning Sketch Bidirectional Encoder Representation from Transformers (Sketch-BERT).
We generalize BERT to the sketch domain, with novel proposed components and pre-training algorithms.
We show that the learned representation of Sketch-BERT can help and improve the performance of the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
arXiv Detail & Related papers (2020-05-19T01:35:44Z) - Sketch Less for More: On-the-Fly Fine-Grained Sketch Based Image
Retrieval [203.2520862597357]
Fine-grained sketch-based image retrieval (FG-SBIR) addresses the problem of retrieving a particular photo instance given a user's query sketch.
We reformulate the conventional FG-SBIR framework to tackle these challenges.
We propose an on-the-fly design that starts retrieving as soon as the user starts drawing.
arXiv Detail & Related papers (2020-02-24T15:36:02Z) - Deep Plastic Surgery: Robust and Controllable Image Editing with
Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.