CustomSketching: Sketch Concept Extraction for Sketch-based Image
Synthesis and Editing
- URL: http://arxiv.org/abs/2402.17624v1
- Date: Tue, 27 Feb 2024 15:52:59 GMT
- Title: CustomSketching: Sketch Concept Extraction for Sketch-based Image
Synthesis and Editing
- Authors: Chufeng Xiao and Hongbo Fu
- Abstract summary: Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
- Score: 21.12815542848095
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Personalization techniques for large text-to-image (T2I) models allow users
to incorporate new concepts from reference images. However, existing methods
primarily rely on textual descriptions, leading to limited control over
customized images and failing to support fine-grained and local editing (e.g.,
shape, pose, and details). In this paper, we identify sketches as an intuitive
and versatile representation that can facilitate such control, e.g., contour
lines capturing shape information and flow lines representing texture. This
motivates us to explore a novel task of sketch concept extraction: given one or
more sketch-image pairs, we aim to extract a special sketch concept that
bridges the correspondence between the images and sketches, thus enabling
sketch-based image synthesis and editing at a fine-grained level. To accomplish
this, we introduce CustomSketching, a two-stage framework for extracting novel
sketch concepts. Considering that an object can often be depicted by a contour
for general shapes and additional strokes for internal details, we introduce a
dual-sketch representation to reduce the inherent ambiguity in sketch
depiction. We employ a shape loss and a regularization loss to balance fidelity
and editability during optimization. Through extensive experiments, a user
study, and several applications, we show our method is effective and superior
to the adapted baselines.
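The abstract describes an optimization that trades off a shape loss (fidelity to the input sketch) against a regularization loss (keeping the learned sketch concept editable). The paper's formulas are not given here, so the snippet below is only a hypothetical illustration of such a weighted loss combination in PyTorch; all function names, tensor shapes, and weights (w_shape, w_reg) are assumptions, not the authors' implementation.
```python
import torch
import torch.nn.functional as F

def total_loss(pred_image, target_image, pred_mask, sketch_mask,
               new_embeddings, init_embeddings,
               w_shape=0.5, w_reg=0.01):
    # Reconstruction term (e.g., a pixel/denoising objective on the output).
    rec = F.mse_loss(pred_image, target_image)
    # Shape term: encourage the generated layout to follow the sketch contour.
    shape = F.mse_loss(pred_mask, sketch_mask)
    # Regularization term: keep the learned concept embeddings close to their
    # initialization so the concept remains editable via text prompts.
    reg = F.mse_loss(new_embeddings, init_embeddings)
    return rec + w_shape * shape + w_reg * reg

# Minimal usage with dummy tensors (shapes are illustrative only).
pred = torch.rand(1, 3, 64, 64)
target = torch.rand(1, 3, 64, 64)
pmask = torch.rand(1, 1, 64, 64)
smask = torch.rand(1, 1, 64, 64)
emb = torch.rand(4, 768, requires_grad=True)
emb0 = torch.rand(4, 768)
loss = total_loss(pred, target, pmask, smask, emb, emb0)
```
In this reading, lowering w_reg favors fidelity to the reference sketch-image pairs, while raising it favors editability; the actual balancing mechanism is described in the paper itself.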
Related papers
- SketchTriplet: Self-Supervised Scenarized Sketch-Text-Image Triplet Generation [6.39528707908268]
There continues to be a lack of large-scale paired datasets for scene sketches.
We propose a self-supervised method for scene sketch generation that does not rely on any existing scene sketch.
We contribute a large-scale dataset centered around scene sketches, comprising highly semantically consistent "text-sketch-image" triplets.
arXiv Detail & Related papers (2024-05-29T06:43:49Z) - Block and Detail: Scaffolding Sketch-to-Image Generation [65.56590359051634]
We introduce a novel sketch-to-image tool that aligns with the iterative refinement process of artists.
Our tool lets users sketch blocking strokes to coarsely represent the placement and form of objects and detail strokes to refine their shape and silhouettes.
We develop a two-pass algorithm for generating high-fidelity images from such sketches at any point in the iterative process.
arXiv Detail & Related papers (2024-02-28T07:09:31Z) - SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z) - SketchFFusion: Sketch-guided image editing with diffusion model [25.63913085329606]
Sketch-guided image editing aims to achieve local fine-tuning of the image based on the sketch information provided by the user.
We propose a sketch generation scheme that can preserve the main contours of an image and closely adhere to the actual sketch style drawn by the user.
arXiv Detail & Related papers (2023-04-06T15:54:18Z) - Reference-based Image Composition with Sketch via Structure-aware
Diffusion Model [38.1193912666578]
We introduce a multi-input-conditioned image composition model that incorporates a sketch as a novel modal, alongside a reference image.
Thanks to the edge-level controllability using sketches, our method enables a user to edit or complete an image sub-part.
Our framework fine-tunes a pre-trained diffusion model to complete missing regions using the reference image while maintaining sketch guidance.
arXiv Detail & Related papers (2023-03-31T06:12:58Z) - Unsupervised Scene Sketch to Photo Synthesis [40.044690369936184]
We present a method for synthesizing realistic photos from scene sketches.
Our framework learns from readily available large-scale photo datasets in an unsupervised manner.
We also demonstrate that our framework facilitates a controllable manipulation of photo synthesis by editing strokes of corresponding sketches.
arXiv Detail & Related papers (2022-09-06T22:25:06Z) - I Know What You Draw: Learning Grasp Detection Conditioned on a Few
Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, making it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z) - Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z) - SketchDesc: Learning Local Sketch Descriptors for Multi-view
Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
arXiv Detail & Related papers (2020-01-16T11:31:21Z) - Deep Plastic Surgery: Robust and Controllable Image Editing with
Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)