CLIPascene: Scene Sketching with Different Types and Levels of
Abstraction
- URL: http://arxiv.org/abs/2211.17256v2
- Date: Mon, 1 May 2023 15:33:55 GMT
- Title: CLIPascene: Scene Sketching with Different Types and Levels of
Abstraction
- Authors: Yael Vinker, Yuval Alaluf, Daniel Cohen-Or, Ariel Shamir
- Abstract summary: We present a method for converting a given scene image into a sketch using different types and multiple levels of abstraction.
The first considers the fidelity of the sketch, varying its representation from a more precise portrayal of the input to a looser depiction.
The second is defined by the visual simplicity of the sketch, moving from a detailed depiction to a sparse sketch.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we present a method for converting a given scene image into a
sketch using different types and multiple levels of abstraction. We distinguish
between two types of abstraction. The first considers the fidelity of the
sketch, varying its representation from a more precise portrayal of the input
to a looser depiction. The second is defined by the visual simplicity of the
sketch, moving from a detailed depiction to a sparse sketch. Using an explicit
disentanglement into two abstraction axes -- and multiple levels for each one
-- provides users additional control over selecting the desired sketch based on
their personal goals and preferences. To form a sketch at a given level of
fidelity and simplification, we train two MLP networks. The first network
learns the desired placement of strokes, while the second network learns to
gradually remove strokes from the sketch without harming its recognizability
and semantics. Our approach is able to generate sketches of complex scenes
including those with complex backgrounds (e.g., natural and urban settings) and
subjects (e.g., animals and people) while depicting gradual abstractions of the
input scene in terms of fidelity and simplicity.
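The two-network design described in the abstract lends itself to a compact formulation. Below is a minimal, hypothetical PyTorch sketch of the idea; the class names, layer widths, and input conventions are illustrative assumptions, not the authors' released code. One MLP refines the control points of the strokes (the fidelity axis), while a second outputs per-stroke keep probabilities that can be thresholded or annealed to simplify the sketch (the simplicity axis).

```python
import torch
import torch.nn as nn

class StrokePlacementMLP(nn.Module):
    """Refines initial Bezier control points (fidelity axis)."""
    def __init__(self, n_strokes: int, pts_per_stroke: int = 4, hidden: int = 1000):
        super().__init__()
        dim = n_strokes * pts_per_stroke * 2  # (x, y) per control point
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, init_points: torch.Tensor) -> torch.Tensor:
        # Predict offsets relative to the initial stroke placement.
        offsets = self.net(init_points.flatten())
        return init_points + offsets.view_as(init_points)

class StrokeRemovalMLP(nn.Module):
    """Predicts a per-stroke keep probability (simplicity axis)."""
    def __init__(self, n_strokes: int, hidden: int = 1000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_strokes, hidden), nn.ReLU(),
            nn.Linear(hidden, n_strokes), nn.Sigmoid(),
        )

    def forward(self, stroke_mask: torch.Tensor) -> torch.Tensor:
        # Values near 0 fade a stroke out; annealing a threshold sparsifies the sketch.
        return self.net(stroke_mask)

# Illustrative usage with 64 strokes of 4 control points each.
n = 64
placer, remover = StrokePlacementMLP(n), StrokeRemovalMLP(n)
refined = placer(torch.rand(n, 4, 2))   # fidelity: refined stroke geometry
keep = remover(torch.ones(n))           # simplicity: keep probability per stroke
```

In the paper, both networks are trained against CLIP-based objectives so that strokes are first placed, and then gradually removed, without losing the scene's recognizability and semantics.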
Related papers
- Do Generalised Classifiers really work on Human Drawn Sketches? [122.11670266648771]
This paper marries large foundation models with human sketch understanding.
We demonstrate what this brings -- a paradigm shift in terms of generalised sketch representation learning.
Our framework surpasses popular sketch representation learning algorithms in both zero-shot and few-shot setups.
arXiv Detail & Related papers (2024-07-04T12:37:08Z)
- Stylized Face Sketch Extraction via Generative Prior with Limited Data [6.727433982111717]
StyleSketch is a method for extracting high-resolution stylized sketches from a face image.
Using the rich semantics of the deep features from a pretrained StyleGAN, we are able to train a sketch generator with only 16 pairs of face images and their corresponding sketches.
arXiv Detail & Related papers (2024-03-17T16:25:25Z)
- CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z)
- Breathing Life Into Sketches Using Text-to-Video Priors [101.8236605955899]
A sketch is one of the most intuitive and versatile tools humans use to convey their ideas visually.
In this work, we present a method that automatically adds motion to a single-subject sketch.
The output is a short animation provided in vector representation, which can be easily edited.
arXiv Detail & Related papers (2023-11-21T18:09:30Z)
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context [112.07988211268612]
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO.
Our dataset comprises 10,000 freehand scene vector sketches with per-point space-time information, drawn by 100 non-expert individuals.
We study, for the first time, the problem of fine-grained image retrieval from freehand scene sketches and sketch captions.
arXiv Detail & Related papers (2022-03-04T03:00:51Z)
- CLIPasso: Semantically-Aware Object Sketching [34.53644912236454]
We present an object sketching method that can achieve different levels of abstraction, guided by geometric and semantic simplifications.
We define a sketch as a set of Bézier curves and use a differentiable rasterizer to optimize the parameters of the curves directly with respect to a CLIP-based perceptual loss; a minimal sketch of this optimization loop appears after the list below.
arXiv Detail & Related papers (2022-02-11T18:35:25Z)
- One Sketch for All: One-Shot Personalized Sketch Segmentation [84.45203849671003]
We present the first one-shot personalized sketch segmentation method.
Given a single exemplar sketch with part annotations, we aim to segment all sketches belonging to the same category.
Our method preserves the part semantics embedded in the exemplar and is robust to input style and abstraction.
arXiv Detail & Related papers (2021-12-20T20:10:44Z)
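CLIPascene builds on the CLIPasso optimization loop referenced above. The following is a hypothetical, simplified Python sketch of that loop, not code from either paper: rasterize stands in for a differentiable rasterizer such as diffvg, target_image is assumed to be already CLIP-preprocessed, and the loss uses only the final CLIP embedding, whereas the papers also match intermediate-layer activations.

```python
import torch
import clip  # OpenAI's CLIP: https://github.com/openai/CLIP

def optimize_sketch(control_points, target_image, rasterize, steps=2000, lr=1.0):
    """Fit Bezier control points so the rendered sketch matches the target in CLIP space."""
    device = target_image.device
    model, _ = clip.load("ViT-B/32", device=device)  # note: fp16 weights on CUDA
    for p in model.parameters():
        p.requires_grad_(False)                      # CLIP stays frozen; only strokes move
    with torch.no_grad():
        target = model.encode_image(target_image)
        target = target / target.norm(dim=-1, keepdim=True)

    points = control_points.clone().requires_grad_(True)
    opt = torch.optim.Adam([points], lr=lr)
    for _ in range(steps):
        render = rasterize(points)                   # differentiable rendering step
        feat = model.encode_image(render)
        feat = feat / feat.norm(dim=-1, keepdim=True)
        loss = 1.0 - (feat * target).sum()           # cosine distance in CLIP space
        opt.zero_grad()
        loss.backward()
        opt.step()
    return points.detach()
```

The key design choice is that abstraction is obtained by optimizing stroke parameters directly against a perceptual loss, rather than by training an image-to-image model on paired sketch data.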