CLIPasso: Semantically-Aware Object Sketching
- URL: http://arxiv.org/abs/2202.05822v1
- Date: Fri, 11 Feb 2022 18:35:25 GMT
- Title: CLIPasso: Semantically-Aware Object Sketching
- Authors: Yael Vinker, Ehsan Pajouheshgar, Jessica Y. Bo, Roman Christian
Bachmann, Amit Haim Bermano, Daniel Cohen-Or, Amir Zamir, Ariel Shamir
- Abstract summary: We present an object sketching method that can achieve different levels of abstraction, guided by geometric and semantic simplifications.
We define a sketch as a set of Bézier curves and use a differentiable rasterizer to optimize the parameters of the curves directly with respect to a CLIP-based perceptual loss.
- Score: 34.53644912236454
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Abstraction is at the heart of sketching due to the simple and minimal nature
of line drawings. Abstraction entails identifying the essential visual
properties of an object or scene, which requires semantic understanding and
prior knowledge of high-level concepts. Abstract depictions are therefore
challenging for artists, and even more so for machines. We present an object
sketching method that can achieve different levels of abstraction, guided by
geometric and semantic simplifications. While sketch generation methods often
rely on explicit sketch datasets for training, we utilize the remarkable
ability of CLIP (Contrastive-Language-Image-Pretraining) to distill semantic
concepts from sketches and images alike. We define a sketch as a set of
Bézier curves and use a differentiable rasterizer to optimize the parameters
of the curves directly with respect to a CLIP-based perceptual loss. The
abstraction degree is controlled by varying the number of strokes. The
generated sketches demonstrate multiple levels of abstraction while maintaining
recognizability, underlying structure, and essential visual components of the
subject drawn.
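The abstract describes optimizing stroke parameters by gradient descent against a perceptual loss. The toy sketch below illustrates that loop under stated assumptions: a simple point-matching loss and finite-difference gradients stand in for the paper's differentiable rasterizer and CLIP-based loss, so every name here (`bezier_point`, `toy_loss`, `optimize`) is illustrative and not the paper's actual API.

```python
# Toy illustration of curve-parameter optimization, NOT the CLIPasso code.
# A point-matching MSE replaces the CLIP perceptual loss, and finite
# differences replace the differentiable rasterizer's gradients.
import numpy as np

def bezier_point(ctrl, t):
    """Evaluate a cubic Bezier curve at parameter t via De Casteljau."""
    p = ctrl.copy()
    while len(p) > 1:
        p = (1 - t) * p[:-1] + t * p[1:]
    return p[0]

def render_curve(ctrl, n=32):
    """Sample n points along the curve (stand-in for rasterization)."""
    return np.array([bezier_point(ctrl, t) for t in np.linspace(0, 1, n)])

def toy_loss(ctrl, target):
    """Mean squared distance to target points (stand-in for CLIP loss)."""
    return float(np.mean((render_curve(ctrl, len(target)) - target) ** 2))

def optimize(ctrl, target, steps=200, lr=0.5, eps=1e-4):
    """Gradient descent on control points using finite differences."""
    ctrl = ctrl.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(ctrl)
        for i in np.ndindex(ctrl.shape):
            ctrl[i] += eps
            hi = toy_loss(ctrl, target)
            ctrl[i] -= 2 * eps
            lo = toy_loss(ctrl, target)
            ctrl[i] += eps
            grad[i] = (hi - lo) / (2 * eps)
        ctrl -= lr * grad
    return ctrl

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Target points sampled from a reference curve; start from random strokes.
    target = render_curve(np.array([[0., 0.], [1., 2.], [2., -1.], [3., 0.]]), 16)
    init = rng.normal(size=(4, 2))
    print(f"loss: {toy_loss(init, target):.3f} -> "
          f"{toy_loss(optimize(init, target), target):.3f}")
```

In the paper, abstraction is controlled by the number of such strokes; here a single curve keeps the loop readable.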
Related papers
- Do Generalised Classifiers really work on Human Drawn Sketches? [122.11670266648771]
This paper marries large foundation models with human sketch understanding.
We demonstrate what this brings -- a paradigm shift in terms of generalised sketch representation learning.
Our framework surpasses popular sketch representation learning algorithms in both zero-shot and few-shot setups.
arXiv Detail & Related papers (2024-07-04T12:37:08Z)
- How to Handle Sketch-Abstraction in Sketch-Based Image Retrieval? [120.49126407479717]
We propose a sketch-based image retrieval framework capable of handling sketch abstraction at varied levels.
For granularity-level abstraction understanding, we dictate that the retrieval model should not treat all abstraction-levels equally.
Our Acc.@q loss uniquely allows a sketch to narrow/broaden its focus in terms of how stringent the evaluation should be.
arXiv Detail & Related papers (2024-03-11T23:08:29Z)
- CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z)
- Learning Geometry-aware Representations by Sketching [20.957964436294873]
We propose learning to represent a scene by sketching, inspired by human behavior.
Our method, coined Learning by Sketching (LBS), learns to convert an image into a set of colored strokes that explicitly incorporate the geometric information of the scene.
arXiv Detail & Related papers (2023-04-17T12:23:32Z)
- Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings [99.9788496281408]
We study how sketches can be used as a weak label to detect salient objects present in an image.
To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo.
Experiments support our hypothesis and show that our sketch-based saliency detection model performs competitively with the state of the art.
arXiv Detail & Related papers (2023-03-20T23:46:46Z)
- CLIPascene: Scene Sketching with Different Types and Levels of Abstraction [48.30702300230904]
We present a method for converting a given scene image into a sketch using different types and multiple levels of abstraction, controlled along two axes.
The first axis considers the fidelity of the sketch, varying its representation from a more precise portrayal of the input to a looser depiction.
The second is defined by the visual simplicity of the sketch, moving from a detailed depiction to a sparse sketch.
arXiv Detail & Related papers (2022-11-30T18:54:32Z)
- Abstracting Sketches through Simple Primitives [53.04827416243121]
Humans show a high level of abstraction capability in games that require quickly communicating object information.
We propose the Primitive-based Sketch Abstraction task where the goal is to represent sketches using a fixed set of drawing primitives.
Our Primitive-Matching Network (PMN) learns interpretable abstractions of a sketch in a self-supervised manner.
arXiv Detail & Related papers (2022-07-27T14:32:39Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested end-to-end, making it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.