SketchEmbedNet: Learning Novel Concepts by Imitating Drawings
- URL: http://arxiv.org/abs/2009.04806v4
- Date: Tue, 22 Jun 2021 19:45:09 GMT
- Title: SketchEmbedNet: Learning Novel Concepts by Imitating Drawings
- Authors: Alexander Wang, Mengye Ren, Richard S. Zemel
- Abstract summary: We explore properties of image representations learned by training a model to produce sketches of images.
We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sketch drawings capture the salient information of visual concepts. Previous
work has shown that neural networks are capable of producing sketches of
natural objects drawn from a small number of classes. While earlier approaches
focus on generation quality or retrieval, we explore properties of image
representations learned by training a model to produce sketches of images. We
show that this generative, class-agnostic model produces informative embeddings
of images from novel examples, classes, and even novel datasets in a few-shot
setting. Additionally, we find that these learned representations exhibit
interesting structure and compositionality.
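The few-shot use of these embeddings can be made concrete: freeze the sketch-trained encoder and classify query images by their nearest class centroid in embedding space. Below is a minimal Python/NumPy sketch, assuming a hypothetical `encoder` callable that maps a batch of images to an (N, D) embedding matrix; the nearest-centroid rule is one standard few-shot protocol, not necessarily the paper's exact evaluation.

```python
import numpy as np

def few_shot_classify(encoder, support_images, support_labels, query_images):
    """Nearest-centroid few-shot classification on frozen embeddings.

    `encoder` is assumed to map a batch of images to (N, D) embeddings,
    e.g. the encoder half of a trained sketch-generation model.
    `support_labels` is a NumPy integer array of shape (N_support,).
    """
    z_support = encoder(support_images)   # (N_support, D)
    z_query = encoder(query_images)       # (N_query, D)

    classes = np.unique(support_labels)
    # One prototype per class: the mean embedding of its support examples.
    prototypes = np.stack([
        z_support[support_labels == c].mean(axis=0) for c in classes
    ])                                    # (N_classes, D)

    # Assign each query to the class with the nearest prototype.
    dists = np.linalg.norm(
        z_query[:, None, :] - prototypes[None, :, :], axis=-1
    )                                     # (N_query, N_classes)
    return classes[dists.argmin(axis=1)]
```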
Related papers
- Unsupervised Compositional Concepts Discovery with Text-to-Image Generative Models (2023-06-08)
In this paper, we consider the inverse problem -- given a collection of different images, can we discover the generative concepts that represent each image?
We present an unsupervised approach to discover generative concepts from a collection of images, disentangling different art styles in paintings, objects, and lighting from kitchen scenes, and discovering image classes given ImageNet images.
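At generation time, discovered concepts like these are often recombined in a diffusion model by summing per-concept guidance directions relative to an unconditional prediction. A hedged PyTorch sketch of that standard conjunction rule follows; `unet`, the embeddings, and the weights are assumed inputs, and this is illustrative rather than the paper's exact formulation.

```python
import torch

def composed_noise_pred(unet, x_t, t, uncond_emb, concept_embs, weights):
    # Unconditional noise estimate serves as the baseline direction.
    eps_uncond = unet(x_t, t, uncond_emb)
    eps = eps_uncond
    for c, w in zip(concept_embs, weights):
        # Each discovered concept contributes its own guidance direction.
        eps = eps + w * (unet(x_t, t, c) - eps_uncond)
    return eps
```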
- Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings (2023-03-20)
We study how sketches can be used as a weak label to detect salient objects present in an image.
To accomplish this, we introduce a photo-to-sketch generation model that generates sequential sketch coordinates corresponding to a given photo.
Experiments validate our hypothesis and show that our sketch-based saliency detection model performs competitively with the state of the art.
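Because the weak label is a set of generated stroke coordinates, one plausible way to convert it into a saliency signal is to rasterize the points and smooth the result. The sketch below is illustrative only; the rasterization scheme, coordinate normalization, and `sigma` value are assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def strokes_to_saliency(stroke_points, height, width, sigma=8.0):
    """Rasterize sketch coordinates (normalized to [0, 1]) into a
    coarse saliency map by accumulating points and blurring."""
    sal = np.zeros((height, width), dtype=np.float32)
    for x, y in stroke_points:
        col = min(int(x * (width - 1)), width - 1)
        row = min(int(y * (height - 1)), height - 1)
        sal[row, col] += 1.0
    sal = gaussian_filter(sal, sigma=sigma)  # spread point mass into regions
    return sal / (sal.max() + 1e-8)          # normalize to [0, 1]
```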
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches (2022-05-09)
We propose a method to generate potential grasp configurations for the objects depicted in a sketch.
Our model is trained and tested end-to-end, which makes it easy to deploy in real-world applications.
- Doodle It Yourself: Class Incremental Learning by Drawing a Few Sketches (2022-03-28)
We present a framework that infuses (i) gradient consensus for domain invariant learning, (ii) knowledge distillation for preserving old class information, and (iii) graph attention networks for message passing between old and novel classes.
We experimentally show that sketches offer better class support than text in the context of few-shot class-incremental learning (FSCIL).
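Of the three ingredients, the distillation term is the most standard; a minimal Hinton-style version, which softens teacher and student logits at a temperature T, looks like the following. This is a generic formulation and may differ from the paper's exact loss.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions,
    used to preserve old-class knowledge during incremental learning."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # The T*T factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```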
- Scene Designer: A Unified Model for Scene Search and Synthesis from Sketch (2021-08-16)
Scene Designer is a novel method for searching and generating images using free-hand sketches of scene compositions.
Our core contribution is a single unified model to learn both a cross-modal search embedding for matching sketched compositions to images, and an object embedding for layout synthesis.
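A cross-modal search embedding of this kind is typically trained so that matched sketch/image pairs score higher than mismatched ones within a batch. A symmetric InfoNCE objective is one common choice; the sketch below is illustrative and not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(sketch_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched (sketch, image) pairs."""
    s = F.normalize(sketch_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = s @ v.t() / temperature                     # (B, B) similarities
    targets = torch.arange(s.size(0), device=s.device)   # diagonal matches
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2
```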
- Sketch-BERT: Learning Sketch Bidirectional Encoder Representation from Transformers by Self-supervised Learning of Sketch Gestalt (2020-05-19)
We present Sketch-BERT, a model that learns Sketch Bidirectional Encoder Representations from Transformers.
We generalize BERT to the sketch domain with newly proposed components and pre-training algorithms.
We show that the learned Sketch-BERT representation improves performance on the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
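The pre-training can be pictured as masking steps of the stroke sequence and training the transformer to reconstruct them, which is what recovering the sketch gestalt amounts to. A minimal masking sketch over (dx, dy, pen_state) sequences follows; the masking ratio and mask value are assumptions, not the paper's recipe.

```python
import torch

def mask_sketch_sequence(strokes, mask_ratio=0.15, mask_value=0.0):
    """Randomly hide stroke steps in a (T, 3) tensor of
    (dx, dy, pen_state) points, BERT-style, producing masked inputs
    and a boolean mask marking the reconstruction targets."""
    steps = strokes.size(0)
    mask = torch.rand(steps) < mask_ratio  # which time steps to hide
    masked = strokes.clone()
    masked[mask] = mask_value              # replace hidden steps
    return masked, mask
```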
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.