Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings
- URL: http://arxiv.org/abs/2303.11502v3
- Date: Thu, 30 Mar 2023 15:08:36 GMT
- Title: Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings
- Authors: Ayan Kumar Bhunia, Subhadeep Koley, Amandeep Kumar, Aneeshan Sain,
Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song
- Abstract summary: We study how sketches can be used as a weak label to detect salient objects present in an image.
To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo.
Tests prove our hypothesis and delineate how our sketch-based saliency detection model gives a competitive performance compared to the state-of-the-art.
- Score: 99.9788496281408
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Human sketch has already proved its worth in various visual understanding
tasks (e.g., retrieval, segmentation, image-captioning, etc.). In this paper, we
reveal a new trait of sketches - that they are also salient. This is intuitive
as sketching is a natural attentive process at its core. More specifically, we
aim to study how sketches can be used as a weak label to detect salient objects
present in an image. To this end, we propose a novel method that emphasises
how "salient object" could be explained by hand-drawn sketches. To accomplish
this, we introduce a photo-to-sketch generation model that aims to generate
sequential sketch coordinates corresponding to a given visual photo through a
2D attention mechanism. Attention maps accumulated across the time steps give
rise to salient regions in the process. Extensive quantitative and qualitative
experiments prove our hypothesis and delineate how our sketch-based saliency
detection model gives a competitive performance compared to the
state-of-the-art.
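To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the idea described in the abstract: a convolutional photo encoder produces a spatial feature grid, a recurrent decoder emits one sketch coordinate per time step while attending over that grid with a 2D attention, and the per-step attention maps are summed and normalised into a saliency map. All module names, layer sizes, and the 5-element stroke format are illustrative assumptions rather than the authors' implementation.
```python
# Hypothetical illustration (not the authors' code): accumulate a sequential
# sketch decoder's 2D attention maps into a saliency map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhotoToSketchSaliency(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256, coord_dim=5):
        super().__init__()
        # Toy CNN backbone producing a spatial feature map (B, feat_dim, H, W).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Recurrent decoder emitting one stroke token per time step.
        self.decoder = nn.GRUCell(feat_dim + coord_dim, hidden_dim)
        self.query = nn.Linear(hidden_dim, feat_dim)      # attention query from the hidden state
        self.to_coord = nn.Linear(hidden_dim, coord_dim)  # (dx, dy, pen states), sketch-RNN style

    def forward(self, photo, steps=50):
        feats = self.backbone(photo)                      # (B, C, H, W)
        B, C, H, W = feats.shape
        keys = feats.flatten(2).transpose(1, 2)           # (B, H*W, C) attention keys/values
        h = feats.new_zeros(B, self.decoder.hidden_size)
        prev = feats.new_zeros(B, self.to_coord.out_features)
        saliency = feats.new_zeros(B, H * W)
        coords = []
        for _ in range(steps):
            # 2D attention over the photo feature grid at this drawing step.
            attn = torch.softmax((keys @ self.query(h).unsqueeze(-1)).squeeze(-1), dim=-1)
            context = (attn.unsqueeze(-1) * keys).sum(dim=1)
            h = self.decoder(torch.cat([context, prev], dim=-1), h)
            prev = self.to_coord(h)
            coords.append(prev)
            saliency = saliency + attn                    # accumulate attention across time steps
        # Normalise the accumulated attention and resize it to the input resolution.
        saliency = saliency.view(B, 1, H, W)
        saliency = saliency / saliency.amax(dim=(2, 3), keepdim=True).clamp_min(1e-8)
        saliency = F.interpolate(saliency, size=photo.shape[-2:],
                                 mode="bilinear", align_corners=False)
        return torch.stack(coords, dim=1), saliency       # stroke sequence, saliency map
```
For example, `PhotoToSketchSaliency()(torch.rand(1, 3, 256, 256))` returns a (1, 50, 5) stroke sequence and a (1, 1, 256, 256) saliency map; in the paper's setting the stroke sequence is what human sketches weakly supervise, so the saliency map emerges without pixel-level labels.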
Related papers
- What Can Human Sketches Do for Object Detection? [127.67444974452411]
Sketches are highly expressive, inherently capturing subjective and fine-grained visual cues.
A sketch-enabled object detection framework detects based on what you sketch -- "that zebra".
We show an intuitive synergy between foundation models (e.g., CLIP) and existing sketch models built for sketch-based image retrieval (SBIR).
In particular, we first perform independent prompting on both the sketch and photo branches of an SBIR model to build highly generalisable sketch and photo encoders.
arXiv Detail & Related papers (2023-03-27T12:33:23Z)
- Towards Practicality of Sketch-Based Visual Understanding [15.30818342202786]
Sketches have been used to conceptualise and depict visual objects since prehistoric times.
This thesis aims to progress sketch-based visual understanding towards more practicality.
arXiv Detail & Related papers (2022-10-27T03:12:57Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, which makes it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- CLIPasso: Semantically-Aware Object Sketching [34.53644912236454]
We present an object sketching method that can achieve different levels of abstraction, guided by geometric and semantic simplifications.
We define a sketch as a set of Bézier curves and use a differentiable rasterizer to optimize the parameters of the curves directly with respect to a CLIP-based perceptual loss (a rough illustrative sketch of this optimisation loop appears after this list).
arXiv Detail & Related papers (2022-02-11T18:35:25Z)
- SketchEmbedNet: Learning Novel Concepts by Imitating Drawings [125.45799722437478]
We explore properties of image representations learned by training a model to produce sketches of images.
We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting.
arXiv Detail & Related papers (2020-08-27T16:43:28Z)
- Cross-Modal Hierarchical Modelling for Fine-Grained Sketch Based Image Retrieval [147.24102408745247]
We study a further trait of sketches that has been overlooked to date, that is, they are hierarchical in terms of the levels of detail.
In this paper, we design a novel network that is capable of cultivating sketch-specific hierarchies and exploiting them to match sketch with photo at corresponding hierarchical levels.
arXiv Detail & Related papers (2020-07-29T20:50:25Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)
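The CLIPasso entry above describes an optimisation loop that is easy to illustrate. Below is a rough, self-contained approximation: a sketch is parameterised as cubic Bézier control points, rendered with a naive differentiable Gaussian "splatting" of sampled curve points (standing in for the differentiable vector-graphics rasterizer the paper actually uses), and optimised against a CLIP image-embedding loss. The rendering scheme, hyper-parameters, and helper names are assumptions for illustration only, not the authors' implementation.
```python
# Rough illustration of a CLIPasso-style loop (see the hedging note above):
# optimise cubic Bezier control points so the rendered sketch stays close to
# the input photo in CLIP embedding space.
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model.eval()

def cubic_bezier(ctrl, t):
    # ctrl: (num_curves, 4, 2) control points in [0, 1]; t: (num_samples,)
    c0, c1, c2, c3 = ctrl[:, 0], ctrl[:, 1], ctrl[:, 2], ctrl[:, 3]
    t = t.view(1, -1, 1)
    return ((1 - t) ** 3 * c0.unsqueeze(1) + 3 * (1 - t) ** 2 * t * c1.unsqueeze(1)
            + 3 * (1 - t) * t ** 2 * c2.unsqueeze(1) + t ** 3 * c3.unsqueeze(1))

def render(ctrl, size=224, sigma=2.0):
    # Naive differentiable stand-in for a rasterizer: splat sampled curve
    # points onto a white canvas as soft Gaussian strokes.
    pts = cubic_bezier(ctrl, torch.linspace(0, 1, 32, device=ctrl.device))  # (N, 32, 2)
    pts = pts.reshape(-1, 2) * (size - 1)
    ys, xs = torch.meshgrid(
        torch.arange(size, device=ctrl.device, dtype=torch.float32),
        torch.arange(size, device=ctrl.device, dtype=torch.float32),
        indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)                     # (size*size, 2)
    # Squared pixel-to-point distances via the quadratic expansion.
    d2 = (grid.pow(2).sum(-1, keepdim=True) + pts.pow(2).sum(-1)
          - 2.0 * grid @ pts.T).clamp(min=0.0)
    ink = torch.exp(-d2 / (2 * sigma ** 2)).sum(-1).clamp(max=1.0)
    canvas = 1.0 - ink.view(size, size)                                     # dark strokes on white
    return canvas.expand(1, 3, size, size)                                  # (1, 3, H, W) RGB image

def clip_embed(img):
    # CLIP's published input normalisation constants.
    mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=img.device).view(1, 3, 1, 1)
    std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=img.device).view(1, 3, 1, 1)
    feat = model.encode_image((img - mean) / std)
    return feat / feat.norm(dim=-1, keepdim=True)

def fit_sketch(photo, num_curves=16, steps=500, lr=0.01):
    # photo: (1, 3, 224, 224) tensor with values in [0, 1]
    with torch.no_grad():
        target = clip_embed(photo.to(device))
    ctrl = torch.rand(num_curves, 4, 2, device=device, requires_grad=True)
    opt = torch.optim.Adam([ctrl], lr=lr)
    for _ in range(steps):
        loss = 1.0 - (clip_embed(render(ctrl)) * target).sum()  # cosine distance in CLIP space
        opt.zero_grad()
        loss.backward()
        opt.step()
        ctrl.data.clamp_(0.0, 1.0)                              # keep curves inside the canvas
    return ctrl.detach()
```
Using fewer curves (or a wider stroke sigma) yields a more abstract drawing, which mirrors the paper's point about achieving different levels of abstraction.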
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences arising from its use.