SketchXAI: A First Look at Explainability for Human Sketches
- URL: http://arxiv.org/abs/2304.11744v1
- Date: Sun, 23 Apr 2023 20:28:38 GMT
- Title: SketchXAI: A First Look at Explainability for Human Sketches
- Authors: Zhiyu Qu, Yulia Gryaditskaya, Ke Li, Kaiyue Pang, Tao Xiang, Yi-Zhe
Song
- Abstract summary: This paper introduces human sketches to the landscape of XAI (Explainable Artificial Intelligence).
We argue that sketch, as a "human-centred" data form, represents a natural interface to study explainability.
We design a sketch encoder that accommodates the intrinsic properties of strokes: shape, location, and order.
- Score: 104.13322289903577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper, for the very first time, introduces human sketches to the
landscape of XAI (Explainable Artificial Intelligence). We argue that sketch,
as a "human-centred" data form, represents a natural interface to study
explainability. We focus on cultivating sketch-specific explainability designs.
This starts by identifying strokes as a unique building block that offers a
degree of flexibility in object construction and manipulation impossible in
photos. Following this, we design a simple explainability-friendly sketch
encoder that accommodates the intrinsic properties of strokes: shape, location,
and order. We then move on to define the first-ever XAI task for sketch, that
of stroke location inversion (SLI). Just as we have heat maps for photos, and
correlation matrices for text, SLI offers an explainability angle to sketch in
terms of asking a network how well it can recover stroke locations of an unseen
sketch. We offer qualitative results for readers to interpret as snapshots of
the SLI process in the paper, and as GIFs on the project page. A minor but
interesting note is that thanks to its sketch-specific design, our sketch
encoder also yields the best sketch recognition accuracy to date while having
the smallest number of parameters. The code is available at
https://sketchxai.github.io.
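To make the two sketch-specific ideas in the abstract concrete, below is a minimal PyTorch sketch of (i) an encoder that embeds each stroke's shape, location, and order separately before a transformer backbone, and (ii) the SLI loop, read as: freeze a trained classifier and recover stroke locations by gradient descent. Every class name, dimension, and optimisation detail here is an illustrative assumption, not the authors' implementation; the actual code is linked above.

```python
# Illustrative sketch only: names, dimensions, and the SLI optimisation
# details are assumptions, not the SketchXAI authors' implementation.
import torch
import torch.nn as nn

class StrokeEncoder(nn.Module):
    """Embeds each stroke's shape, location, and order separately,
    then fuses the three embeddings for a transformer backbone."""
    def __init__(self, n_points=32, d_model=256, n_classes=345, max_strokes=64):
        super().__init__()
        # Shape: flattened (dx, dy) point sequence of one stroke.
        self.shape_embed = nn.Linear(n_points * 2, d_model)
        # Location: absolute (x, y) of the stroke's start point.
        self.loc_embed = nn.Linear(2, d_model)
        # Order: learned embedding of the stroke's index in drawing order.
        self.order_embed = nn.Embedding(max_strokes, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, shapes, locations, order_ids):
        # shapes: (B, S, n_points*2); locations: (B, S, 2); order_ids: (B, S)
        tokens = (self.shape_embed(shapes)
                  + self.loc_embed(locations)
                  + self.order_embed(order_ids))
        feats = self.backbone(tokens)             # (B, S, d_model)
        return self.head(feats.mean(dim=1))       # (B, n_classes)

def stroke_location_inversion(model, shapes, order_ids, label, steps=200, lr=0.1):
    """SLI as the abstract describes it: freeze the classifier and ask it
    to recover plausible stroke locations by gradient descent alone."""
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)
    # Start from random locations and optimise them against the class label.
    locations = torch.randn(shapes.shape[0], shapes.shape[1], 2, requires_grad=True)
    opt = torch.optim.Adam([locations], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(shapes, locations, order_ids), label)
        loss.backward()
        opt.step()
    return locations.detach()
```

Under this reading, intermediate values of `locations` along the optimisation are what the paper presents as snapshots and GIFs of the SLI process.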
Related papers
- SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation [111.2195741547517]
We present a method to generate controlled sketches using a text-conditioned diffusion model trained on pixel representations of images.
Our objective is to empower non-professional users to create sketches and, through a series of optimisation processes, transform a narrative into a storyboard.
arXiv Detail & Related papers (2023-08-27T19:44:44Z)
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how well you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested end-to-end, making it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context [112.07988211268612]
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO.
Our dataset comprises 10,000 freehand scene vector sketches with per-point space-time information, drawn by 100 non-expert individuals.
We study, for the first time, the problem of fine-grained image retrieval from freehand scene sketches and sketch captions.
arXiv Detail & Related papers (2022-03-04T03:00:51Z)
- Sketch-BERT: Learning Sketch Bidirectional Encoder Representation from Transformers by Self-supervised Learning of Sketch Gestalt [125.17887147597567]
We present Sketch-BERT, a model that learns Sketch Bidirectional Encoder Representations from Transformers.
We generalize BERT to the sketch domain with newly proposed components and pre-training algorithms.
We show that the learned representation of Sketch-BERT improves performance on the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt (a minimal illustration of this pre-training idea follows the list).
arXiv Detail & Related papers (2020-05-19T01:35:44Z)
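For the Sketch-BERT entry above, here is a minimal, hedged reading of "sketch gestalt" style self-supervised pre-training: mask part of a point sequence and train the model to reconstruct it. All names and details are assumptions for illustration, not the Sketch-BERT authors' code.

```python
# Illustrative BERT-style masking objective for sketches; names and
# details are assumptions, not the Sketch-BERT implementation.
import torch
import torch.nn as nn

def mask_points(points, mask_ratio=0.15):
    """points: (B, N, 3) sequences of (dx, dy, pen_state).
    Returns the masked input and a boolean mask of hidden points."""
    mask = torch.rand(points.shape[:2]) < mask_ratio   # (B, N)
    masked = points.clone()
    masked[mask] = 0.0                                 # zero out masked points
    return masked, mask

def gestalt_loss(model, points):
    """Predict the original coordinates of the masked points,
    analogous to BERT's masked-token reconstruction."""
    masked, mask = mask_points(points)
    pred = model(masked)                               # (B, N, 3) reconstruction
    return nn.functional.mse_loss(pred[mask], points[mask])
```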
This list is automatically generated from the titles and abstracts of the papers on this site.