Sketch-based Creativity Support Tools using Deep Learning
- URL: http://arxiv.org/abs/2111.09991v1
- Date: Fri, 19 Nov 2021 00:57:43 GMT
- Title: Sketch-based Creativity Support Tools using Deep Learning
- Authors: Forrest Huang, Eldon Schoop, David Ha, Jeffrey Nichols, John Canny
- Abstract summary: Recent developments in deep-learning models drastically improved machines' ability to understand and generate visual content.
An exciting area of development explores deep-learning approaches used to model human sketches, opening opportunities for creative applications.
This chapter describes three fundamental steps in developing deep-learning-driven creativity support tools that consume and generate sketches.
- Score: 23.366634691081593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sketching is a natural and effective visual communication medium commonly
used in creative processes. Recent developments in deep-learning models
drastically improved machines' ability to understand and generate visual
content. An exciting area of development explores deep-learning approaches used
to model human sketches, opening opportunities for creative applications. This
chapter describes three fundamental steps in developing deep-learning-driven
creativity support tools that consume and generate sketches: 1) a data
collection effort that generated a new paired dataset between sketches and
mobile user interfaces; 2) a sketch-based user interface retrieval system
adapted from state-of-the-art computer vision techniques; and 3) a
conversational sketching system that supports the novel interaction of a
natural-language-based sketch/critique authoring process. In this chapter, we
survey relevant prior work in both the deep-learning and
human-computer-interaction communities, document the data collection process
and the systems' architectures in detail, present qualitative and quantitative
results, and paint the landscape of several future research directions in this
exciting area.
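To make the abstract's second step more concrete, below is a minimal, hypothetical sketch of how a sketch-based user interface retrieval system can be framed as embedding-based nearest-neighbor search. It is not the chapter's actual implementation: the ResNet-18 encoder, the cosine-similarity ranking, and the file paths are all illustrative assumptions.

```python
# Minimal illustrative sketch of sketch-based UI retrieval (NOT the paper's system).
# Assumptions: an off-the-shelf ResNet-18 backbone as a generic image encoder and a
# simple cosine-similarity nearest-neighbor search over precomputed UI embeddings.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Use the CNN up to the global-average-pool layer as the encoder (drop the classifier).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    """Encode an image (sketch or UI screenshot) into a unit-norm feature vector."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(encoder(x).flatten(1), dim=1)

# Hypothetical corpus of UI screenshots; embeddings would normally be precomputed offline.
ui_paths = ["ui_screens/login.png", "ui_screens/settings.png", "ui_screens/feed.png"]
ui_embeddings = torch.cat([embed(p) for p in ui_paths])   # shape (N, D)

query = embed("query_sketch.png")                         # shape (1, D)
scores = query @ ui_embeddings.T                          # cosine similarities
best = scores.squeeze(0).argsort(descending=True)
print([ui_paths[int(i)] for i in best[:3]])               # top-3 retrieved UI screens
```

In practice, the encoder would be trained on a paired sketch/UI dataset (such as the one described in step 1) so that a sketch and its corresponding screen map to nearby embeddings; the pretrained backbone above only illustrates the retrieval mechanics.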
Related papers
- Choreographing the Digital Canvas: A Machine Learning Approach to Artistic Performance [9.218587190403174]
This paper introduces the concept of a design tool for artistic performances based on attribute descriptions.
The platform integrates a novel machine-learning (ML) model with an interactive interface to generate and visualize artistic movements.
arXiv Detail & Related papers (2024-03-26T01:42:13Z)
- Sketch Input Method Editor: A Comprehensive Dataset and Methodology for Systematic Input Recognition [14.667745062352148]
This study aims to create a Sketch Input Method Editor (SketchIME) specifically designed for a professional C4I system.
Within this system, sketches are utilized as low-fidelity prototypes for recommending standardized symbols.
Incorporating few-shot domain adaptation and class-incremental learning significantly enhances the network's ability to adapt to new users.
arXiv Detail & Related papers (2023-11-30T05:05:38Z)
- SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation [111.2195741547517]
We present a method to generate controlled sketches using a text-conditioned diffusion model trained on pixel representations of images.
Our objective is to empower non-professional users to create sketches and, through a series of optimisation processes, transform a narrative into a storyboard.
arXiv Detail & Related papers (2023-08-27T19:44:44Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- K-LITE: Learning Transferable Visual Models with External Knowledge [242.3887854728843]
K-LITE (Knowledge-augmented Language-Image Training and Evaluation) is a strategy to leverage external knowledge to build transferable visual systems.
In training, it enriches entities in natural language with WordNet and Wiktionary knowledge.
In evaluation, the natural language is also augmented with external knowledge and then used to reference learned visual concepts.
arXiv Detail & Related papers (2022-04-20T04:47:01Z)
- SketchEmbedNet: Learning Novel Concepts by Imitating Drawings [125.45799722437478]
We explore properties of image representations learned by training a model to produce sketches of images.
We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting.
arXiv Detail & Related papers (2020-08-27T16:43:28Z)
- SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
arXiv Detail & Related papers (2020-01-16T11:31:21Z)
- Deep Learning for Free-Hand Sketch: A Survey [159.63186738971953]
Free-hand sketches are highly illustrative, and have been widely used by humans to depict objects or stories from ancient times to the present.
The recent prevalence of touchscreen devices has made sketch creation a much easier task than ever and made sketch-oriented applications increasingly popular.
arXiv Detail & Related papers (2020-01-08T16:23:56Z)
- A Gentle Introduction to Deep Learning for Graphs [23.809161531445053]
This work is designed as a tutorial introduction to the field of deep learning for graphs.
It introduces a general formulation of graph representation learning based on a local and iterative approach to structured information processing.
It introduces the basic building blocks that can be combined to design novel and effective neural models for graphs.
arXiv Detail & Related papers (2019-12-29T16:43:39Z)