Creative Sketch Generation
- URL: http://arxiv.org/abs/2011.10039v2
- Date: Wed, 3 Mar 2021 20:01:54 GMT
- Title: Creative Sketch Generation
- Authors: Songwei Ge, Vedanuj Goswami, C. Lawrence Zitnick and Devi Parikh
- Abstract summary: We introduce two datasets of creative sketches -- Creative Birds and Creative Creatures -- containing 10k sketches each along with part annotations.
We propose DoodlerGAN -- a part-based Generative Adversarial Network (GAN) -- to generate unseen compositions of novel part appearances.
Quantitative evaluations as well as human studies demonstrate that sketches generated by our approach are more creative and of higher quality than existing approaches.
- Score: 48.16835161875747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sketching or doodling is a popular creative activity that people engage in.
However, most existing work in automatic sketch understanding or generation has
focused on sketches that are quite mundane. In this work, we introduce two
datasets of creative sketches -- Creative Birds and Creative Creatures --
containing 10k sketches each along with part annotations. We propose DoodlerGAN
-- a part-based Generative Adversarial Network (GAN) -- to generate unseen
compositions of novel part appearances. Quantitative evaluations as well as
human studies demonstrate that sketches generated by our approach are more
creative and of higher quality than existing approaches. In fact, in Creative
Birds, subjects prefer sketches generated by DoodlerGAN over those drawn by
humans! Our code can be found at https://github.com/facebookresearch/DoodlerGAN
and a demo can be found at http://doodlergan.cloudcv.org.
Related papers
- SketchAgent: Language-Driven Sequential Sketch Generation [34.96339247291013]
SketchAgent is a language-driven, sequential sketch generation method.
We present an intuitive sketching language, introduced to the model through in-context examples.
By drawing stroke by stroke, our agent captures the evolving, dynamic qualities intrinsic to sketching.
arXiv Detail & Related papers (2024-11-26T18:32:06Z)
- SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation [111.2195741547517]
We present a method to generate controlled sketches using a text-conditioned diffusion model trained on pixel representations of images.
Our objective is to empower non-professional users to create sketches and, through a series of optimisation processes, transform a narrative into a storyboard.
arXiv Detail & Related papers (2023-08-27T19:44:44Z)
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how good you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- Exploring Latent Dimensions of Crowd-sourced Creativity [0.02294014185517203]
We build our work on the largest AI-based creativity platform, Artbreeder.
We explore the latent dimensions of images generated on this platform and present a novel framework for manipulating images to make them more creative.
arXiv Detail & Related papers (2021-12-13T19:24:52Z)
- DoodleFormer: Creative Sketch Drawing with Transformers [68.18953603715514]
Creative sketching or doodling is an expressive activity, where imaginative and previously unseen depictions of everyday visual objects are drawn.
Here, we propose a novel coarse-to-fine two-stage framework, DoodleFormer, that decomposes the creative sketch generation problem into first creating a coarse sketch composition and then adding fine details.
To ensure diversity of the generated creative sketches, we introduce a probabilistic coarse sketch decoder.
arXiv Detail & Related papers (2021-12-06T18:59:59Z)
- SketchEmbedNet: Learning Novel Concepts by Imitating Drawings [125.45799722437478]
We explore properties of image representations learned by training a model to produce sketches of images.
We show that this generative, class-agnostic model produces informative embeddings of images from novel examples, classes, and even novel datasets in a few-shot setting.
arXiv Detail & Related papers (2020-08-27T16:43:28Z)
- Exploring Crowd Co-creation Scenarios for Sketches [49.578304437046384]
We study several human-only collaborative co-creation scenarios.
The goal in each scenario is to create a digital sketch using a simple web interface.
We find that settings in which multiple humans iteratively add strokes and vote on the best additions result in the sketches with highest perceived creativity.
arXiv Detail & Related papers (2020-05-15T02:28:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.