SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation
- URL: http://arxiv.org/abs/2308.14191v1
- Date: Sun, 27 Aug 2023 19:44:44 GMT
- Title: SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation
- Authors: Zhiyu Qu and Tao Xiang and Yi-Zhe Song
- Abstract summary: We present a method to generate controlled sketches using a text-conditioned diffusion model trained on pixel representations of images.
Our objective is to empower non-professional users to create sketches and, through a series of optimisation processes, transform a narrative into a storyboard.
- Score: 111.2195741547517
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence Generated Content (AIGC) has shown remarkable
progress in generating realistic images. However, in this paper, we take a step
"backward" and address AIGC for the most rudimentary visual modality of human
sketches. We focus on the creative nature of sketches and argue that creative
sketching should take the form of an interactive process. We further
enable text to drive the sketch ideation process, allowing creativity to be
freely defined, while simultaneously tackling the challenge of "I can't
sketch". We present a method to generate controlled sketches using a
text-conditioned diffusion model trained on pixel representations of images.
Our proposed approach, referred to as SketchDreamer, integrates a
differentiable rasteriser of Bezier curves that optimises an initial input to
distil abstract semantic knowledge from a pretrained diffusion model. We
utilise Score Distillation Sampling to learn a sketch that aligns with a given
caption, which importantly enables both text and sketch to interact with the
ideation process. Our objective is to empower non-professional users to create
sketches and, through a series of optimisation processes, transform a narrative
into a storyboard by expanding the text prompt while making minor adjustments
to the sketch input. Through this work, we hope to transform the way we create
visual content, democratise the creative process, and inspire further research
in enhancing human creativity in AIGC. The code is available at
\url{https://github.com/WinKawaks/SketchDreamer}.
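To make the pipeline concrete, here is a minimal, self-contained sketch of the optimisation loop, assuming a soft Gaussian-splat rasteriser for cubic Bezier strokes and a frozen toy convolution standing in for the pretrained text-conditioned denoiser. Every name below is illustrative; the actual system uses a full differentiable vector-graphics rasteriser and a large pretrained diffusion model.

```python
# A minimal, runnable illustration of the SketchDreamer recipe: optimise the
# control points of Bezier strokes through a differentiable rasteriser with a
# Score Distillation Sampling (SDS) style gradient. The pretrained
# text-conditioned denoiser is replaced by a frozen toy conv net so the
# script runs end to end; all names are illustrative, not the authors' API.
import torch

def bezier_points(ctrl, n=64):
    # Evaluate a cubic Bezier curve at n parameter values.
    # ctrl: (4, 2) control points in [0, 1] image coordinates.
    t = torch.linspace(0, 1, n).unsqueeze(1)                      # (n, 1)
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])  # (n, 2)

def rasterise(strokes, size=64, sigma=1.5):
    # Soft differentiable rasteriser: splat each curve sample as a Gaussian.
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float()                  # (H, W, 2)
    ink = torch.zeros(size, size)
    for ctrl in strokes:                                          # each (4, 2)
        pts = bezier_points(ctrl) * (size - 1)                    # pixel coords
        d2 = ((grid.unsqueeze(2) - pts) ** 2).sum(-1)             # (H, W, n)
        ink = ink + torch.exp(-d2 / (2 * sigma ** 2)).sum(-1)
    return 1.0 - ink.clamp(0, 1)                                  # dark strokes, white bg

# Frozen stand-in for a pretrained text-conditioned denoiser eps(x_t, t, text).
denoiser = torch.nn.Conv2d(1, 1, 3, padding=1)
for p in denoiser.parameters():
    p.requires_grad_(False)

strokes = torch.rand(8, 4, 2, requires_grad=True)                 # 8 cubic strokes
opt = torch.optim.Adam([strokes], lr=1e-2)
alpha_bar = torch.linspace(0.999, 0.01, 1000)                     # toy noise schedule

for step in range(200):
    x = rasterise(strokes)[None, None]                            # (1, 1, H, W)
    t = torch.randint(50, 950, (1,))
    a = alpha_bar[t].view(1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = a.sqrt() * x + (1 - a).sqrt() * eps                     # forward diffusion
    eps_hat = denoiser(x_t)                                       # noise prediction
    # SDS: use w(t) * (eps_hat - eps) as a gradient on x, skipping the
    # denoiser's Jacobian, and let autograd push it through the rasteriser.
    grad = (1 - a) * (eps_hat - eps)
    loss = (grad.detach() * x).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The essential point is that the SDS surrogate loss pushes the rendered sketch toward images the denoiser finds likely under the caption, while gradients flow through the rasteriser into the stroke parameters, so both the prompt and the initial strokes steer the outcome.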
Related papers
- SketchAgent: Language-Driven Sequential Sketch Generation [34.96339247291013]
SketchAgent is a language-driven, sequential sketch generation method.
We present an intuitive sketching language, introduced to the model through in-context examples.
By drawing stroke by stroke, our agent captures the evolving, dynamic qualities intrinsic to sketching.
arXiv Detail & Related papers (2024-11-26T18:32:06Z)
- It's All About Your Sketch: Democratising Sketch Control in Diffusion Models [114.73766136068357]
This paper unravels the potential of sketches for diffusion models, addressing the deceptive promise of direct sketch control in generative AI.
We importantly democratise the process, enabling amateur sketches to generate precise images, living up to the commitment of "what you sketch is what you get".
arXiv Detail & Related papers (2024-03-12T01:05:25Z)
- DiffSketching: Sketch Control Image Synthesis with Diffusion Models [10.172753521953386]
Deep learning models for sketch-to-image synthesis must cope with distorted input sketches that lack visual detail.
Our model matches sketches through cross-domain constraints and uses a classifier to guide the image synthesis more accurately (a generic guided step is sketched after this entry).
Our model beats GAN-based methods in generation quality and human evaluation, and does not rely on massive sketch-image datasets.
arXiv Detail & Related papers (2023-05-30T07:59:23Z)
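For readers unfamiliar with classifier guidance, the idea is standard (Dhariwal & Nichol, 2021): the gradient of a classifier's log-probability with respect to the noisy image shifts the denoiser's noise prediction at every reverse step. The sketch below is a generic illustration of that technique, not DiffSketching's code; `denoiser`, `classifier`, and `alpha_bar` are hypothetical stand-ins.

```python
# A generic classifier-guided denoising step in the style of Dhariwal &
# Nichol (2021) -- an illustration of the technique, not DiffSketching's
# code. `denoiser`, `classifier`, and `alpha_bar` are hypothetical stand-ins.
import torch

def guided_ddim_step(x_t, t, y, denoiser, classifier, alpha_bar, scale=2.0):
    # Classifier gradient: nudge x_t toward images classified as label y.
    x = x_t.detach().requires_grad_(True)
    log_p = classifier(x, t).log_softmax(dim=-1)
    selected = log_p[torch.arange(len(y)), y].sum()
    grad = torch.autograd.grad(selected, x)[0]
    # Shift the predicted noise by the scaled classifier gradient.
    eps = denoiser(x_t, t) - scale * (1 - alpha_bar[t]).sqrt() * grad
    # Deterministic DDIM update from step t to step t-1.
    x0 = (x_t - (1 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
    return alpha_bar[t - 1].sqrt() * x0 + (1 - alpha_bar[t - 1]).sqrt() * eps
```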
- Picture that Sketch: Photorealistic Image Generation from Abstract Sketches [109.69076457732632]
Given an abstract, deformed, ordinary sketch from untrained amateurs like you and me, this paper turns it into a photorealistic image.
We do not dictate an edgemap-like sketch to start with, but aim to work with abstract free-hand human sketches.
In doing so, we essentially democratise the sketch-to-photo pipeline, "picturing" a sketch regardless of how well you sketch.
arXiv Detail & Related papers (2023-03-20T14:49:03Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate potential grasp configurations for the objects depicted in a sketch.
Our model is trained and tested end to end, which makes it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- DoodleFormer: Creative Sketch Drawing with Transformers [68.18953603715514]
Creative sketching or doodling is an expressive activity, where imaginative and previously unseen depictions of everyday visual objects are drawn.
Here, we propose DoodleFormer, a novel coarse-to-fine two-stage framework that decomposes creative sketch generation into first creating a coarse sketch composition and then adding fine details.
To ensure diversity of the generated creative sketches, we introduce a probabilistic coarse sketch decoder.
arXiv Detail & Related papers (2021-12-06T18:59:59Z)
- Creative Sketch Generation [48.16835161875747]
We introduce two datasets of creative sketches -- Creative Birds and Creative Creatures -- containing 10k sketches each along with part annotations.
We propose DoodlerGAN -- a part-based Generative Adversarial Network (GAN) -- to generate unseen compositions of novel part appearances.
Quantitative evaluations as well as human studies demonstrate that sketches generated by our approach are more creative and of higher quality than those of existing approaches.
arXiv Detail & Related papers (2020-11-19T18:57:00Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.