Hyperstroke: A Novel High-quality Stroke Representation for Assistive Artistic Drawing
- URL: http://arxiv.org/abs/2408.09348v1
- Date: Sun, 18 Aug 2024 04:05:53 GMT
- Title: Hyperstroke: A Novel High-quality Stroke Representation for Assistive Artistic Drawing
- Authors: Haoyun Qin, Jian Lin, Hanyuan Liu, Xueting Liu, Chengze Li
- Abstract summary: We introduce hyperstroke, a novel stroke representation designed to capture precise fine stroke details.
We propose to model assistive drawing with a transformer-based architecture, enabling intuitive and user-friendly drawing applications.
- Score: 12.71408421022756
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Assistive drawing aims to facilitate the creative process by providing intelligent guidance to artists. Existing solutions often fail to effectively model intricate stroke details or adequately address the temporal aspects of drawing. We introduce hyperstroke, a novel stroke representation designed to capture precise fine stroke details, including RGB appearance and alpha-channel opacity. Using a Vector Quantization approach, hyperstroke learns compact tokenized representations of strokes from real-life videos of artistic drawing. With hyperstroke, we propose to model assistive drawing via a transformer-based architecture, enabling intuitive and user-friendly drawing applications, which we examine in an exploratory evaluation.
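The vector-quantization step described in the abstract can be sketched minimally: each continuous stroke feature vector is mapped to the index of its nearest entry in a learned codebook, yielding a discrete token. The array shapes and the `quantize` helper below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal vector-quantization sketch: map each stroke feature vector to the
# index of its nearest codebook entry (its discrete "token").
import numpy as np

def quantize(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Return the codebook index (token id) for each feature row."""
    # Pairwise squared distances between features (N, D) and codebook (K, D).
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))   # K=16 hypothetical codes of dimension 4
strokes = rng.normal(size=(8, 4))     # 8 stroke feature vectors
tokens = quantize(strokes, codebook)  # one discrete token per stroke
print(tokens.shape)                   # (8,)
```

In a full VQ model the codebook itself is learned jointly with an encoder and decoder; here it is random only to keep the sketch self-contained.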
Related papers
- ARtVista: Gateway To Empower Anyone Into Artist [14.700883382465452]
We propose ARtVista - a novel system integrating AR and generative AI technologies.
ARtVista recommends reference images aligned with users' abstract ideas and generates sketches for users to draw.
We perform a pilot study and reveal positive feedback on its usability.
arXiv Detail & Related papers (2024-03-13T18:00:57Z) - PatternPortrait: Draw Me Like One of Your Scribbles [2.01243755755303]
This paper introduces a process for generating abstract portrait drawings from pictures.
The drawings' distinctive style is created by using single freehand pattern sketches as references to generate unique patterns for shading.
The method involves extracting facial and body features from images and transforming them into vector lines.
arXiv Detail & Related papers (2024-01-22T12:33:11Z) - SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation [111.2195741547517]
We present a method to generate controlled sketches using a text-conditioned diffusion model trained on pixel representations of images.
Our objective is to empower non-professional users to create sketches and, through a series of optimisation processes, transform a narrative into a storyboard.
arXiv Detail & Related papers (2023-08-27T19:44:44Z) - SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z) - I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested end to end, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z) - DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z) - B\'ezierSketch: A generative model for scalable vector sketches [132.5223191478268]
We present BézierSketch, a novel generative model for fully vector sketches that are automatically scalable and high-resolution.
We first introduce a novel inverse graphics approach to stroke embedding that trains an encoder to embed each stroke as its best-fit Bézier curve.
This enables us to treat sketches as short sequences of parameterized strokes and thus train a recurrent sketch generator with greater capacity for longer sketches.
arXiv Detail & Related papers (2020-07-04T21:30:52Z) - Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
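The stroke-embedding idea in the BézierSketch entry above (embedding each stroke as its best-fit Bézier curve) can be illustrated with a plain least-squares fit. The uniform parameterization and the `fit_cubic_bezier` helper are simplifying assumptions for illustration, not the paper's learned encoder.

```python
# Fit a single cubic Bezier curve to a sampled stroke by linear least squares.
import numpy as np

def fit_cubic_bezier(points: np.ndarray) -> np.ndarray:
    """Least-squares fit of a cubic Bezier curve to a polyline of shape (N, 2).

    Returns the 4 control points as an array of shape (4, 2).
    """
    n = len(points)
    t = np.linspace(0.0, 1.0, n)  # naive uniform parameterization
    # Bernstein basis matrix for a cubic curve, shape (N, 4).
    B = np.stack([(1 - t) ** 3,
                  3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t),
                  t ** 3], axis=1)
    # Solve B @ ctrl ~= points in the least-squares sense.
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl
```

A real stroke would typically use arc-length parameterization and possibly multiple curve segments; this single-segment fit only shows the core linear-algebra step.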
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.