Hyperstroke: A Novel High-quality Stroke Representation for Assistive Artistic Drawing
- URL: http://arxiv.org/abs/2408.09348v1
- Date: Sun, 18 Aug 2024 04:05:53 GMT
- Title: Hyperstroke: A Novel High-quality Stroke Representation for Assistive Artistic Drawing
- Authors: Haoyun Qin, Jian Lin, Hanyuan Liu, Xueting Liu, Chengze Li
- Abstract summary: We introduce hyperstroke, a novel stroke representation designed to capture precise fine stroke details.
We propose to model assistive drawing with a transformer-based architecture to enable intuitive, user-friendly drawing applications.
- Score: 12.71408421022756
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Assistive drawing aims to facilitate the creative process by providing intelligent guidance to artists. Existing solutions often fail to effectively model intricate stroke details or adequately address the temporal aspects of drawing. We introduce hyperstroke, a novel stroke representation designed to capture precise fine stroke details, including RGB appearance and alpha-channel opacity. Using a Vector Quantization approach, hyperstroke learns compact tokenized representations of strokes from real-life artistic drawing videos. With hyperstroke, we propose to model assistive drawing via a transformer-based architecture, enabling intuitive and user-friendly drawing applications, which we examine in an exploratory evaluation.
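The Vector Quantization step described in the abstract can be sketched in a few lines: each stroke (an RGBA patch) is encoded to a latent vector and then snapped to the nearest entry of a discrete codebook, yielding a token index. The encoder, codebook, and all dimensions below are random stand-ins for illustration only; the paper learns both the encoder and the codebook from drawing videos.

```python
import numpy as np

rng = np.random.default_rng(0)

CODEBOOK_SIZE = 16   # number of discrete stroke tokens (hypothetical)
LATENT_DIM = 8       # latent dimension (hypothetical)
PATCH_SHAPE = (4, 4, 4)  # tiny RGBA stroke patch: H x W x 4 (hypothetical)

# Stand-ins for learned components: a random codebook and a random
# linear "encoder" projection.
codebook = rng.normal(size=(CODEBOOK_SIZE, LATENT_DIM))
proj = rng.normal(size=(np.prod(PATCH_SHAPE), LATENT_DIM))

def encode_stroke(stroke_rgba: np.ndarray) -> np.ndarray:
    """Stand-in encoder: flatten the RGBA patch and project it to a latent."""
    return stroke_rgba.ravel() @ proj

def quantize(latent: np.ndarray) -> int:
    """Map a latent vector to the index of its nearest codebook entry."""
    dists = np.linalg.norm(codebook - latent, axis=1)
    return int(np.argmin(dists))

# Tokenize a short sequence of strokes (random patches here); a
# transformer over such token sequences can then model the drawing process.
strokes = rng.random(size=(5,) + PATCH_SHAPE)
tokens = [quantize(encode_stroke(s)) for s in strokes]
```

The resulting token sequence is what a transformer-based assistive model would consume and predict, one stroke token at a time.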
Related papers
- Emergence of Painting Ability via Recognition-Driven Evolution [49.666177849272856]
We present a model with a stroke branch and a palette branch that together simulate human-like painting.
We quantify the efficiency of visual communication by measuring the recognition accuracy achieved with machine vision.
Experimental results show that our model achieves superior performance in high-level recognition tasks.
arXiv Detail & Related papers (2025-01-09T04:37:31Z) - LineArt: A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model [8.938617090786494]
We present LineArt, a framework that transfers complex appearance onto detailed design drawings.
It generates high-fidelity appearance while preserving structural accuracy by simulating hierarchical visual cognition.
It requires no precise 3D modeling, physical property specs, or network training, making it more convenient for design tasks.
arXiv Detail & Related papers (2024-12-16T07:54:45Z) - Sketch-Guided Motion Diffusion for Stylized Cinemagraph Synthesis [15.988686454889823]
Sketch2Cinemagraph is a sketch-guided framework that enables the conditional generation of stylized cinemagraphs from freehand sketches.
We propose a novel latent motion diffusion model to estimate the motion field in the fluid regions of the generated landscape images.
arXiv Detail & Related papers (2024-12-01T01:32:59Z) - PatternPortrait: Draw Me Like One of Your Scribbles [2.01243755755303]
This paper introduces a process for generating abstract portrait drawings from pictures.
The portraits' unique style comes from using single freehand pattern sketches as references to generate unique shading patterns.
The method involves extracting facial and body features from images and transforming them into vector lines.
arXiv Detail & Related papers (2024-01-22T12:33:11Z) - SketchDreamer: Interactive Text-Augmented Creative Sketch Ideation [111.2195741547517]
We present a method to generate controlled sketches using a text-conditioned diffusion model trained on pixel representations of images.
Our objective is to empower non-professional users to create sketches and, through a series of optimisation processes, transform a narrative into a storyboard.
arXiv Detail & Related papers (2023-08-27T19:44:44Z) - SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z) - I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, making it easy to deploy in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z) - DeepFacePencil: Creating Face Images from Freehand Sketches [77.00929179469559]
Existing image-to-image translation methods require a large-scale dataset of paired sketches and images for supervision.
We propose DeepFacePencil, an effective tool that is able to generate photo-realistic face images from hand-drawn sketches.
arXiv Detail & Related papers (2020-08-31T03:35:21Z) - BézierSketch: A generative model for scalable vector sketches [132.5223191478268]
We present BézierSketch, a novel generative model for fully vector sketches that are automatically scalable and high-resolution.
We first introduce a novel inverse graphics approach to stroke embedding that trains an encoder to embed each stroke to its best-fit Bézier curve.
This enables us to treat sketches as short sequences of parameterized strokes and thus train a recurrent sketch generator with greater capacity for longer sketches.
arXiv Detail & Related papers (2020-07-04T21:30:52Z)
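The stroke-embedding idea in the BézierSketch snippet above, mapping a polyline stroke to its best-fit Bézier curve, can be illustrated with a closed-form least-squares fit. BézierSketch learns this mapping with an encoder; the fixed uniform parameterization and the `fit_cubic_bezier` helper below are simplifying assumptions, not the paper's method.

```python
import numpy as np

def fit_cubic_bezier(points: np.ndarray) -> np.ndarray:
    """Least-squares fit of a cubic Bezier curve to a polyline.

    Assumes a uniform parameterization t_i = i/(n-1).
    `points` has shape (n, 2); returns 4 control points of shape (4, 2).
    """
    n = len(points)
    t = np.linspace(0.0, 1.0, n)
    # Bernstein basis matrix for a cubic Bezier, shape (n, 4).
    B = np.stack([
        (1 - t) ** 3,
        3 * t * (1 - t) ** 2,
        3 * t ** 2 * (1 - t),
        t ** 3,
    ], axis=1)
    # Solve B @ ctrl = points in the least-squares sense.
    ctrl, *_ = np.linalg.lstsq(B, points, rcond=None)
    return ctrl

# A straight-line stroke is represented exactly: the fitted control
# points lie evenly spaced along the line.
pts = np.stack([np.linspace(0, 3, 10), np.linspace(0, 3, 10)], axis=1)
ctrl = fit_cubic_bezier(pts)
```

With strokes reduced to four control points each, a sketch becomes a short sequence of fixed-size parameter vectors, which is what makes the recurrent generator over longer sketches tractable.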
This list is automatically generated from the titles and abstracts of the papers in this site.