SketchLattice: Latticed Representation for Sketch Manipulation
- URL: http://arxiv.org/abs/2108.11636v1
- Date: Thu, 26 Aug 2021 08:02:21 GMT
- Title: SketchLattice: Latticed Representation for Sketch Manipulation
- Authors: Yonggang Qi, Guoyao Su, Pinaki Nath Chowdhury, Mingkang Li, Yi-Zhe
Song
- Abstract summary: The key challenge in designing a sketch representation lies in handling the abstract and iconic nature of sketches.
We propose a lattice-structured sketch representation that not only removes the bottleneck of requiring vector data but also preserves the structural cues that vector data provides.
Our lattice representation can be effectively encoded using a graph model that uses significantly fewer model parameters (13.5 times fewer) than the existing state of the art.
- Score: 30.092468954557468
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The key challenge in designing a sketch representation lies in handling
the abstract and iconic nature of sketches. Existing work predominantly utilizes
either (i) a pixelative format that treats sketches as natural images employing
off-the-shelf CNN-based networks, or (ii) an elaborately designed vector format
that leverages the structural information of drawing orders using sequential
RNN-based methods. While the pixelative format lacks intuitive exploitation of
structural cues, sketches in vector format are absent in most cases, limiting
their practical usage. Hence, in this paper, we propose a lattice-structured
sketch representation that not only removes the bottleneck of requiring vector
data but also preserves the structural cues that vector data provides.
Essentially, a sketch lattice is a set of points sampled from the pixelative
format of the sketch using a lattice graph. We show that our lattice structure
is particularly amenable to structural changes, which largely benefits sketch
abstraction modeling for generation tasks. Our lattice representation can be
effectively encoded using a graph model that uses significantly fewer model
parameters (13.5 times fewer) than the existing state of the art. Extensive
experiments demonstrate the effectiveness of the sketch lattice for sketch
manipulation, including sketch healing and image-to-sketch synthesis.
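To make the lattice construction concrete, below is a minimal NumPy sketch of the sampling step described in the abstract: overlay a regular grid on the rasterized sketch, keep the grid points that land on strokes, and connect the survivors into a graph. The function name, grid size, ink threshold, and k-nearest-neighbor edge rule are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def sketch_to_lattice(sketch, grid_size=32, ink_threshold=0.5, k=4):
    """Sample lattice points from a rasterized sketch and link them.

    Assumes `sketch` is a 2D float array in [0, 1] with stroke pixels
    near 1. A regular grid_size x grid_size lattice is overlaid on the
    image; every lattice point that lands on a stroke is kept, and the
    surviving points are connected to their k nearest neighbors. The
    paper's exact sampling and edge rules may differ.
    """
    h, w = sketch.shape
    ys = np.linspace(0, h - 1, grid_size).round().astype(int)
    xs = np.linspace(0, w - 1, grid_size).round().astype(int)

    # Keep only lattice points that intersect a stroke.
    points = np.array([(y, x) for y in ys for x in xs
                       if sketch[y, x] > ink_threshold], dtype=float)

    # Connect each surviving point to its k nearest neighbors.
    d = np.linalg.norm(points[:, None] - points[None], axis=-1)
    np.fill_diagonal(d, np.inf)                 # no self-loops
    neighbors = np.argsort(d, axis=1)[:, :k]

    n = len(points)
    adjacency = np.zeros((n, n), dtype=bool)
    adjacency[np.arange(n)[:, None], neighbors] = True
    adjacency |= adjacency.T                    # undirected edges
    return points, adjacency                    # graph nodes + structure
```

Each surviving point becomes a graph node whose coordinates feed the graph encoder; it is this encoder that the paper reports needs roughly 13.5 times fewer parameters than prior state-of-the-art models.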
Related papers
- SketchTriplet: Self-Supervised Scenarized Sketch-Text-Image Triplet Generation [6.39528707908268]
There continues to be a lack of large-scale paired datasets for scene sketches.
We propose a self-supervised method for scene sketch generation that does not rely on any existing scene sketch.
We contribute a large-scale dataset centered around scene sketches, comprising highly semantically consistent "text-sketch-image" triplets.
arXiv Detail & Related papers (2024-05-29T06:43:49Z)
- SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into a ViT patch encoding.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z)
- SketchFFusion: Sketch-guided image editing with diffusion model [25.63913085329606]
Sketch-guided image editing aims to achieve local fine-tuning of the image based on the sketch information provided by the user.
We propose a sketch generation scheme that can preserve the main contours of an image and closely adhere to the actual sketch style drawn by the user.
arXiv Detail & Related papers (2023-04-06T15:54:18Z)
- Abstracting Sketches through Simple Primitives [53.04827416243121]
Humans show a high level of abstraction capability in games that require quickly communicating object information.
We propose the Primitive-based Sketch Abstraction task where the goal is to represent sketches using a fixed set of drawing primitives.
Our Primitive-Matching Network (PMN) learns interpretable abstractions of a sketch in a self-supervised manner.
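As a rough, hypothetical illustration of the primitive-based abstraction task (not of the PMN architecture, which learns this mapping), the snippet below matches a normalized stroke to its nearest template from a tiny hand-made primitive bank using a symmetric Chamfer distance.

```python
import numpy as np

# A hypothetical primitive bank: each primitive is a 16-point polyline
# at unit scale (a straight line and a half circle here).
t = np.linspace(0.0, 1.0, 16)
PRIMITIVES = {
    "line": np.stack([t, np.zeros_like(t)], axis=1),
    "arc":  np.stack([np.cos(np.pi * t), np.sin(np.pi * t)], axis=1),
}

def normalize(stroke):
    """Center a stroke and scale it to unit size."""
    stroke = stroke - stroke.mean(axis=0)
    scale = np.abs(stroke).max() or 1.0     # guard against a single point
    return stroke / scale

def chamfer(a, b):
    """Symmetric Chamfer distance between two point sets."""
    d = np.linalg.norm(a[:, None] - b[None], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def match_primitive(stroke):
    """Return the name of the primitive closest in shape to the stroke."""
    s = normalize(stroke)
    return min(PRIMITIVES, key=lambda k: chamfer(s, normalize(PRIMITIVES[k])))

# e.g. match_primitive(np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]))
```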
arXiv Detail & Related papers (2022-07-27T14:32:39Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- CLIPasso: Semantically-Aware Object Sketching [34.53644912236454]
We present an object sketching method that can achieve different levels of abstraction, guided by geometric and semantic simplifications.
We define a sketch as a set of Bézier curves and use a differentiable rasterizer to optimize the parameters of the curves directly with respect to a CLIP-based perceptual loss.
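A heavily simplified PyTorch sketch of that optimization loop follows: cubic Bézier control points are splatted onto a canvas through a hand-rolled differentiable Gaussian "rasterizer", and gradients flow from an image-space loss back to the control points. A plain MSE against a random target stands in for the CLIP-based perceptual loss so the example is self-contained; CLIPasso itself uses a proper differentiable vector-graphics rasterizer and a CLIP encoder.

```python
import torch

def bezier_points(ctrl, n=32):
    """Evaluate a cubic Bezier curve (4 control points) at n parameters."""
    t = torch.linspace(0, 1, n).unsqueeze(1)                  # (n, 1)
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])

def soft_raster(curves, size=64, sigma=1.5):
    """Differentiably splat curve samples onto a canvas as Gaussians."""
    ys, xs = torch.meshgrid(torch.arange(size), torch.arange(size),
                            indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).float()              # (H, W, 2)
    canvas = torch.zeros(size, size)
    for ctrl in curves:                                       # (4, 2) each
        pts = bezier_points(ctrl) * (size - 1)                # pixel coords
        d2 = ((grid[None] - pts[:, None, None]) ** 2).sum(-1)
        canvas = canvas + torch.exp(-d2 / (2 * sigma ** 2)).sum(0)
    return canvas.clamp(0, 1)

# Toy optimization run: the MSE against a random target stands in for
# CLIPasso's CLIP-based perceptual loss.
curves = torch.rand(8, 4, 2, requires_grad=True)              # 8 cubic curves
target = torch.rand(64, 64)
optimizer = torch.optim.Adam([curves], lr=0.01)
for step in range(50):
    optimizer.zero_grad()
    loss = ((soft_raster(curves) - target) ** 2).mean()
    loss.backward()
    optimizer.step()
```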
arXiv Detail & Related papers (2022-02-11T18:35:25Z)
- CoSE: Compositional Stroke Embeddings [52.529172734044664]
We present a generative model for complex free-form structures such as those produced in stroke-based drawing tasks.
Our approach is suitable for interactive use cases such as auto-completing diagrams.
arXiv Detail & Related papers (2020-06-17T15:22:54Z)
- Sketch-BERT: Learning Sketch Bidirectional Encoder Representation from Transformers by Self-supervised Learning of Sketch Gestalt [125.17887147597567]
We present a model for learning Sketch Bidirectional Encoder Representation from Transformers (Sketch-BERT).
We generalize BERT to sketch domain, with the novel proposed components and pre-training algorithms.
We show that the learned representation of Sketch-BERT can help and improve the performance of the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
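To give a flavor of that self-supervised pre-training signal, the snippet below applies BERT-style masking to a point-sequence sketch; the model would then be trained to reconstruct the hidden points from the visible ones. The masking unit (points rather than whole strokes) and the ratio are illustrative guesses, not Sketch-BERT's exact recipe.

```python
import numpy as np

def mask_sketch(points, mask_ratio=0.15, seed=0):
    """Randomly hide a fraction of sketch points for masked pre-training.

    `points` is an (N, 2) float array of sketch coordinates. Returns the
    corrupted sequence plus the boolean mask marking which points the
    model must reconstruct.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(len(points)) < mask_ratio
    corrupted = points.copy()
    corrupted[mask] = 0.0            # sentinel value for hidden points
    return corrupted, mask           # training target is points[mask]
```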
arXiv Detail & Related papers (2020-05-19T01:35:44Z)
- Sketchformer: Transformer-based Representation for Sketched Structure [12.448155157592895]
Sketchformer is a transformer-based representation for encoding free-hand sketches input in vector form.
We report several variants exploring continuous and tokenized input representations, and contrast their performance.
Our learned embedding, driven by a dictionary-learning tokenization scheme, yields state-of-the-art performance in classification and image retrieval tasks.
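As a sketch of what such a tokenized input variant can look like: learn a codebook over continuous (dx, dy) pen offsets and map every offset to the id of its nearest codeword, which a transformer can then consume as discrete tokens. Plain k-means stands in here for the paper's dictionary-learning step; all names and sizes are illustrative.

```python
import numpy as np

def build_codebook(offsets, k=64, iters=20, seed=0):
    """Fit a k-means codebook over an (N, 2) float array of pen offsets."""
    rng = np.random.default_rng(seed)
    codebook = offsets[rng.choice(len(offsets), k, replace=False)]
    for _ in range(iters):
        # Assign every offset to its nearest codeword ...
        d = np.linalg.norm(offsets[:, None] - codebook[None], axis=-1)
        assign = d.argmin(axis=1)
        # ... then move each codeword to the mean of its members.
        for j in range(k):
            members = offsets[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def tokenize(offsets, codebook):
    """Map each pen offset to the id of its nearest codeword."""
    d = np.linalg.norm(offsets[:, None] - codebook[None], axis=-1)
    return d.argmin(axis=1)          # discrete token ids for a transformer
```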
arXiv Detail & Related papers (2020-02-24T17:11:53Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)