Equipping Sketch Patches with Context-Aware Positional Encoding for Graphic Sketch Representation
- URL: http://arxiv.org/abs/2403.17525v1
- Date: Tue, 26 Mar 2024 09:26:12 GMT
- Title: Equipping Sketch Patches with Context-Aware Positional Encoding for Graphic Sketch Representation
- Authors: Sicong Zang, Zhijun Fang
- Abstract summary: We propose a variant-drawing-protected method for learning graphic sketch representation.
Instead of injecting sketch drawings into graph edges, we embed this sequential information into graph nodes only.
Experimental results indicate that our method significantly improves sketch healing and controllable sketch synthesis.
- Score: 4.961362040453441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The drawing order of a sketch records how a human creates it stroke by stroke. For graphic sketch representation learning, recent studies have injected sketch drawing orders into graph edge construction by linking each patch to another in accordance with a temporal-based nearest-neighboring strategy. However, graph edges constructed this way may be unreliable, since a sketch could have variants of drawings. In this paper, we propose a variant-drawing-protected method that equips sketch patches with context-aware positional encoding (PE) to make better use of drawing orders for learning graphic sketch representation. Instead of injecting sketch drawings into graph edges, we embed this sequential information into graph nodes only. More specifically, each patch embedding is equipped with a sinusoidal absolute PE to highlight its sequential position in the drawing order, and its neighboring patches, ranked by the values of self-attention scores between patch embeddings, are equipped with learnable relative PEs to restore the contextual positions within a neighborhood. During message aggregation via graph convolutional networks, a node receives both semantic content from patch embeddings and contextual patterns from the PEs of its neighbors, arriving at drawing-order-enhanced sketch representations. Experimental results indicate that our method significantly improves sketch healing and controllable sketch synthesis.
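The mechanism in the abstract lends itself to a compact code sketch. Below is a minimal, illustrative PyTorch rendering, assuming a neighborhood size k, mean aggregation, and module names invented here; it is a sketch of the described pipeline, not the authors' implementation.

```python
# Illustrative sketch only; names, k, and the aggregation rule are assumptions.
import math
import torch
import torch.nn as nn

def sinusoidal_pe(num_positions: int, dim: int) -> torch.Tensor:
    """Standard sinusoidal absolute PE over drawing-order positions."""
    pos = torch.arange(num_positions, dtype=torch.float32).unsqueeze(1)
    div = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim))
    pe = torch.zeros(num_positions, dim)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

class ContextAwarePEAggregation(nn.Module):
    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.k = k  # neighborhood size (assumed hyperparameter)
        # One learnable relative PE per neighborhood rank (1st closest, 2nd, ...).
        self.rel_pe = nn.Parameter(torch.randn(k, dim) * 0.02)
        self.proj = nn.Linear(dim, dim)  # GCN-style message transform

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (N, dim) patch embeddings, ordered by drawing order.
        n, d = patches.shape
        x = patches + sinusoidal_pe(n, d)  # absolute PE marks drawing-order position
        attn = torch.softmax(x @ x.t() / math.sqrt(d), dim=-1)  # self-attention scores
        attn = attn.masked_fill(torch.eye(n, dtype=torch.bool), 0.0)  # no self-links
        nbr_idx = attn.topk(self.k, dim=-1).indices  # neighbors ranked by attention score
        # Each neighbor sends semantic content plus the relative PE for its rank.
        msgs = x[nbr_idx] + self.rel_pe.unsqueeze(0)  # (N, k, dim)
        return x + self.proj(msgs.mean(dim=1))  # one round of message aggregation

emb = torch.randn(16, 128)  # 16 patches, 128-dim embeddings
print(ContextAwarePEAggregation(128)(emb).shape)  # torch.Size([16, 128])
```

In this sketch the absolute PE is added before the attention-based ranking, and the relative PE is indexed by a neighbor's rank rather than its absolute position; whether ranking uses raw or PE-equipped embeddings is a detail the abstract leaves open.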
Related papers
- Linking Sketch Patches by Learning Synonymous Proximity for Graphic Sketch Representation [8.19063619210761]
We propose an order-invariant, semantics-aware method for graphic sketch representations.
The cropped sketch patches are linked according to their global semantics or local geometric shapes, termed the synonymous proximity (illustrated below).
We show that our method significantly improves the performance on both controllable sketch synthesis and sketch healing.
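As a rough illustration of that linking step, the sketch below approximates synonymous proximity with cosine similarity between patch embeddings and links each patch to its k most similar peers; the paper's actual proximity measure and neighborhood size are assumptions here.

```python
# Illustrative only; cosine similarity stands in for the paper's proximity measure.
import torch
import torch.nn.functional as F

def link_synonymous_patches(patch_emb: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Return (N, k) neighbor indices linking each patch to its k nearest 'synonyms'."""
    z = F.normalize(patch_emb, dim=-1)
    sim = z @ z.t()                      # pairwise cosine similarity
    sim.fill_diagonal_(-1.0)             # a patch never links to itself
    return sim.topk(k, dim=-1).indices   # graph edges by semantic proximity

edges = link_synonymous_patches(torch.randn(10, 64))
print(edges.shape)  # torch.Size([10, 3])
```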
arXiv Detail & Related papers (2022-11-30T09:28:15Z)
- Abstracting Sketches through Simple Primitives [53.04827416243121]
Humans show high levels of abstraction capability in games that require quickly communicating object information.
We propose the Primitive-based Sketch Abstraction task where the goal is to represent sketches using a fixed set of drawing primitives.
Our Primitive-Matching Network (PMN) learns interpretable abstractions of a sketch in a self-supervised manner.
arXiv Detail & Related papers (2022-07-27T14:32:39Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner and is easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- SSR-GNNs: Stroke-based Sketch Representation with Graph Neural Networks [34.759306840182205]
This paper investigates a graph representation for sketches, where the information of strokes, i.e., parts of a sketch, is encoded on vertices and inter-stroke information on edges.
The resulting graph representation facilitates the training of Graph Neural Networks for classification tasks.
The proposed representation enables the generation of novel sketches that are structurally similar to, yet separable from, the existing dataset.
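A minimal sketch of such a graph construction appears below, assuming simple hand-crafted features (stroke centroid and arc length on vertices, centroid offsets on edges); the paper's actual stroke and inter-stroke features may differ.

```python
# Illustrative only; vertex/edge feature choices are assumptions.
import torch

def build_stroke_graph(strokes: list):
    """strokes: list of (P_i, 2) point tensors, one per stroke."""
    centroids = torch.stack([s.mean(dim=0) for s in strokes])                  # (S, 2)
    lengths = torch.stack([(s[1:] - s[:-1]).norm(dim=-1).sum() for s in strokes])
    node_feats = torch.cat([centroids, lengths.unsqueeze(1)], dim=1)           # stroke info on vertices
    idx = torch.arange(len(strokes))
    src, dst = torch.meshgrid(idx, idx, indexing="ij")
    mask = src != dst                                                          # no self-loops
    edge_index = torch.stack([src[mask], dst[mask]])                           # fully connected
    edge_feats = centroids[edge_index[1]] - centroids[edge_index[0]]           # inter-stroke info on edges
    return node_feats, edge_index, edge_feats

nodes, edges, efeats = build_stroke_graph([torch.randn(20, 2), torch.randn(15, 2), torch.randn(30, 2)])
print(nodes.shape, edges.shape, efeats.shape)  # (3, 3), (2, 6), (6, 2)
```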
arXiv Detail & Related papers (2022-04-27T19:18:01Z)
- Learning to generate line drawings that convey geometry and semantics [22.932131011984513]
This paper presents an unpaired method for creating line drawings from photographs.
We observe that line drawings are encodings of scene information and seek to convey 3D shape and semantic meaning.
We introduce a geometry loss which predicts depth information from the image features of a line drawing, and a semantic loss which matches the CLIP features of a line drawing with its corresponding photograph.
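A hedged sketch of the two losses, assuming stand-in callables that are not from the paper: depth_head (a small depth-prediction head over drawing features), clip_encode (a frozen CLIP image encoder), and depth_gt (pseudo ground-truth depth from a pretrained estimator).

```python
# Illustrative only; depth_head, clip_encode, and depth_gt are assumed stand-ins.
import torch.nn.functional as F

def geometry_loss(drawing_feats, depth_head, depth_gt):
    # Predict depth from the line drawing's features; supervise with pseudo depth.
    return F.l1_loss(depth_head(drawing_feats), depth_gt)

def semantic_loss(drawing, photo, clip_encode):
    # Pull CLIP embeddings of a drawing and its source photo together.
    d = F.normalize(clip_encode(drawing), dim=-1)
    p = F.normalize(clip_encode(photo), dim=-1)
    return (1.0 - (d * p).sum(dim=-1)).mean()  # mean cosine distance
```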
arXiv Detail & Related papers (2022-03-23T19:27:41Z)
- Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for sketch-to-mesh translation.
We show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z)
- Sketch-BERT: Learning Sketch Bidirectional Encoder Representation from Transformers by Self-supervised Learning of Sketch Gestalt [125.17887147597567]
We present a model for learning Sketch Bidirectional Encoder Representation from Transformers (Sketch-BERT).
We generalize BERT to the sketch domain, with newly proposed components and pre-training algorithms.
We show that the learned representation of Sketch-BERT can help and improve the performance of the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
arXiv Detail & Related papers (2020-05-19T01:35:44Z)
- SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
arXiv Detail & Related papers (2020-01-16T11:31:21Z)
- Deep Plastic Surgery: Robust and Controllable Image Editing with Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.