Linking Sketch Patches by Learning Synonymous Proximity for Graphic
Sketch Representation
- URL: http://arxiv.org/abs/2211.16841v1
- Date: Wed, 30 Nov 2022 09:28:15 GMT
- Title: Linking Sketch Patches by Learning Synonymous Proximity for Graphic
Sketch Representation
- Authors: Sicong Zang, Shikui Tu, Lei Xu
- Abstract summary: We propose an order-invariant, semantics-aware method for graphic sketch representations.
The cropped sketch patches are linked according to their global semantics or local geometric shapes, namely the synonymous proximity.
We show that our method significantly improves the performance on both controllable sketch synthesis and sketch healing.
- Score: 8.19063619210761
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphic sketch representations are effective for representing sketches.
Existing methods take the patches cropped from sketches as the graph nodes, and
construct the edges based on a sketch's drawing order or Euclidean distances on
the canvas. However, the drawing order of a sketch may not be unique, while the
patches from semantically related parts of a sketch may be far away from each
other on the canvas. In this paper, we propose an order-invariant,
semantics-aware method for graphic sketch representations. The cropped sketch
patches are linked according to their global semantics or local geometric
shapes, namely the synonymous proximity, by computing the cosine similarity
between the captured patch embeddings. The constructed edges are learnable,
adapting to variations in sketch drawings and enabling message passing among
synonymous patches. Aggregating the messages from synonymous patches with
graph convolutional networks plays a denoising role, which helps produce
robust patch embeddings and accurate sketch representations.
Furthermore, we enforce a clustering constraint over the embeddings jointly
with network training. The synonymous patches are self-organized into compact
clusters, and their embeddings are guided to move towards their assigned
cluster centroids. This raises the accuracy of the computed synonymous proximity.
Experimental results show that our method significantly improves the
performance on both controllable sketch synthesis and sketch healing.
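The following is a minimal, illustrative sketch of the two ideas described in the abstract, not the authors' released implementation: (1) linking patches by synonymous proximity, i.e., cosine similarity between patch embeddings, and (2) a clustering constraint that pulls embeddings toward their assigned centroids. The function names, tensor shapes, `top_k`, and the number of clusters are assumptions made for illustration.

```python
# Hedged sketch of synonymous-proximity edges + GCN aggregation + clustering loss.
# Not the paper's official code; shapes and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def synonymous_adjacency(patch_emb: torch.Tensor, top_k: int = 4) -> torch.Tensor:
    """Build a row-normalized adjacency from pairwise cosine similarity,
    keeping only the top-k most similar ("synonymous") patches per node."""
    z = F.normalize(patch_emb, dim=-1)            # (N, D) unit-norm embeddings
    sim = z @ z.t()                               # (N, N) cosine similarities
    topv, topi = sim.topk(top_k, dim=-1)          # keep k strongest links per patch
    adj = torch.zeros_like(sim).scatter_(-1, topi, topv.clamp(min=0.0))
    return adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-8)  # row-normalize

def gcn_aggregate(patch_emb: torch.Tensor, adj: torch.Tensor,
                  weight: torch.Tensor) -> torch.Tensor:
    """One graph-convolution step: averaging messages from synonymous patches,
    which acts as denoising of the patch embeddings."""
    return F.relu(adj @ patch_emb @ weight)

def clustering_loss(patch_emb: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Pull each embedding toward its nearest (assigned) cluster centroid."""
    dist = torch.cdist(patch_emb, centroids)      # (N, K) distances to centroids
    assign = dist.argmin(dim=-1)                  # hard assignment per patch
    return (patch_emb - centroids[assign]).pow(2).sum(dim=-1).mean()

# Example usage with hypothetical sizes: 20 patches, 128-dim embeddings, 8 clusters.
emb = torch.randn(20, 128)
W = torch.randn(128, 128) * 0.05
adj = synonymous_adjacency(emb, top_k=4)
refined = gcn_aggregate(emb, adj, W)
loss = clustering_loss(refined, centroids=torch.randn(8, 128))
```

In practice both the adjacency and the centroids would be updated jointly with the encoder during training; the snippet above only shows the forward computation under those assumptions.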
Related papers
- SketchTriplet: Self-Supervised Scenarized Sketch-Text-Image Triplet Generation [6.39528707908268]
There continues to be a lack of large-scale paired datasets for scene sketches.
We propose a self-supervised method for scene sketch generation that does not rely on any existing scene sketch.
We contribute a large-scale dataset centered around scene sketches, comprising highly semantically consistent "text-sketch-image" triplets.
arXiv Detail & Related papers (2024-05-29T06:43:49Z) - Equipping Sketch Patches with Context-Aware Positional Encoding for Graphic Sketch Representation [4.961362040453441]
We propose a variant-drawing-protected method for learning graphic sketch representation.
Instead of injecting sketch drawings into graph edges, we embed this sequential information into graph nodes only.
Experimental results indicate that our method significantly improves sketch healing and controllable sketch synthesis.
arXiv Detail & Related papers (2024-03-26T09:26:12Z) - Abstracting Sketches through Simple Primitives [53.04827416243121]
Humans show a high level of abstraction capability in games that require quickly communicating object information.
We propose the Primitive-based Sketch Abstraction task where the goal is to represent sketches using a fixed set of drawing primitives.
Our Primitive-Matching Network (PMN) learns interpretable abstractions of a sketch in a self-supervised manner.
arXiv Detail & Related papers (2022-07-27T14:32:39Z) - I Know What You Draw: Learning Grasp Detection Conditioned on a Few
Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z) - SSR-GNNs: Stroke-based Sketch Representation with Graph Neural Networks [34.759306840182205]
This paper investigates a graph representation for sketches, where stroke information, i.e., parts of a sketch, is encoded on vertices and inter-stroke information on edges.
The resultant graph representation facilitates the training of Graph Neural Networks for classification tasks.
The proposed representation enables the generation of novel sketches that are structurally similar to, yet separable from, the existing dataset.
arXiv Detail & Related papers (2022-04-27T19:18:01Z) - One Sketch for All: One-Shot Personalized Sketch Segmentation [84.45203849671003]
We present the first one-shot personalized sketch segmentation method.
We aim to segment all sketches belonging to the same category using a single exemplar sketch with a given part annotation.
We preserve the part semantics embedded in the exemplar and are robust to input style and abstraction.
arXiv Detail & Related papers (2021-12-20T20:10:44Z) - Adversarial Open Domain Adaption for Sketch-to-Photo Synthesis [42.83974176146334]
We explore the open-domain sketch-to-photo translation, which aims to synthesize a realistic photo from a freehand sketch with its class label.
It is challenging due to the lack of training supervision and the large geometric distortion between the freehand sketch and photo domains.
We propose a framework that jointly learns sketch-to-photo and photo-to-sketch generation.
arXiv Detail & Related papers (2021-04-12T17:58:46Z) - Sketch2Mesh: Reconstructing and Editing 3D Shapes from Sketches [65.96417928860039]
We use an encoder/decoder architecture for the sketch-to-mesh translation.
We will show that this approach is easy to deploy, robust to style changes, and effective.
arXiv Detail & Related papers (2021-04-01T14:10:59Z) - SketchDesc: Learning Local Sketch Descriptors for Multi-view
Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
arXiv Detail & Related papers (2020-01-16T11:31:21Z) - Deep Plastic Surgery: Robust and Controllable Image Editing with
Human-Drawn Sketches [133.01690754567252]
Sketch-based image editing aims to synthesize and modify photos based on the structural information provided by the human-drawn sketches.
Deep Plastic Surgery is a novel, robust and controllable image editing framework that allows users to interactively edit images using hand-drawn sketch inputs.
arXiv Detail & Related papers (2020-01-09T08:57:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.