SketchDesc: Learning Local Sketch Descriptors for Multi-view
Correspondence
- URL: http://arxiv.org/abs/2001.05744v3
- Date: Mon, 10 Aug 2020 23:18:16 GMT
- Title: SketchDesc: Learning Local Sketch Descriptors for Multi-view
Correspondence
- Authors: Deng Yu, Lei Li, Youyi Zheng, Manfred Lau, Yi-Zhe Song, Chiew-Lan Tai,
Hongbo Fu
- Abstract summary: We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
- Score: 68.63311821718416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of multi-view sketch correspondence,
where we take as input multiple freehand sketches with different views of the
same object and predict as output the semantic correspondence among the
sketches. This problem is challenging since the visual features of
corresponding points at different views can be very different. To this end, we
take a deep learning approach and learn a novel local sketch descriptor from
data. We contribute a training dataset by generating the pixel-level
correspondence for the multi-view line drawings synthesized from 3D shapes. To
handle the sparsity and ambiguity of sketches, we design a novel multi-branch
neural network that integrates a patch-based representation and a multi-scale
strategy to learn the pixel-level correspondence among multi-view sketches. We
demonstrate the effectiveness of our proposed approach with extensive
experiments on hand-drawn sketches and multi-view line drawings rendered from
multiple 3D shape datasets.
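
The abstract describes the descriptor network only at a high level. Below is a minimal PyTorch sketch of how a patch-based, multi-scale, multi-branch descriptor with a metric-learning objective could be wired up; the branch design, the 32x32 patch resolution, the three scales, the feature dimensions, and the triplet-style loss are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a multi-branch, multi-scale patch descriptor.
# All hyperparameters (patch size, scales, dimensions, loss) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchBranch(nn.Module):
    """CNN branch that encodes one grayscale patch scale into a feature vector."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(patch).flatten(1))


class MultiScaleDescriptor(nn.Module):
    """Fuses per-scale branch features into a single L2-normalized descriptor."""

    def __init__(self, num_scales: int = 3, feat_dim: int = 128, desc_dim: int = 128):
        super().__init__()
        self.branches = nn.ModuleList([PatchBranch(feat_dim) for _ in range(num_scales)])
        self.fuse = nn.Linear(num_scales * feat_dim, desc_dim)

    def forward(self, patches: list) -> torch.Tensor:
        # patches[i]: the i-th scale crop around the query pixel, resampled
        # to a common resolution, shape (B, 1, 32, 32).
        feats = [branch(p) for branch, p in zip(self.branches, patches)]
        return F.normalize(self.fuse(torch.cat(feats, dim=1)), dim=1)


def correspondence_loss(anchor, positive, negative, margin: float = 0.2):
    """Pull corresponding points across views together, push others apart."""
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)


if __name__ == "__main__":
    model = MultiScaleDescriptor()
    # Three scales of patches around a pixel in one view, its corresponding
    # pixel in another view, and a non-corresponding pixel (random data here).
    a = [torch.randn(4, 1, 32, 32) for _ in range(3)]
    p = [torch.randn(4, 1, 32, 32) for _ in range(3)]
    n = [torch.randn(4, 1, 32, 32) for _ in range(3)]
    loss = correspondence_loss(model(a), model(p), model(n))
    loss.backward()
    print(loss.item())
```

In use, such descriptors would be computed at candidate pixels in each view and matched by nearest neighbor in the normalized descriptor space, so that semantically corresponding points across views land close together.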
Related papers
- SketchTriplet: Self-Supervised Scenarized Sketch-Text-Image Triplet Generation [6.39528707908268]
There continues to be a lack of large-scale paired datasets for scene sketches.
We propose a self-supervised method for scene sketch generation that does not rely on any existing scene sketch.
We contribute a large-scale dataset centered around scene sketches, comprising highly semantically consistent "text-sketch-image" triplets.
arXiv Detail & Related papers (2024-05-29T06:43:49Z)
- CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z)
- Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes [118.406721663244]
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
arXiv Detail & Related papers (2023-12-07T05:04:33Z)
- DiffFaceSketch: High-Fidelity Face Image Synthesis with Sketch-Guided Latent Diffusion Model [8.1818090854822]
We introduce a Sketch-Guided Latent Diffusion Model (SGLDM), an LDM-based network architecture trained on a paired sketch-face dataset.
SGLDM can synthesize high-quality face images with different expressions, facial accessories, and hairstyles from various sketches with different abstraction levels.
arXiv Detail & Related papers (2023-02-14T08:51:47Z)
- DifferSketching: How Differently Do People Sketch 3D Objects? [78.44544977215918]
Multiple sketch datasets have been proposed to understand how people draw 3D objects.
However, these datasets are often small in scale and cover only a limited set of objects or categories.
We analyze the collected data at three levels, i.e., sketch-level, stroke-level, and pixel-level, in terms of both spatial and temporal characteristics.
arXiv Detail & Related papers (2022-09-19T06:52:18Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, which makes it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- Sketch-BERT: Learning Sketch Bidirectional Encoder Representation from Transformers by Self-supervised Learning of Sketch Gestalt [125.17887147597567]
We present a model for learning Sketch Bidirectional Encoder Representation from Transformers (Sketch-BERT).
We generalize BERT to the sketch domain with novel components and pre-training algorithms.
We show that the learned representation of Sketch-BERT can help and improve the performance of the downstream tasks of sketch recognition, sketch retrieval, and sketch gestalt.
arXiv Detail & Related papers (2020-05-19T01:35:44Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)