I Know What You Draw: Learning Grasp Detection Conditioned on a Few
Freehand Sketches
- URL: http://arxiv.org/abs/2205.04026v1
- Date: Mon, 9 May 2022 04:23:36 GMT
- Title: I Know What You Draw: Learning Grasp Detection Conditioned on a Few
Freehand Sketches
- Authors: Haitao Lin, Chilam Cheang, Yanwei Fu, Xiangyang Xue
- Abstract summary: We propose a method to generate a potential grasp configuration relevant to sketch-depicted objects.
Our model is trained and tested end to end, making it easy to implement in real-world applications.
- Score: 74.63313641583602
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we are interested in the problem of generating target
grasps by understanding freehand sketches. Sketches are useful for people who
cannot formulate language and for cases where a textual description is not
available on the fly. However, very few works have explored this novel mode of
interaction between humans and robots. To this end, we propose a method to
generate a potential grasp configuration relevant to the sketch-depicted
objects. Because sketches are inherently ambiguous and abstract in their
details, we take advantage of a graph representation that incorporates the
structure of the sketch to enhance its representational power. This
graph-represented sketch is further shown to improve the generalization of the
network, which learns sketch-queried grasp detection from a small collection
(around 100 samples) of hand-drawn sketches. Additionally, our model is trained
and tested end to end, making it easy to implement in real-world applications.
Experiments on the multi-object VMRD and GraspNet-1Billion datasets demonstrate
the good generalization of the proposed method. Physical robot experiments
confirm the utility of our method in object-cluttered scenes.
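To make the described pipeline concrete, below is a minimal, illustrative PyTorch sketch (not the authors' released code) of a graph-conditioned grasp detector under simple assumptions: the stroke points of the freehand sketch become graph nodes, consecutive points on a stroke are linked as edges, a small graph encoder pools the nodes into a query embedding, and that query is tiled over the feature map of a grasp-regression head applied to a depth image. The module names (SketchGraphEncoder, SketchConditionedGraspNet) and the 5-channel grasp parameterization (confidence plus x, y, angle, width per grid cell) are hypothetical choices for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SketchGraphEncoder(nn.Module):
    # Encodes a graph-represented sketch (stroke points as nodes) into one query vector.
    def __init__(self, in_dim=2, hidden=64, out_dim=128, layers=2):
        super().__init__()
        dims = [in_dim] + [hidden] * (layers - 1) + [out_dim]
        self.lins = nn.ModuleList([nn.Linear(a, b) for a, b in zip(dims, dims[1:])])

    def forward(self, x, adj):
        # x: (N, 2) node coordinates; adj: (N, N) binary adjacency of the sketch graph.
        a = adj + torch.eye(adj.size(0), device=adj.device)      # add self-loops
        d = a.sum(-1).clamp(min=1.0).pow(-0.5)
        a_norm = d[:, None] * a * d[None, :]                     # D^-1/2 (A + I) D^-1/2
        for lin in self.lins:
            x = F.relu(lin(a_norm @ x))                          # simple GCN-style update
        return x.mean(dim=0)                                     # (out_dim,) sketch query

class SketchConditionedGraspNet(nn.Module):
    # Regresses grasp parameters on a coarse grid, conditioned on the sketch query.
    def __init__(self, query_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(                           # depth-image features
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fuse = nn.Conv2d(64 + query_dim, 64, 1)
        self.head = nn.Conv2d(64, 5, 1)                          # confidence + (x, y, angle, width)

    def forward(self, depth, sketch_query):
        feat = self.backbone(depth)                              # (B, 64, H', W')
        q = sketch_query.view(1, -1, 1, 1).expand(feat.size(0), -1, *feat.shape[2:])
        fused = F.relu(self.fuse(torch.cat([feat, q], dim=1)))   # tile query over the grid
        return self.head(fused)                                  # (B, 5, H', W')

# Toy usage: a single 6-point stroke as the sketch, a 96x96 depth image as the scene.
if __name__ == "__main__":
    pts = torch.rand(6, 2)
    adj = torch.zeros(6, 6)
    for i in range(5):
        adj[i, i + 1] = adj[i + 1, i] = 1.0                      # chain consecutive stroke points
    query = SketchGraphEncoder()(pts, adj)
    grasp_maps = SketchConditionedGraspNet()(torch.rand(1, 1, 96, 96), query)
    print(grasp_maps.shape)                                      # torch.Size([1, 5, 24, 24])

Tiling the sketch query over the spatial feature grid is only one simple way to condition detection on the query; the paper's actual fusion scheme and grasp representation may differ.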
Related papers
- SketchTriplet: Self-Supervised Scenarized Sketch-Text-Image Triplet Generation [6.39528707908268]
There continues to be a lack of large-scale paired datasets for scene sketches.
We propose a self-supervised method for scene sketch generation that does not rely on any existing scene sketch.
We contribute a large-scale dataset centered around scene sketches, comprising highly semantically consistent "text-sketch-image" triplets.
arXiv Detail & Related papers (2024-05-29T06:43:49Z)
- It's All About Your Sketch: Democratising Sketch Control in Diffusion Models [114.73766136068357]
This paper unravels the potential of sketches for diffusion models, addressing the deceptive promise of direct sketch control in generative AI.
We importantly democratise the process, enabling amateur sketches to generate precise images, living up to the commitment of "what you sketch is what you get".
arXiv Detail & Related papers (2024-03-12T01:05:25Z)
- Towards Interactive Image Inpainting via Sketch Refinement [13.34066589008464]
We propose a two-stage image inpainting method termed SketchRefiner.
In the first stage, we propose using a cross-correlation loss function to robustly calibrate and refine the user-provided sketches.
In the second stage, we learn to extract informative features from the abstracted sketches in the feature space and modulate the inpainting process.
arXiv Detail & Related papers (2023-06-01T07:15:54Z)
- Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings [99.9788496281408]
We study how sketches can be used as a weak label to detect salient objects present in an image.
To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo.
Experiments validate our hypothesis and show that our sketch-based saliency detection model achieves competitive performance compared to the state of the art.
arXiv Detail & Related papers (2023-03-20T23:46:46Z)
- Abstracting Sketches through Simple Primitives [53.04827416243121]
Humans show a high level of abstraction capability in games that require quickly communicating object information.
We propose the Primitive-based Sketch Abstraction task where the goal is to represent sketches using a fixed set of drawing primitives.
Our Primitive-Matching Network (PMN) learns interpretable abstractions of a sketch in a self-supervised manner.
arXiv Detail & Related papers (2022-07-27T14:32:39Z)
- Deep Self-Supervised Representation Learning for Free-Hand Sketch [51.101565480583304]
We tackle the problem of self-supervised representation learning for free-hand sketches.
The key to the success of our self-supervised learning paradigm lies in our sketch-specific designs.
We show that the proposed approach outperforms the state-of-the-art unsupervised representation learning methods.
arXiv Detail & Related papers (2020-02-03T16:28:29Z)
- SketchDesc: Learning Local Sketch Descriptors for Multi-view Correspondence [68.63311821718416]
We study the problem of multi-view sketch correspondence, where we take as input multiple freehand sketches with different views of the same object.
This problem is challenging since the visual features of corresponding points at different views can be very different.
We take a deep learning approach and learn a novel local sketch descriptor from data.
arXiv Detail & Related papers (2020-01-16T11:31:21Z)