Model-free Grasping with Multi-Suction Cup Grippers for Robotic Bin
Picking
- URL: http://arxiv.org/abs/2307.16488v1
- Date: Mon, 31 Jul 2023 08:33:23 GMT
- Title: Model-free Grasping with Multi-Suction Cup Grippers for Robotic Bin
Picking
- Authors: Philipp Schillinger, Miroslav Gabriel, Alexander Kuss, Hanna Ziesche,
Ngo Anh Vien
- Abstract summary: We present a novel method for model-free prediction of grasp poses for suction grippers with multiple suction cups.
Our approach is agnostic to the design of the gripper and does not require gripper-specific training data.
- Score: 63.15595970667581
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a novel method for model-free prediction of grasp poses
for suction grippers with multiple suction cups. Our approach is agnostic to
the design of the gripper and does not require gripper-specific training data.
In particular, we propose a two-step approach, where first, a neural network
predicts pixel-wise grasp quality for an input image to indicate areas that are
generally graspable. Second, an optimization step determines the optimal
gripper selection and corresponding grasp poses based on configured gripper
layouts and activation schemes. In addition, we introduce a method for
automated labeling for supervised training of the grasp quality network.
Experimental evaluations on a real-world industrial application with bin
picking scenes of varying difficulty demonstrate the effectiveness of our
method.
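For intuition, below is a minimal sketch of such a two-step pipeline, assuming the pixel-wise grasp-quality map has already been produced by a network. The brute-force search, the scoring by summed cup quality, and all names are illustrative assumptions rather than the authors' implementation, which also optimizes over gripper orientation.

```python
import numpy as np

def predict_grasp(quality_map, cup_offsets_px, activation_schemes, stride=4):
    """Pick the grasp pixel and cup activation scheme that maximise summed quality.

    quality_map        : (H, W) array of pixel-wise grasp quality in [0, 1]
    cup_offsets_px     : list of (dy, dx) offsets of each suction cup from the
                         tool centre, already projected to image pixels
    activation_schemes : list of boolean tuples, one flag per cup, describing
                         which subsets of cups may be activated together
    """
    H, W = quality_map.shape
    best = (-np.inf, None, None)
    for y in range(0, H, stride):
        for x in range(0, W, stride):
            for scheme in activation_schemes:
                score = 0.0
                for active, (dy, dx) in zip(scheme, cup_offsets_px):
                    if not active:
                        continue
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < H and 0 <= xx < W):
                        score = -np.inf    # a cup would land outside the image
                        break
                    score += quality_map[yy, xx]
                if score > best[0]:
                    best = (score, (y, x), scheme)
    return best  # (score, grasp pixel, cup activation scheme)

# Example: a 2-cup gripper that may fire either cup alone or both together.
quality = np.random.rand(96, 96)              # stand-in for the network output
offsets = [(0, -6), (0, 6)]
schemes = [(True, False), (False, True), (True, True)]
print(predict_grasp(quality, offsets, schemes))
```

Because the quality prediction is gripper-agnostic, only the configured offsets and activation schemes need to change when the gripper design changes.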
Related papers
- Pre-Trained Vision-Language Models as Partial Annotators [40.89255396643592]
Pre-trained vision-language models learn from massive data to model unified representations of images and natural language.
In this paper, we investigate a novel "pre-trained annotating - weakly-supervised learning" paradigm for applying pre-trained models, with experiments on image classification tasks.
arXiv Detail & Related papers (2024-05-23T17:17:27Z)
- Unsupervised textile defect detection using convolutional neural networks [0.0]
We propose a novel motif-based approach for unsupervised textile anomaly detection.
It combines the benefits of traditional convolutional neural networks with those of an unsupervised learning paradigm.
We demonstrate the effectiveness of our approach on the Patterned Fabrics benchmark dataset.
arXiv Detail & Related papers (2023-11-30T22:08:06Z)
- Label, Verify, Correct: A Simple Few Shot Object Detection Method [93.84801062680786]
We introduce a simple pseudo-labelling method to source high-quality pseudo-annotations from a training set.
We present two novel methods to improve the precision of the pseudo-labelling process.
Our method achieves state-of-the-art or second-best performance compared to existing approaches.
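As a point of reference, a generic confidence-thresholded pseudo-labelling pass might look as follows; the paper's verification and correction steps go beyond this, and the `Detection` structure and threshold below are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]   # (x1, y1, x2, y2)
    label: int
    score: float

def pseudo_label(detector: Callable, images: Dict[str, object],
                 score_thresh: float = 0.8) -> Dict[str, List[Detection]]:
    """Keep only high-confidence detections as pseudo-annotations
    (a baseline step; 'verify' and 'correct' stages would refine these)."""
    pseudo = {}
    for name, image in images.items():
        kept = [d for d in detector(image) if d.score >= score_thresh]
        if kept:
            pseudo[name] = kept
    return pseudo
```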
arXiv Detail & Related papers (2021-12-10T18:59:06Z)
- Partner-Assisted Learning for Few-Shot Image Classification [54.66864961784989]
Few-shot learning has been studied to mimic human visual capabilities and to learn effective models without the need for exhaustive human annotation.
In this paper, we focus on the design of training strategy to obtain an elemental representation such that the prototype of each novel class can be estimated from a few labeled samples.
We propose a two-stage training scheme, which first trains a partner encoder to model pair-wise similarities and extract features serving as soft-anchors, and then trains a main encoder by aligning its outputs with soft-anchors while attempting to maximize classification performance.
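A rough sketch of what the second-stage objective could look like, assuming the partner encoder's soft-anchors are given; the cosine-based alignment term and the weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def main_encoder_loss(logits, labels, features, soft_anchors, alpha=1.0):
    """Classification term plus alignment of the main encoder's features
    with the partner encoder's soft-anchors (illustrative formulation)."""
    cls_loss = F.cross_entropy(logits, labels)
    align_loss = 1.0 - F.cosine_similarity(features, soft_anchors, dim=-1).mean()
    return cls_loss + alpha * align_loss

# Random tensors standing in for a real batch.
logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
features, anchors = torch.randn(8, 64), torch.randn(8, 64)
print(main_encoder_loss(logits, labels, features, anchors))
```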
arXiv Detail & Related papers (2021-09-15T22:46:19Z)
- THAT: Two Head Adversarial Training for Improving Robustness at Scale [126.06873298511425]
We propose Two Head Adversarial Training (THAT), a two-stream adversarial learning network that is designed to handle the large-scale many-class ImageNet dataset.
The proposed method trains a network with two heads and two loss functions; one to minimize feature-space domain shift between natural and adversarial images, and one to promote high classification accuracy.
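The summary suggests a combined objective of roughly the following shape; the attack used to produce `x_adv`, the head modules, and the MSE shift penalty below are placeholder assumptions rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F

def two_head_loss(backbone, cls_head, align_head, x_nat, x_adv, labels, beta=1.0):
    """One head promotes classification accuracy, the other penalises the
    feature-space shift between natural and adversarial images (sketch)."""
    f_nat = backbone(x_nat)
    f_adv = backbone(x_adv)
    cls_loss = F.cross_entropy(cls_head(f_adv), labels)
    shift_loss = F.mse_loss(align_head(f_adv), align_head(f_nat).detach())
    return cls_loss + beta * shift_loss

# Toy modules and data standing in for a real backbone, heads, and attack.
backbone = torch.nn.Flatten()
cls_head = torch.nn.Linear(3 * 32 * 32, 10)
align_head = torch.nn.Identity()
x_nat = torch.randn(4, 3, 32, 32)
x_adv = x_nat + 0.03 * torch.randn_like(x_nat)   # stand-in for an adversarial attack
labels = torch.randint(0, 10, (4,))
print(two_head_loss(backbone, cls_head, align_head, x_nat, x_adv, labels))
```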
arXiv Detail & Related papers (2021-03-25T05:32:38Z)
- SuctionNet-1Billion: A Large-Scale Benchmark for Suction Grasping [47.221326169627666]
We propose a new physical model to analytically evaluate seal formation and wrench resistance of a suction grasp.
A two-step methodology is adopted to generate annotations on a large-scale dataset collected in real-world cluttered scenarios.
A standard online evaluation system is proposed to evaluate suction poses in continuous operation space.
arXiv Detail & Related papers (2021-03-23T05:02:52Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
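A minimal sketch of the line case, assuming PyTorch >= 2.0 for `torch.func.functional_call`: each step samples a random interpolation coefficient, evaluates the loss at the interpolated weights, and backpropagates to both endpoints. The hyperparameters and endpoint perturbation are illustrative.

```python
import copy
import torch
import torch.nn.functional as F
from torch.func import functional_call

def train_weight_line(model_a, data, lr=1e-2):
    """Train a line segment of networks: each step evaluates a random convex
    combination of two weight sets and updates both endpoints (sketch)."""
    model_b = copy.deepcopy(model_a)
    with torch.no_grad():
        for p in model_b.parameters():
            p.add_(0.01 * torch.randn_like(p))        # separate the endpoints
    opt = torch.optim.SGD(list(model_a.parameters()) +
                          list(model_b.parameters()), lr=lr)
    pa, pb = dict(model_a.named_parameters()), dict(model_b.named_parameters())
    for x, y in data:
        alpha = torch.rand(()).item()                 # point on the line
        mixed = {k: (1 - alpha) * pa[k] + alpha * pb[k] for k in pa}
        loss = F.cross_entropy(functional_call(model_a, mixed, (x,)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model_a, model_b

# Toy example: a linear classifier on random data.
model = torch.nn.Linear(8, 3)
batches = [(torch.randn(16, 8), torch.randint(0, 3, (16,))) for _ in range(20)]
endpoint_a, endpoint_b = train_weight_line(model, batches)
```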
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- Semi-supervised Facial Action Unit Intensity Estimation with Contrastive Learning [54.90704746573636]
Our method does not require manually selecting key frames, and produces state-of-the-art results with as little as 2% of annotated frames.
We experimentally validate that our method outperforms existing methods when working with as little as 2% of randomly chosen data.
arXiv Detail & Related papers (2020-11-03T17:35:57Z)
- Leveraging the Feature Distribution in Transfer-based Few-Shot Learning [2.922007656878633]
Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples.
We propose a novel transfer-based method that builds on two successive steps: 1) preprocessing the feature vectors so that they become closer to Gaussian-like distributions, and 2) leveraging this preprocessing using an optimal-transport inspired algorithm.
We prove the ability of the proposed methodology to achieve state-of-the-art accuracy with various datasets, backbone architectures and few-shot settings.
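For flavor, a sketch of the two steps under the assumption of non-negative features (e.g. post-ReLU): a power transform that pushes features toward a more Gaussian-like shape, followed by a generic entropy-regularised (Sinkhorn) transport plan between queries and class prototypes. The exponent, regularisation, and iteration count are illustrative, and the paper's optimal-transport-inspired algorithm is not identical to plain Sinkhorn.

```python
import numpy as np

def power_transform(features, beta=0.5, eps=1e-6):
    """Step 1 sketch: map non-negative feature vectors toward a more
    Gaussian-like distribution with a power transform, then l2-normalise."""
    f = np.power(features + eps, beta)
    return f / np.linalg.norm(f, axis=-1, keepdims=True)

def sinkhorn(cost, r, c, reg=0.1, n_iter=50):
    """Step 2 sketch: entropy-regularised optimal-transport plan between
    query features (rows, marginal r) and class prototypes (columns, marginal c)."""
    K = np.exp(-cost / reg)
    u = np.ones_like(r)
    for _ in range(n_iter):
        u = r / (K @ (c / (K.T @ u)))
    v = c / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy usage with random stand-ins for backbone features.
feats = power_transform(np.abs(np.random.randn(10, 64)))
protos = power_transform(np.abs(np.random.randn(5, 64)))
cost = 1.0 - feats @ protos.T                      # cosine distance after l2-norm
plan = sinkhorn(cost, np.full(10, 1 / 10), np.full(5, 1 / 5))
print(plan.sum(axis=1))                            # each query's mass is about 1/10
```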
arXiv Detail & Related papers (2020-06-06T07:32:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including these summaries) and is not responsible for any consequences of its use.