GraspCaps: A Capsule Network Approach for Familiar 6DoF Object Grasping
- URL: http://arxiv.org/abs/2210.03628v2
- Date: Wed, 29 Nov 2023 19:27:58 GMT
- Title: GraspCaps: A Capsule Network Approach for Familiar 6DoF Object Grasping
- Authors: Tomas van der Velde, Hamed Ayoobi, Hamidreza Kasaei
- Abstract summary: The paper presents GraspCaps, a novel architecture for generating per-point 6D grasp configurations for familiar objects.
In addition, the paper also presents a method for generating a large object-grasping dataset using simulated annealing.
The experimental results showed that the overall object-grasping performance of the proposed approach is significantly better than the selected baseline.
- Score: 6.72184534513047
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As robots become more widely available outside industrial settings, the need
for reliable object grasping and manipulation is increasing. In such
environments, robots must be able to grasp and manipulate novel objects in
various situations. This paper presents GraspCaps, a novel architecture based
on Capsule Networks for generating per-point 6D grasp configurations for
familiar objects. GraspCaps extracts a rich feature vector of the objects
present in the point cloud input, which is then used to generate per-point
grasp vectors. This approach allows the network to learn specific grasping
strategies for each object category. In addition to GraspCaps, the paper also
presents a method for generating a large object-grasping dataset using
simulated annealing. The obtained dataset is then used to train the GraspCaps
network. Through extensive experiments, we evaluate the performance of the
proposed approach, particularly in terms of the success rate of grasping
familiar objects in challenging real and simulated scenarios. The experimental
results showed that the overall object-grasping performance of the proposed
approach is significantly better than the selected baseline. This superior
performance highlights the effectiveness of GraspCaps in achieving
successful object grasping across various scenarios.
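The abstract sketches two technical components: a capsule-network model that maps a point cloud to per-point 6D grasp configurations, and a simulated-annealing procedure for building the training dataset. The Python sketch below illustrates only the general shape of the first component and is not the authors' implementation; the feature dimensions, the 7-number grasp encoding (3D offset plus unit quaternion), and the per-point quality score are illustrative assumptions.

    # Minimal sketch (not the GraspCaps implementation): a head that combines
    # per-point features with a per-object "capsule" feature vector and emits
    # one grasp proposal per point. All sizes and the grasp encoding are
    # assumptions for illustration.
    import torch
    import torch.nn as nn

    class PerPointGraspHead(nn.Module):
        def __init__(self, point_feat_dim=128, capsule_dim=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(point_feat_dim + capsule_dim, 256), nn.ReLU(),
                nn.Linear(256, 128), nn.ReLU(),
            )
            self.grasp_out = nn.Linear(128, 7)    # 3D offset + quaternion
            self.quality_out = nn.Linear(128, 1)  # per-point grasp quality

        def forward(self, point_feats, capsule_feat):
            # point_feats: (B, N, point_feat_dim), capsule_feat: (B, capsule_dim)
            B, N, _ = point_feats.shape
            cap = capsule_feat.unsqueeze(1).expand(B, N, -1)  # broadcast object feature to every point
            h = self.mlp(torch.cat([point_feats, cap], dim=-1))
            grasp = self.grasp_out(h)
            offset, quat = grasp[..., :3], grasp[..., 3:]
            quat = quat / quat.norm(dim=-1, keepdim=True).clamp_min(1e-8)  # keep a valid rotation
            quality = torch.sigmoid(self.quality_out(h)).squeeze(-1)
            return torch.cat([offset, quat], dim=-1), quality

    head = PerPointGraspHead()
    grasps, quality = head(torch.randn(2, 1024, 128), torch.randn(2, 64))
    print(grasps.shape, quality.shape)  # (2, 1024, 7) and (2, 1024)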
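The dataset-generation side can be illustrated with a generic simulated-annealing loop over grasp poses. Only the geometric cooling schedule and the Metropolis acceptance rule are standard here; the scoring function is a hypothetical placeholder for whatever simulator rollout or analytic quality metric the paper's pipeline actually uses.

    # Generic simulated-annealing search for a high-quality grasp pose.
    # `score_grasp` is a toy stand-in, not the paper's objective.
    import math
    import random
    import numpy as np

    def score_grasp(pose):
        # Hypothetical grasp-quality function; pose is xyz + roll/pitch/yaw.
        return -np.linalg.norm(pose)  # toy objective for illustration

    def anneal_grasp(init_pose, steps=2000, t_start=1.0, t_end=1e-3, step_size=0.05):
        pose = np.asarray(init_pose, dtype=float)
        score = score_grasp(pose)
        best = (pose.copy(), score)
        for i in range(steps):
            t = t_start * (t_end / t_start) ** (i / max(steps - 1, 1))  # geometric cooling
            cand = pose + np.random.normal(scale=step_size, size=pose.shape)  # perturb the pose
            cand_score = score_grasp(cand)
            # always accept improvements; accept worse poses with Boltzmann probability
            if cand_score > score or random.random() < math.exp((cand_score - score) / t):
                pose, score = cand, cand_score
                if score > best[1]:
                    best = (pose.copy(), score)
        return best

    pose, quality = anneal_grasp(np.random.uniform(-1.0, 1.0, size=6))
    print(pose.round(3), round(float(quality), 4))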
Related papers
- RPCANet++: Deep Interpretable Robust PCA for Sparse Object Segmentation [51.37553739930992]
RPCANet++ is a sparse object segmentation framework that fuses the interpretability of RPCA with efficient deep architectures.
Our approach unfolds a relaxed RPCA model into a structured network comprising a Background Approximation Module (BAM), an Object Extraction Module (OEM) and an Image Restoration Module (IRM).
Experiments on diverse datasets demonstrate that RPCANet++ achieves state-of-the-art performance under various imaging scenarios.
arXiv Detail & Related papers (2025-08-06T08:19:37Z) - Purifying, Labeling, and Utilizing: A High-Quality Pipeline for Small Object Detection [83.90563802153707]
PLUSNet is a high-quality small object detection framework.
It comprises three components: the Hierarchical Feature (HFP) framework for purifying upstream features, the Multiple Criteria Label Assignment (MCLA) for improving the quality of midstream training samples, and the Frequency Decoupled Head (FDHead) for more effectively exploiting information to accomplish downstream tasks.
arXiv Detail & Related papers (2025-04-29T10:11:03Z) - LAC-Net: Linear-Fusion Attention-Guided Convolutional Network for Accurate Robotic Grasping Under the Occlusion [79.22197702626542]
This paper introduces a framework that explores amodal segmentation for robotic grasping in cluttered scenes.
We propose a Linear-fusion Attention-guided Convolutional Network (LAC-Net).
The results on different datasets show that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-08-06T14:50:48Z) - Hierarchical Object-Centric Learning with Capsule Networks [0.0]
Capsule networks (CapsNets) were introduced to address the limitations of convolutional neural networks.
This thesis investigates the intriguing aspects of CapsNets and focuses on three key questions to unlock their full potential.
arXiv Detail & Related papers (2024-05-30T09:10:33Z) - AGILE: Approach-based Grasp Inference Learned from Element Decomposition [2.812395851874055]
Humans can grasp objects by taking into account hand-object positioning information.
This work proposes a method that enables a robot manipulator to learn the same, grasping objects in an optimal way.
arXiv Detail & Related papers (2024-02-02T10:47:08Z) - GraNet: A Multi-Level Graph Network for 6-DoF Grasp Pose Generation in Cluttered Scenes [0.5755004576310334]
GraNet is a graph-based grasp pose generation framework that translates a point cloud scene into multi-level graphs.
Our pipeline can thus characterize the spatial distribution of grasps in cluttered scenes, leading to a higher rate of effective grasping.
Our method achieves state-of-the-art performance on the large-scale GraspNet-1Billion benchmark, especially in grasping unseen objects.
arXiv Detail & Related papers (2023-12-06T08:36:29Z) - Aligning Pretraining for Detection via Object-Level Contrastive Learning [57.845286545603415]
Image-level contrastive representation learning has proven to be highly effective as a generic model for transfer learning.
We argue that this could be sub-optimal and thus advocate a design principle which encourages alignment between the self-supervised pretext task and the downstream task.
Our method, called Selective Object COntrastive learning (SoCo), achieves state-of-the-art results for transfer performance on COCO detection.
arXiv Detail & Related papers (2021-06-04T17:59:52Z) - Deformable Capsules for Object Detection [3.702343116848637]
We introduce a new family of capsule networks, deformable capsules (DeformCaps), to address a very important problem in computer vision: object detection.
We demonstrate that the proposed methods efficiently scale up to create the first-ever capsule network for object detection in the literature.
arXiv Detail & Related papers (2021-04-11T15:36:30Z) - Few-shot Weakly-Supervised Object Detection via Directional Statistics [55.97230224399744]
We propose a probabilistic multiple instance learning approach for few-shot Common Object Localization (COL) and few-shot Weakly Supervised Object Detection (WSOD).
Our model simultaneously learns the distribution of the novel objects and localizes them via expectation-maximization steps.
Our experiments show that the proposed method, despite being simple, outperforms strong baselines in few-shot COL and WSOD, as well as large-scale WSOD tasks.
arXiv Detail & Related papers (2021-03-25T22:34:16Z) - Where2Act: From Pixels to Actions for Articulated 3D Objects [54.19638599501286]
We extract highly localized actionable information related to elementary actions such as pushing or pulling for articulated objects with movable parts.
We propose a learning-from-interaction framework with an online data sampling strategy that allows us to train the network in simulation.
Our learned models even transfer to real-world data.
arXiv Detail & Related papers (2021-01-07T18:56:38Z) - Wasserstein Routed Capsule Networks [90.16542156512405]
We propose a new parameter-efficient capsule architecture that is able to tackle complex tasks.
We show that our network is able to substantially outperform other capsule approaches by over 1.2% on CIFAR-10.
arXiv Detail & Related papers (2020-07-22T14:38:05Z) - MOPS-Net: A Matrix Optimization-driven Network for Task-Oriented 3D Point Cloud Downsampling [86.42733428762513]
MOPS-Net is a novel interpretable deep learning-based method, driven by matrix optimization, for task-oriented point cloud downsampling.
We show that MOPS-Net can achieve favorable performance against state-of-the-art deep learning-based methods over various tasks.
arXiv Detail & Related papers (2020-05-01T14:01:53Z) - Extended Target Tracking and Classification Using Neural Networks [1.2891210250935146]
State-of-the-art extended target tracking (ETT) algorithms can track the dynamic behaviour of objects and learn their shapes simultaneously.
We propose to use a naive deep neural network, consisting of one input layer, two hidden layers, and one output layer, to classify dynamic objects according to their shape estimates; a minimal sketch of such a network appears after this list.
arXiv Detail & Related papers (2020-02-13T12:02:52Z)
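As referenced in the last entry above, a classifier with one input, two hidden, and one output layer is simply a small fully connected network. The sketch below is purely illustrative: the 9-dimensional shape-estimate input and the three object classes are assumptions, not values taken from that paper.

    # Minimal fully connected classifier: input layer -> two hidden layers -> output layer.
    # Input dimension (9 shape-estimate features) and the 3 classes are assumptions.
    import torch
    import torch.nn as nn

    shape_classifier = nn.Sequential(
        nn.Linear(9, 64), nn.ReLU(),   # input features -> first hidden layer
        nn.Linear(64, 32), nn.ReLU(),  # second hidden layer
        nn.Linear(32, 3),              # output layer: class logits
    )

    logits = shape_classifier(torch.randn(4, 9))
    print(logits.softmax(dim=-1).shape)  # torch.Size([4, 3])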
This list is automatically generated from the titles and abstracts of the papers on this site.