DcnnGrasp: Towards Accurate Grasp Pattern Recognition with Adaptive Regularizer Learning
- URL: http://arxiv.org/abs/2205.05218v1
- Date: Wed, 11 May 2022 00:34:27 GMT
- Title: DcnnGrasp: Towards Accurate Grasp Pattern Recognition with Adaptive Regularizer Learning
- Authors: Xiaoqin Zhang, Ziwei Huang, Jingjing Zheng, Shuo Wang, and Xianta Jiang
- Abstract summary: Current state-of-the-art methods ignore the category information of objects, which is crucial for grasp pattern recognition.
This paper presents a novel dual-branch convolutional neural network (DcnnGrasp) to achieve joint learning of object category classification and grasp pattern recognition.
- Score: 13.08779945306727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of grasp pattern recognition aims to derive the applicable grasp
types of an object from its visual information. Current state-of-the-art methods
ignore the category information of objects, which is crucial for grasp pattern
recognition. This paper presents a novel dual-branch convolutional neural network
(DcnnGrasp) to achieve joint learning of object category classification and grasp
pattern recognition. DcnnGrasp takes object category classification as an auxiliary
task to improve the effectiveness of grasp pattern recognition. Meanwhile, a new
loss function, a joint cross-entropy with an adaptive regularizer, is derived by
maximizing a posterior, which significantly improves model performance. In
addition, a training strategy based on the new loss function is proposed to
maximize collaborative learning between the two tasks. Experiments were performed
on five household object datasets: the RGB-D Object dataset, the Hit-GPRec
dataset, the Amsterdam Library of Object Images (ALOI), the Columbia University
Image Library (COIL-100), and MeganePro dataset 1. The results demonstrate that
the proposed method achieves competitive grasp pattern recognition performance
compared with several state-of-the-art methods; notably, it outperformed the
second-best method by nearly 15% in global accuracy in the novel-object test on
the RGB-D Object dataset.
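To make the joint objective concrete, the following is a minimal PyTorch-style sketch of a dual-branch network trained with a joint cross-entropy loss whose auxiliary weight adapts during training. It is an illustrative assumption, not the paper's implementation: the names (DualBranchNet, joint_loss, adaptive_lambda), the tiny backbone, and the accuracy-based weighting heuristic are placeholders for the MAP-derived regularizer described in the abstract.

```python
# Minimal sketch (not the authors' code): shared backbone, two heads
# (object category as auxiliary task, grasp type as main task), and a
# joint cross-entropy loss with an adaptively weighted auxiliary term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchNet(nn.Module):
    def __init__(self, num_categories: int, num_grasps: int):
        super().__init__()
        # Small stand-in for the paper's convolutional backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.category_head = nn.Linear(64, num_categories)  # auxiliary branch
        self.grasp_head = nn.Linear(64, num_grasps)          # main branch

    def forward(self, x):
        feats = self.backbone(x)
        return self.category_head(feats), self.grasp_head(feats)

def adaptive_lambda(cat_logits, cat_labels, base=1.0):
    # Illustrative heuristic only: weight the auxiliary loss more while the
    # category branch is inaccurate, decay it as the branch converges.
    # (The paper derives its adaptive regularizer by maximizing a posterior.)
    with torch.no_grad():
        acc = (cat_logits.argmax(dim=1) == cat_labels).float().mean().item()
    return base * (1.0 - acc)

def joint_loss(cat_logits, grasp_logits, cat_labels, grasp_labels, lam):
    # Joint cross-entropy: grasp-recognition loss plus a lambda-weighted
    # category-classification term acting as the regularizer.
    grasp_ce = F.cross_entropy(grasp_logits, grasp_labels)
    cat_ce = F.cross_entropy(cat_logits, cat_labels)
    return grasp_ce + lam * cat_ce

# Toy usage with random tensors in place of real images and labels
# (51 categories / 4 grasp types are example sizes, not dataset facts).
model = DualBranchNet(num_categories=51, num_grasps=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 224, 224)
cat_y, grasp_y = torch.randint(0, 51, (8,)), torch.randint(0, 4, (8,))
cat_logits, grasp_logits = model(images)
loss = joint_loss(cat_logits, grasp_logits, cat_y, grasp_y,
                  lam=adaptive_lambda(cat_logits, cat_y))
opt.zero_grad()
loss.backward()
opt.step()
```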
Related papers
- Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning [42.14439854721613]
We propose a prototypical network with a Bayesian learning-driven contrastive loss (BLCL) tailored specifically for class-incremental learning scenarios.
Our approach dynamically adapts the balance between the cross-entropy and contrastive loss functions with a Bayesian learning technique.
arXiv Detail & Related papers (2024-05-17T19:49:02Z)
- Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516]
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We introduce Principal Component Analysis (PCA) to localize object regions.
arXiv Detail & Related papers (2023-07-07T04:03:48Z)
- Regularization Through Simultaneous Learning: A Case Study on Plant Classification [0.0]
This paper introduces Simultaneous Learning, a regularization approach drawing on principles of Transfer Learning and Multi-task Learning.
We leverage auxiliary datasets with the target dataset, the UFOP-HVD, to facilitate simultaneous classification guided by a customized loss function.
Remarkably, our approach demonstrates superior performance over models without regularization.
arXiv Detail & Related papers (2023-05-22T19:44:57Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are only available on a source dataset, but unavailable on a target dataset in the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming the state-of-the-arts that require object detection and human pose by a clear margin.
arXiv Detail & Related papers (2022-03-10T23:35:00Z)
- Interpolation-based semi-supervised learning for object detection [44.37685664440632]
We propose an Interpolation-based Semi-supervised learning method for object detection.
The proposed losses dramatically improve the performance of semi-supervised learning as well as supervised learning.
arXiv Detail & Related papers (2020-06-03T10:53:44Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- Adaptive Object Detection with Dual Multi-Label Prediction [78.69064917947624]
We propose a novel end-to-end unsupervised deep domain adaptation model for adaptive object detection.
The model exploits multi-label prediction to reveal the object category information in each image.
We introduce a prediction consistency regularization mechanism to assist object detection.
arXiv Detail & Related papers (2020-03-29T04:23:22Z)
- Pairwise Similarity Knowledge Transfer for Weakly Supervised Object Localization [53.99850033746663]
We study the problem of learning localization model on target classes with weakly supervised image labels.
In this work, we argue that learning only an objectness function is a weak form of knowledge transfer.
Experiments on the COCO and ILSVRC 2013 detection datasets show that the performance of the localization model improves significantly with the inclusion of pairwise similarity function.
arXiv Detail & Related papers (2020-03-18T17:53:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.