Learning suction graspability considering grasp quality and robot
reachability for bin-picking
- URL: http://arxiv.org/abs/2111.02571v1
- Date: Thu, 4 Nov 2021 00:55:42 GMT
- Title: Learning suction graspability considering grasp quality and robot
reachability for bin-picking
- Authors: Ping Jiang, Junji Oaki, Yoshiyuki Ishihara, Junichiro Ooga, Haifeng
Han, Atsushi Sugahara, Seiji Tokura, Haruna Eto, Kazuma Komoda, and Akihito
Ogawa
- Abstract summary: We propose an intuitive geometric analytic-based grasp quality evaluation metric.
We further incorporate a reachability evaluation metric.
Experiment results show that our intuitive grasp quality evaluation metric is competitive with a physically-inspired metric.
- Score: 4.317666242093779
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has been widely used for inferring robust grasps. Although
human-labeled RGB-D datasets were initially used to learn grasp configurations,
preparation of this kind of large dataset is expensive. To address this
problem, images were generated by a physical simulator, and a physically
inspired model (e.g., a contact model between a suction vacuum cup and object)
was used as a grasp quality evaluation metric to annotate the synthesized
images. However, this kind of contact model is complicated and requires
experimental parameter identification to ensure real-world performance. In
addition, previous studies have not considered manipulator reachability, i.e.,
cases in which a grasp configuration with high grasp quality cannot reach the
target because of collisions or the physical limitations of the robot. In this
study, we propose an intuitive geometric analytic-based grasp quality
evaluation metric, and we further incorporate a reachability evaluation metric. We
annotate the pixel-wise grasp quality and reachability with the proposed
evaluation metrics on synthesized images in a simulator to train an
auto-encoder-decoder called suction graspability U-Net++ (SG-U-Net++).
Experimental results show that our intuitive grasp quality evaluation metric is
competitive with a physically inspired metric. Learning the reachability helps
to reduce motion planning computation time by removing obviously unreachable
candidates. The system achieves an overall picking speed of 560 PPH (pieces per
hour).
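
To make the pixel-wise annotation idea concrete, here is a minimal Python sketch, not the authors' implementation: it scores each pixel of a synthetic depth image by how uniformly the local surface normals agree inside a flat suction-cup footprint, then masks out pixels a simulator has flagged as unreachable. All names (`grasp_quality`, `cup_radius_px`, `normal_angle_thresh`) and the simplified normal computation are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed, not the authors' code) of geometric, pixel-wise
# suction-grasp annotation on a synthetic depth image.
import numpy as np

def surface_normals(depth):
    """Rough per-pixel surface normals from depth gradients (pinhole-camera
    effects ignored for brevity); adequate for a qualitative geometric score."""
    dz_dv, dz_du = np.gradient(depth)                 # row/col depth gradients
    n = np.dstack((-dz_du, -dz_dv, np.ones_like(depth)))
    return n / (np.linalg.norm(n, axis=2, keepdims=True) + 1e-9)

def grasp_quality(depth, cup_radius_px=8, normal_angle_thresh=np.deg2rad(15)):
    """Score each pixel by the fraction of neighbors inside a hypothetical
    circular suction-cup footprint whose normals agree with the center normal,
    a purely geometric proxy for whether a vacuum seal could form."""
    n = surface_normals(depth)
    h, w = depth.shape
    r = cup_radius_px
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (xs ** 2 + ys ** 2) <= r ** 2              # cup footprint mask
    cos_thresh = np.cos(normal_angle_thresh)
    quality = np.zeros((h, w), dtype=np.float32)
    for v in range(r, h - r):
        for u in range(r, w - r):
            patch = n[v - r:v + r + 1, u - r:u + r + 1][disk]
            quality[v, u] = np.mean(patch @ n[v, u] > cos_thresh)
    return quality

def make_labels(depth, reachable_mask, cup_radius_px=8):
    """Pair the geometric quality map with a reachability mask, assumed to
    come from the simulator's IK/collision checks, yielding pixel-wise
    targets for an SG-U-Net++-style encoder-decoder."""
    q = grasp_quality(depth, cup_radius_px)
    return q, q * reachable_mask.astype(np.float32)
```

In the pipeline described by the abstract, such pixel-wise maps annotate simulator-rendered images; SG-U-Net++ then learns to predict them directly from sensor input, and the reachability channel lets motion planning discard obviously unreachable candidates up front.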
Related papers
- Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels.
arXiv Detail & Related papers (2024-06-26T05:30:21Z)
- Enhancing Digital Hologram Reconstruction Using Reverse-Attention Loss for Untrained Physics-Driven Deep Learning Models with Uncertain Distance [10.788482076164314]
We present a pioneering approach to addressing the autofocusing challenge in untrained deep-learning methods.
Our method delivers significant reconstruction performance gains over rival methods.
For example, the difference is less than 1 dB in PSNR and 0.002 in SSIM for the target sample.
arXiv Detail & Related papers (2024-01-11T01:30:46Z)
- DeepSimHO: Stable Pose Estimation for Hand-Object Interaction via Physics Simulation [81.11585774044848]
We present DeepSimHO, a novel deep-learning pipeline that combines forward physics simulation and backward gradient approximation with a neural network.
Our method noticeably improves the stability of the estimation and achieves superior efficiency over test-time optimization.
arXiv Detail & Related papers (2023-10-11T05:34:36Z)
- CONVIQT: Contrastive Video Quality Estimator [63.749184706461826]
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms.
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner.
Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
arXiv Detail & Related papers (2022-06-29T15:22:01Z)
- Information-Theoretic Odometry Learning [83.36195426897768]
We propose a unified information-theoretic framework for learning-motivated methods aimed at odometry estimation.
The proposed framework provides an elegant tool for performance evaluation and understanding in information-theoretic language.
arXiv Detail & Related papers (2022-03-11T02:37:35Z)
- Cut and Continuous Paste towards Real-time Deep Fall Detection [12.15584530151789]
We propose a simple and efficient framework to detect falls through a single and small-sized convolutional neural network.
We first introduce a new image synthesis method that represents human motion in a single frame.
At the inference step, we also represent real human motion in a single image by estimating the mean of the input frames.
arXiv Detail & Related papers (2022-02-22T06:07:16Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work, including state-of-the-art methods designed specifically for the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning attains accuracy similar to the standard active learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z)
- Robust Ego and Object 6-DoF Motion Estimation and Tracking [5.162070820801102]
This paper proposes a robust solution for accurate estimation and consistent trackability in dynamic multi-body visual odometry.
A compact and effective framework is proposed leveraging recent advances in semantic instance-level segmentation and accurate optical flow estimation.
A novel formulation that jointly optimizes SE(3) motion and optical flow is introduced, improving the quality of the tracked points and the motion estimation accuracy.
arXiv Detail & Related papers (2020-07-28T05:12:56Z)
- Fast Modeling and Understanding Fluid Dynamics Systems with Encoder-Decoder Networks [0.0]
We show that an accurate deep-learning-based proxy model can be taught efficiently by a finite-volume-based simulator.
Compared to traditional simulation, the proposed deep learning approach enables much faster forward computation.
We quantify the sensitivity of the deep learning model to key physical parameters and thereby demonstrate that inversion problems can be solved with substantial acceleration.
arXiv Detail & Related papers (2020-06-09T17:14:08Z)
- Gaining a Sense of Touch. Physical Parameters Estimation using a Soft Gripper and Neural Networks [3.0892724364965005]
Little research exists on estimating physical parameters with deep learning from measurements obtained through direct interaction with objects using robotic grippers.
We propose a trainable system for regressing a stiffness coefficient and provide extensive experiments in a physics simulator environment.
Our system can reliably estimate the stiffness of an object using the Yale OpenHand soft gripper based on readings from Inertial Measurement Units (IMUs) attached to its fingers.
arXiv Detail & Related papers (2020-03-02T11:56:14Z)