Scoring Graspability based on Grasp Regression for Better Grasp Prediction
- URL: http://arxiv.org/abs/2002.00872v3
- Date: Wed, 31 Mar 2021 08:09:26 GMT
- Title: Scoring Graspability based on Grasp Regression for Better Grasp Prediction
- Authors: Amaury Depierre (imagine), Emmanuel Dellandréa (imagine), Liming Chen (imagine)
- Abstract summary: Current state-of-the-art methods rely on deep neural networks trained to jointly predict a graspability score together with a regression of an offset with respect to grasp reference parameters.
In this paper, we extend a state-of-the-art neural network with a scorer that evaluates the graspability of a given position, and introduce a novel loss function which correlates regression of grasp parameters with graspability score.
- Score: 2.835565391455372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grasping objects is one of the most important abilities that a robot needs to
master in order to interact with its environment. Current state-of-the-art
methods rely on deep neural networks trained to jointly predict a graspability
score together with a regression of an offset with respect to grasp reference
parameters. However, these two predictions are performed independently, which
can lead to a decrease in the actual graspability score when applying the
predicted offset. Therefore, in this paper, we extend a state-of-the-art neural
network with a scorer that evaluates the graspability of a given position, and
introduce a novel loss function that correlates the regression of grasp
parameters with the graspability score. We show that this novel architecture
improves performance on the Jacquard dataset from 82.13% for a state-of-the-art
grasp detection network to 85.74%. When the learned model is transferred onto a
real robot, the proposed method correlating graspability and grasp regression
achieves a 92.4% success rate, compared to 88.1% for the baseline trained
without the correlation.
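To make the idea concrete, the sketch below shows one way such a coupling can be implemented: a two-headed network predicts grasp-parameter offsets and a graspability score, and the score is trained against the quality of the grasp actually produced by the regressor rather than against an independent label. This is a minimal illustrative sketch, not the authors' architecture or exact loss; the GraspHead module, the grasp_quality proxy, and the (x, y, theta, w, h) parameterization are assumptions made for readability.

```python
# Minimal sketch (PyTorch) of a grasp head whose graspability score is tied to
# the regressed grasp. Not the paper's exact architecture or loss function:
# GraspHead, grasp_quality and the (x, y, theta, w, h) parameterization are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraspHead(nn.Module):
    """Two-headed predictor: grasp-parameter offsets + graspability logit."""

    def __init__(self, in_features: int):
        super().__init__()
        self.reg = nn.Linear(in_features, 5)    # offsets for (x, y, theta, w, h)
        self.score = nn.Linear(in_features, 1)  # graspability logit

    def forward(self, feats: torch.Tensor):
        return self.reg(feats), self.score(feats).squeeze(-1)


def grasp_quality(pred_params: torch.Tensor, gt_params: torch.Tensor) -> torch.Tensor:
    # Hypothetical quality proxy in (0, 1]: close to 1 when the regressed grasp
    # matches the ground truth (a real system might use rectangle IoU instead).
    err = F.smooth_l1_loss(pred_params, gt_params, reduction="none").mean(dim=-1)
    return torch.exp(-err)


def correlated_loss(pred_params, score_logits, gt_params):
    # Standard regression of the grasp parameters towards the ground truth.
    reg_loss = F.smooth_l1_loss(pred_params, gt_params)
    # The graspability target is derived from the regressed grasp itself, so the
    # score reflects how good the grasp is after applying the predicted offsets.
    target = grasp_quality(pred_params.detach(), gt_params)
    score_loss = F.binary_cross_entropy_with_logits(score_logits, target)
    return reg_loss + score_loss


# Usage example on random features and ground-truth grasp parameters.
if __name__ == "__main__":
    head = GraspHead(in_features=128)
    feats = torch.randn(8, 128)
    gt = torch.randn(8, 5)
    params, logits = head(feats)
    print(correlated_loss(params, logits, gt))
```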
Related papers
- VIKING: Deep variational inference with stochastic projections [48.946143517489496]
Variational mean field approximations tend to struggle with contemporary overparametrized deep neural networks.
We propose a simple variational family that considers two independent linear subspaces of the parameter space.
This allows us to build a fully-correlated approximate posterior reflecting the overparametrization.
arXiv Detail & Related papers (2025-10-27T15:38:35Z)
- Automatic debiasing of neural networks via moment-constrained learning [0.0]
Naively learning the regression function and taking a sample mean of the target functional results in biased estimators.
We propose moment-constrained learning as a new Riesz representer (RR) learning approach that addresses some shortcomings in automatic debiasing.
arXiv Detail & Related papers (2024-09-29T20:56:54Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- The Effectiveness of a Dynamic Loss Function in Neural Network Based Automated Essay Scoring [0.0]
We present a dynamic loss function that creates an incentive for the model to predict with the correct distribution, as well as to predict the correct values.
Our loss function achieves this goal without sacrificing any performance, reaching a Quadratic Weighted Kappa score of 0.752 on the Automated Student Assessment Prize Automated Essay Scoring dataset.
arXiv Detail & Related papers (2023-05-15T16:39:35Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Regression modelling of spatiotemporal extreme U.S. wildfires via partially-interpretable neural networks [0.0]
We propose a new methodological framework for performing extreme quantile regression using artificial neural networks.
We unify linear and additive regression methodology with deep learning to create partially-interpretable neural networks.
arXiv Detail & Related papers (2022-08-16T07:42:53Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models that perform inference on inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Robust Learning via Persistency of Excitation [4.674053902991301]
We show that network training using gradient descent is equivalent to a dynamical system parameter estimation problem.
We provide an efficient technique for estimating the corresponding Lipschitz constant using extreme value theory.
Our approach also universally increases the adversarial accuracy by 0.1 to 0.3 percentage points in various state-of-the-art adversarially trained models.
arXiv Detail & Related papers (2021-06-03T18:49:05Z)
- Interpretable Social Anchors for Human Trajectory Forecasting in Crowds [84.20437268671733]
We propose a neural network-based system to predict human trajectories in crowds.
We learn interpretable rule-based intents, and then utilise the expressibility of neural networks to model the scene-specific residual.
Our architecture is tested on the interaction-centric benchmark TrajNet++.
arXiv Detail & Related papers (2021-05-07T09:22:34Z)
- Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning [97.28695683236981]
More gradient updates decrease the expressivity of the current value network.
We demonstrate this phenomenon on Atari and Gym benchmarks, in both offline and online RL settings.
arXiv Detail & Related papers (2020-10-27T17:55:16Z)
- A Locally Adaptive Interpretable Regression [7.4267694612331905]
Linear regression is one of the most interpretable prediction models.
In this work, we introduce a locally adaptive interpretable regression (LoAIR).
Our model achieves comparable or better predictive performance than the other state-of-the-art baselines.
arXiv Detail & Related papers (2020-05-07T09:26:14Z)
- Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.