Robust Loss Functions for Object Grasping under Limited Ground Truth
- URL: http://arxiv.org/abs/2409.05742v1
- Date: Mon, 9 Sep 2024 15:56:34 GMT
- Title: Robust Loss Functions for Object Grasping under Limited Ground Truth
- Authors: Yangfan Deng, Mengyao Zhang, Yong Zhao
- Abstract summary: We address missing or noisy ground truth when training convolutional neural networks.
For missing ground truth, a new predicted category probability method is defined for unlabeled samples.
For noisy ground truth, a symmetric loss function is introduced to resist the corruption of label noises.
- Score: 3.794161613920474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object grasping is a crucial technology that enables robots to perceive and interact with their environment effectively. In practical applications, however, researchers face missing or noisy ground truth while training the convolutional neural network, which decreases the accuracy of the model. We therefore propose different loss functions to deal with these problems and improve the accuracy of the neural network. For missing ground truth, a new predicted category probability method is defined for unlabeled samples, which works effectively in conjunction with the pseudo-labeling method. For noisy ground truth, a symmetric loss function is introduced to resist the corruption of label noise. The proposed loss functions are powerful, robust, and easy to use. Experimental results on a typical grasping neural network show that our method improves performance by 2 to 13 percent.
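The abstract does not spell out the exact formulas, so the sketch below only illustrates the two ingredients it names: a confidence-based pseudo-labeling term for unlabeled samples and a symmetric, noise-tolerant loss for noisy labels (written here in the well-known symmetric cross-entropy form, which may differ from the paper's). All hyperparameter values and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, alpha=0.1, beta=1.0, eps=1e-4):
    """Symmetric CE = alpha * CE + beta * reverse CE, a well-known
    noise-tolerant loss; illustrative, not necessarily the paper's form."""
    ce = F.cross_entropy(logits, targets)
    pred = F.softmax(logits, dim=1).clamp(min=eps)
    one_hot = F.one_hot(targets, logits.size(1)).float().clamp(min=eps)
    rce = -(pred * one_hot.log()).sum(dim=1).mean()
    return alpha * ce + beta * rce

def pseudo_label_loss(logits_unlabeled, threshold=0.95):
    """Keep only unlabeled samples whose predicted category probability
    clears a confidence threshold, then train on the hard pseudo-label."""
    probs = F.softmax(logits_unlabeled, dim=1)
    confidence, pseudo = probs.max(dim=1)
    mask = confidence.ge(threshold).float()
    per_sample = F.cross_entropy(logits_unlabeled, pseudo, reduction="none")
    return (mask * per_sample).mean()
```

In a combined objective one would typically sum the supervised symmetric loss on labeled data with the masked pseudo-label loss on unlabeled data, weighted by a tunable coefficient.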
Related papers
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable by our training procedure, including the optimizer and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z) - Mitigating the Impact of Labeling Errors on Training via Rockafellian Relaxation [0.8741284539870512]
We propose and study the implementation of Rockafellian Relaxation (RR) for neural network training.
RR can enhance standard neural network methods to achieve robust performance across classification tasks.
We find that RR can mitigate the effects of dataset corruption due to both (heavy) labeling error and/or adversarial perturbation.
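As a loose sketch of the reweighting idea behind such a relaxation (the paper's exact formulation and constraint handling will differ), one can attach a perturbation u to the nominal 1/n sample weights and penalize it, so that high-loss, likely mislabeled samples can be down-weighted; everything below is an illustrative assumption.

```python
import torch

def rr_objective(per_sample_losses, u, theta=0.1):
    """Illustrative Rockafellian-style relaxation: perturb the nominal
    1/n sample weights by u (penalized in l1 norm), letting the optimizer
    shift weight away from samples whose loss suggests a corrupted label."""
    n = per_sample_losses.numel()
    weights = (1.0 / n + u).clamp(min=0.0)   # weights must stay non-negative
    weights = weights / weights.sum()        # renormalize to a distribution
    return (weights * per_sample_losses).sum() + theta * u.abs().sum()

# u would be a trainable tensor updated jointly with the network weights,
# e.g. u = torch.zeros(n, requires_grad=True).
```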
arXiv Detail & Related papers (2024-05-30T23:13:01Z) - SGD method for entropy error function with smoothing l0 regularization for neural networks [3.108634881604788]
The entropy error function has been widely used in neural networks.
We propose a novel entropy function with smoothing l0 regularization for feed-forward neural networks.
The proposed function enables neural networks to learn effectively, producing more accurate predictions.
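A minimal sketch of what such an objective could look like, assuming the "entropy error function" is cross-entropy and using one common smooth l0 surrogate, w^2/(w^2 + beta); the paper's actual smoothing may differ:

```python
import torch.nn.functional as F

def smoothed_l0(w, beta=1e-2):
    """Smooth l0 surrogate: w^2 / (w^2 + beta) is ~1 for |w| >> sqrt(beta)
    and ~0 near zero, so its sum approximates the number of nonzero
    weights while remaining differentiable."""
    sq = w.pow(2)
    return (sq / (sq + beta)).sum()

def regularized_entropy_loss(logits, targets, model, lam=1e-4):
    """Cross-entropy ('entropy error') plus the smoothed-l0 penalty over
    all weights; the whole objective can be minimized with plain SGD."""
    penalty = sum(smoothed_l0(p) for p in model.parameters())
    return F.cross_entropy(logits, targets) + lam * penalty
```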
arXiv Detail & Related papers (2024-05-28T19:54:26Z) - Doubly Robust Causal Effect Estimation under Networked Interference via Targeted Learning [24.63284452991301]
We propose a doubly robust causal effect estimator under networked interference.
Specifically, we generalize the targeted learning technique into the networked interference setting.
We devise an end-to-end causal effect estimator by transforming the identified theoretical condition into a targeted loss.
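For intuition only, the classical (non-networked) doubly robust AIPW estimator combines an outcome model and a propensity model so that the estimate stays consistent if either one is correctly specified; the paper generalizes this targeted-learning idea to networked interference, which the sketch below does not capture:

```python
import numpy as np

def aipw_ate(y, t, mu1, mu0, e):
    """Classical doubly robust (AIPW) estimator of the average treatment
    effect: consistent if either the outcome model (mu1, mu0) or the
    propensity model (e) is correctly specified.
    y: outcomes; t: binary treatment indicators;
    mu1, mu0: outcome predictions under treatment/control; e: propensities."""
    treated = mu1 + t * (y - mu1) / e
    control = mu0 + (1 - t) * (y - mu0) / (1 - e)
    return np.mean(treated - control)
```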
arXiv Detail & Related papers (2024-05-06T10:49:51Z) - Noise-Robust Loss Functions: Enhancing Bounded Losses for Large-Scale Noisy Data Learning [0.0]
Large annotated datasets inevitably contain noisy labels, which poses a major challenge for training deep neural networks as they easily memorize the labels.
Noise-robust loss functions have emerged as a notable strategy to counteract this issue, but it remains challenging to create a robust loss function which is not susceptible to underfitting.
We propose a novel method denoted as logit bias, which adds a real number $\epsilon$ to the logit at the position of the correct class.
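Taking the abstract's description literally, a minimal sketch of the logit-bias idea might look as follows (the value of epsilon and the choice of base loss are assumptions; the paper pairs the bias with bounded losses):

```python
import torch
import torch.nn.functional as F

def logit_bias_loss(logits, targets, epsilon=2.0, base_loss=F.cross_entropy):
    """Add a real number epsilon to the logit at the position of the
    correct class before applying the base loss, nudging bounded losses
    away from underfitting (epsilon=2.0 is an illustrative value)."""
    biased = logits.clone()
    biased[torch.arange(logits.size(0)), targets] += epsilon
    return base_loss(biased, targets)
```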
arXiv Detail & Related papers (2023-06-08T18:38:55Z) - Fast Exploration of the Impact of Precision Reduction on Spiking Neural Networks [63.614519238823206]
Spiking Neural Networks (SNNs) are a practical choice when the target hardware sits at the edge of computing.
We employ an Interval Arithmetic (IA) model to develop an exploration methodology that takes advantage of the capability of such a model to propagate the approximation error.
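A toy version of the interval-arithmetic idea, assuming nothing about the paper's actual model: represent each reduced-precision value as an interval around its nominal value and propagate the bounds through the computation.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """A value known only up to bounds [lo, hi]; arithmetic on intervals
    yields guaranteed bounds on the result, so precision-reduction error
    can be propagated through a network layer by layer."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

# A weight quantized with step 0.02 is known only to within +/- 0.01:
w = Interval(0.49, 0.51)
x = Interval(1.0, 1.0)                  # exact input
y = w * x + Interval(0.10, 0.10)        # bounds on w*x + b
print(y.lo, y.hi)                       # 0.59 0.61
```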
arXiv Detail & Related papers (2022-11-22T15:08:05Z) - Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z) - Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep neural networks has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z) - Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - FaultFace: Deep Convolutional Generative Adversarial Network (DCGAN) based Ball-Bearing Failure Detection Method [4.543665832042712]
This paper proposes a methodology called FaultFace for failure detection on Ball-Bearing joints for rotational shafts.
A Deep Convolutional Generative Adversarial Network is employed to produce new faceportraits of the nominal and failure behaviors, yielding a balanced dataset.
arXiv Detail & Related papers (2020-07-30T06:37:53Z) - Feature Purification: How Adversarial Training Performs Robust Deep Learning [66.05472746340142]
We present a principle we call Feature Purification: one cause of adversarial examples is the accumulation of certain small dense mixtures in the hidden weights during the training process of a neural network.
We present both experiments on the CIFAR-10 dataset to illustrate this principle, and a theoretical result proving that, for certain natural classification tasks, training a two-layer neural network with ReLU activation using randomly initialized gradient descent indeed satisfies this principle.
arXiv Detail & Related papers (2020-05-20T16:56:08Z) - Explicitly Trained Spiking Sparsity in Spiking Neural Networks with Backpropagation [7.952659059689134]
Spiking Neural Networks (SNNs) are being explored for their potential energy efficiency resulting from sparse, event-driven computations.
We propose an explicit inclusion of spike counts in the loss function, along with a traditional error loss, to optimize weight parameters for both accuracy and spiking sparsity.
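A minimal sketch of such a composite objective, assuming a cross-entropy error term and a simple total spike count (the weighting lam and the count formulation are illustrative assumptions):

```python
import torch.nn.functional as F

def sparsity_aware_loss(logits, targets, spike_counts, lam=1e-3):
    """Traditional error loss plus an explicit spike-count term, trading
    off accuracy against spiking sparsity; lam controls the trade-off."""
    return F.cross_entropy(logits, targets) + lam * spike_counts.sum()
```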
arXiv Detail & Related papers (2020-03-02T23:39:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.