A Study of Deep CNN Model with Labeling Noise Based on Granular-ball
Computing
- URL: http://arxiv.org/abs/2207.08810v1
- Date: Sun, 17 Jul 2022 13:58:46 GMT
- Title: A Study of Deep CNN Model with Labeling Noise Based on Granular-ball
Computing
- Authors: Dawei Dai, Donggen Li, Zhiguo Zhuang
- Abstract summary: Granular-ball computing is an efficient, robust, and scalable learning method.
In this paper, we pioneer a granular-ball neural network algorithm that adopts the idea of multi-granularity to filter label-noise samples during model training.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In supervised learning, the presence of label noise can have a significant impact
on decision making. Many classifiers do not take label noise into account when deriving their
loss functions, including logistic regression, SVM, and AdaBoost. The AdaBoost iterative
algorithm is especially vulnerable: its core idea is to continuously increase the weights of
misclassified samples, so in the presence of label noise the weights of many mislabeled samples
are driven up round after round (each misclassification multiplies a sample's weight by
exp(alpha_t) > 1), which decreases model accuracy. The learning processes of BP neural networks
and decision trees are likewise affected by label noise. Solving the label noise problem is
therefore an important element of maintaining the robustness of a network model, and it is of
great practical significance. Granular-ball computing is an important modeling method developed
in the field of granular computing in recent years, and it is an efficient, robust, and scalable
learning method. In this paper, we pioneer a granular-ball neural network algorithm that adopts
the idea of multi-granularity to filter label-noise samples during model training. This
addresses the model instability that label noise causes in deep learning, greatly reducing the
proportion of label noise in the training samples and improving the robustness of neural
network models.
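The filtering idea can be illustrated with a small, hypothetical sketch (not the authors' implementation): samples are recursively split into balls, each ball's label purity is computed, and samples that disagree with the majority label of a sufficiently pure ball are flagged as likely label noise. The splitting rule, purity threshold, and function names below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_indices(X, idx):
    """Split a ball into two children around two randomly seeded centers
    (a stand-in for the k-means-style splitting used in granular-ball methods)."""
    seeds = rng.choice(idx, size=2, replace=False)
    d = np.linalg.norm(X[idx][:, None, :] - X[seeds][None, :, :], axis=2)
    assign = d.argmin(axis=1)
    return idx[assign == 0], idx[assign == 1]

def filter_label_noise(X, y, purity_threshold=0.8, min_size=4):
    """Boolean mask of samples kept after multi-granularity filtering: balls are
    split until pure enough (or too small), then samples that disagree with the
    ball's majority label are flagged as likely label noise and dropped."""
    keep = np.zeros(len(X), dtype=bool)
    stack = [np.arange(len(X))]
    while stack:
        idx = stack.pop()
        labels, counts = np.unique(y[idx], return_counts=True)
        purity = counts.max() / len(idx)
        if purity >= purity_threshold or len(idx) <= min_size:
            keep[idx[y[idx] == labels[counts.argmax()]]] = True
        else:
            left, right = split_indices(X, idx)
            if len(left) == 0 or len(right) == 0:  # degenerate split: stop here
                keep[idx[y[idx] == labels[counts.argmax()]]] = True
            else:
                stack.extend([left, right])
    return keep

# Example: 200 two-class Gaussian points with 15% of the labels flipped
Xd = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
yd = np.repeat([0, 1], 100)
flip = rng.choice(200, size=30, replace=False)
yd[flip] = 1 - yd[flip]
mask = filter_label_noise(Xd, yd)
print(mask.sum(), "of 200 samples kept")
```

The resulting mask can be used to drop suspect samples before training the CNN, which is the sense in which filtering reduces the proportion of label noise in the training set.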
Related papers
- Analyze the Robustness of Classifiers under Label Noise [5.708964539699851]
Label noise in supervised learning, characterized by erroneous or imprecise labels, significantly impairs model performance.
This research focuses on the increasingly pertinent issue of label noise's impact on practical applications.
arXiv Detail & Related papers (2023-12-12T13:51:25Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, which avoids the previous practice of arbitrarily tuning it from a mini-batch of samples.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
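A minimal sketch of the sampling step implied by such a class-conditional noise model, assuming the transition matrix T (rows indexed by the true label) and the classifier's posterior are available; the fixed T and function name are illustrative assumptions, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed class-conditional noise transition: T[i, j] = p(noisy = j | true = i)
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

def gibbs_sample_true_label(classifier_probs, noisy_label):
    """One Gibbs-style draw of the latent true label:
    p(true = i | x, noisy) is proportional to p(true = i | x) * T[i, noisy]."""
    post = classifier_probs * T[:, noisy_label]
    post /= post.sum()
    return rng.choice(len(post), p=post)

# Example: a confident class-0 prediction with an observed noisy label of 1
print(gibbs_sample_true_label(np.array([0.7, 0.2, 0.1]), noisy_label=1))
```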
- Improving the Robustness of Summarization Models by Detecting and Removing Input Noise [50.27105057899601]
We present a large empirical study quantifying the sometimes severe loss in performance from different types of input noise for a range of datasets and model sizes.
We propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any training, auxiliary models, or even prior knowledge of the type of noise.
arXiv Detail & Related papers (2022-12-20T00:33:11Z)
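As an illustrative stand-in only, the sketch below shows the general shape of training-free, inference-time noise removal using a crude character-level scoring heuristic; the scoring rule and threshold are assumptions, not the paper's method.

```python
def noise_score(line: str) -> float:
    """Fraction of characters that are neither alphanumeric, whitespace, nor
    common punctuation -- a crude proxy for OCR-like garbling."""
    if not line:
        return 0.0
    ok = sum(c.isalnum() or c.isspace() or c in ".,;:!?'\"()-" for c in line)
    return 1.0 - ok / len(line)

def strip_noisy_lines(text: str, threshold: float = 0.3) -> str:
    """Drop input lines that look garbled before feeding them to the model."""
    return "\n".join(l for l in text.splitlines() if noise_score(l) <= threshold)

print(strip_noisy_lines("A clean sentence.\n#@%^&*@@ ~~|||~~ garbled ##"))
```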
- Establishment of Neural Networks Robust to Label Noise [0.0]
In this paper, we have examined the fundamental concept underlying related label noise approaches.
A transition matrix estimator has been created, and its effectiveness against the actual transition matrix has been demonstrated.
However, we could not demonstrate the influence of transition-matrix noise correction on robustness, because we were unable to adequately tune the complex convolutional neural network model.
arXiv Detail & Related papers (2022-11-28T13:07:23Z)
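A minimal sketch of how an estimated transition matrix is commonly used once obtained, via the standard forward loss correction; the variable names and the small T below are illustrative assumptions.

```python
import numpy as np

def forward_corrected_nll(clean_probs, noisy_label, T):
    """Forward correction: push the model's clean posterior p(true | x) through
    T to predict the *noisy* label, so training on noisy labels stays
    consistent: p(noisy = j | x) = sum_i p(true = i | x) * T[i, j]."""
    noisy_probs = T.T @ clean_probs
    return -np.log(noisy_probs[noisy_label] + 1e-12)

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])  # assumed/estimated noise transition matrix
print(forward_corrected_nll(np.array([0.95, 0.05]), noisy_label=0, T=T))
```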
- Learning to Rectify for Robust Learning with Noisy Labels [25.149277009932423]
We propose warped probabilistic inference (WarPI) to adaptively rectify the training procedure of the classification network.
We evaluate WarPI on four benchmarks of robust learning with noisy labels and achieve a new state of the art under various noise types.
arXiv Detail & Related papers (2021-11-08T02:25:50Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Robust Learning of Recurrent Neural Networks in Presence of Exogenous Noise [22.690064709532873]
We propose a tractable robustness analysis for RNN models subject to input noise.
The robustness measure can be estimated efficiently using linearization techniques.
Our proposed methodology significantly improves the robustness of recurrent neural networks.
arXiv Detail & Related papers (2021-05-03T16:45:05Z)
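A minimal sketch of a linearization-style sensitivity estimate for a vanilla RNN, using finite differences as a stand-in for the paper's analysis; the architecture, sizes, and robustness measure are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, D, T_len = 4, 3, 5
Wh = 0.5 * rng.standard_normal((H, H))
Wx = 0.5 * rng.standard_normal((H, D))
b = np.zeros(H)

def run_rnn(xs):
    """Final hidden state of a vanilla tanh RNN over the input sequence xs."""
    h = np.zeros(H)
    for x in xs:
        h = np.tanh(Wh @ h + Wx @ x + b)
    return h

def sensitivity(xs, t, eps=1e-5):
    """Spectral norm of the finite-difference Jacobian of the final state with
    respect to a perturbation of the input at step t: a linearized proxy for
    how strongly exogenous input noise at time t propagates to the output."""
    J = np.zeros((H, D))
    for d in range(D):
        xp = [x.copy() for x in xs]
        xp[t][d] += eps
        J[:, d] = (run_rnn(xp) - run_rnn(xs)) / eps
    return np.linalg.norm(J, 2)

xs = [rng.standard_normal(D) for _ in range(T_len)]
print([round(sensitivity(xs, t), 4) for t in range(T_len)])
```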
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements in robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)
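A minimal sketch of the central object in an instance-dependent noise model: a transition matrix T(x) whose entries vary with the instance. The linear parameterization below is an assumption for illustration, not the paper's model.

```python
import numpy as np

def softmax_rows(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def instance_transition(x, base_logits, W):
    """T(x)[i, j] = p(noisy = j | true = i, x): a per-instance transition
    matrix, here parameterized as base logits plus a linear term in x."""
    K = base_logits.shape[0]
    return softmax_rows(base_logits + (W @ x).reshape(K, K))

rng = np.random.default_rng(0)
K, D = 3, 4
base = 2.0 * np.eye(K)                   # mostly-clean prior on the diagonal
W = 0.1 * rng.standard_normal((K * K, D))
print(instance_transition(rng.standard_normal(D), base, W).round(2))
```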
- Robust Processing-In-Memory Neural Networks via Noise-Aware Normalization [26.270754571140735]
PIM accelerators often suffer from intrinsic noise in the physical components.
We propose a noise-agnostic method to achieve robust neural network performance against any noise setting.
arXiv Detail & Related papers (2020-07-07T06:51:28Z)
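A minimal sketch of the general idea, assuming a multiplicative Gaussian model of PIM weight noise and a simple activation renormalization; both are generic stand-ins rather than the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def pim_matmul(W, x, sigma=0.1):
    """Analog matmul with multiplicative Gaussian noise on the stored weights,
    a common first-order model of PIM device variation."""
    W_noisy = W * (1.0 + sigma * rng.standard_normal(W.shape))
    return W_noisy @ x

def noise_aware_norm(a, eps=1e-5):
    """Renormalize activations so their statistics stay stable across noise
    draws, decoupling downstream layers from the noise magnitude."""
    return (a - a.mean()) / (a.std() + eps)

W = rng.standard_normal((8, 16))
x = rng.standard_normal(16)
print(noise_aware_norm(pim_matmul(W, x)).round(2))
```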
- Rectified Meta-Learning from Noisy Labels for Robust Image-based Plant Disease Diagnosis [64.82680813427054]
Plant diseases are one of the main threats to food security and crop production.
One popular approach is to cast this problem as a leaf-image classification task, which can be addressed by powerful convolutional neural networks (CNNs).
We propose a novel framework that incorporates a rectified meta-learning module into the common CNN paradigm to train a noise-robust deep network without using extra supervision information.
arXiv Detail & Related papers (2020-03-17T09:51:30Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable, and provides parameter-efficient and robust solutions in stereo, optical flow, and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
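A minimal sketch of the inference primitive behind such a BP-Layer: one log-domain sweep of truncated max-product belief propagation on a chain. The chain structure and Potts pairwise cost are simplifying assumptions; an actual BP-Layer would realize these message updates as differentiable network layers.

```python
import numpy as np

def max_product_chain(unary, pairwise):
    """One forward sweep of max-product BP (log domain) over a chain of length
    T with K labels: m_t[j] = max_i (unary[t-1, i] + m_{t-1}[i] + pairwise[i, j]),
    returning the MAP label of the last node as an illustrative output."""
    T, K = unary.shape
    m = np.zeros(K)
    for t in range(1, T):
        scores = (unary[t - 1] + m)[:, None] + pairwise  # shape (K, K)
        m = scores.max(axis=0)
    beliefs = unary[-1] + m
    return beliefs.argmax()

rng = np.random.default_rng(0)
unary = rng.standard_normal((5, 3))
pairwise = -0.5 * (1 - np.eye(3))  # Potts-style smoothness cost
print(max_product_chain(unary, pairwise))
```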