On the Effects of Different Types of Label Noise in Multi-Label Remote
Sensing Image Classification
- URL: http://arxiv.org/abs/2207.13975v1
- Date: Thu, 28 Jul 2022 09:38:30 GMT
- Title: On the Effects of Different Types of Label Noise in Multi-Label Remote
Sensing Image Classification
- Authors: Tom Burgert, Mahdyar Ravanbakhsh, Begüm Demir
- Abstract summary: The development of accurate methods for multi-label classification (MLC) of remote sensing (RS) images is one of the most important research topics in RS.
The use of deep neural networks, which require a large number of reliable training images annotated with multiple land-cover class labels (multi-labels), has become popular in RS.
In this paper, we investigate three different noise-robust computer vision (CV) single-label classification (SLC) methods and adapt them to be robust to multi-label noise scenarios in RS.
- Score: 1.6758573326215689
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of accurate methods for multi-label classification (MLC) of
remote sensing (RS) images is one of the most important research topics in RS.
To address MLC problems, the use of deep neural networks, which require a large
number of reliable training images annotated with multiple land-cover class
labels (multi-labels), has become popular in RS. However, collecting such
annotations is time-consuming and costly. A common procedure to obtain
annotations at zero labeling cost is to rely on thematic products or
crowdsourced labels. As a drawback, these procedures come with the risk of
label noise that can distort the learning process of the MLC algorithms. In the
literature, most label-noise-robust methods are designed for single-label
classification (SLC) problems in computer vision (CV), where each image is
annotated by a single label. Unlike SLC, label noise in MLC can be associated
with: 1) subtractive label-noise (a land cover class label is not assigned to
an image while that class is present in the image); 2) additive label-noise (a
land cover class label is assigned to an image although that class is not
present in the given image); and 3) mixed label-noise (a combination of both).
In this paper, we investigate three different noise-robust CV SLC methods and
adapt them to be robust to multi-label noise scenarios in RS. In our
experiments, we study the effects of the different types of multi-label noise
and rigorously evaluate the adapted methods. To this end, we also introduce a
synthetic multi-label noise injection strategy that simulates operational
scenarios more adequately than the uniform label noise injection strategy, in
which the labels of absent and present classes are flipped with uniform
probability. Further, we study the relevance of different evaluation
metrics in MLC problems under noisy multi-labels.
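To make the three noise types concrete, the following is a minimal sketch of uniform multi-label noise injection on a binary label matrix. It illustrates the uniform flipping baseline described above, not the paper's proposed injection strategy; the function name and rates are assumptions.

```python
import numpy as np

def inject_label_noise(y, noise_type="mixed", rate=0.1, rng=None):
    """Flip entries of a binary multi-label matrix y (samples x classes).

    subtractive: present labels (1) are dropped to 0
    additive:    absent labels (0) are wrongly set to 1
    mixed:       both directions
    Each eligible entry flips independently with probability `rate`,
    i.e. the uniform injection strategy the abstract contrasts against.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    y_noisy = y.copy()
    flip = rng.random(y.shape) < rate
    if noise_type == "subtractive":
        y_noisy[(y == 1) & flip] = 0
    elif noise_type == "additive":
        y_noisy[(y == 0) & flip] = 1
    elif noise_type == "mixed":
        y_noisy[flip] = 1 - y_noisy[flip]
    else:
        raise ValueError(f"unknown noise type: {noise_type}")
    return y_noisy

# Toy example: 3 images, 4 land-cover classes.
y = np.array([[1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 1, 0, 1]])
print(inject_label_noise(y, "subtractive", rate=0.3))
```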
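On the metrics point, a small hedged example (not the paper's experimental setup) shows why the choice of evaluation metric matters in MLC: micro- and macro-averaged F1 can diverge sharply when errors or noise concentrate in rare classes.

```python
import numpy as np
from sklearn.metrics import f1_score

# Toy multi-label predictions: the frequent class 0 is always correct,
# the rare class 1 is always missed.
y_true = np.array([[1, 0], [1, 0], [1, 0], [1, 1]])
y_pred = np.array([[1, 0], [1, 0], [1, 0], [1, 0]])

# Micro-F1 pools all label decisions, so the frequent class dominates;
# macro-F1 averages per-class scores and exposes the missed rare class.
print("micro-F1:", f1_score(y_true, y_pred, average="micro"))                   # ~0.89
print("macro-F1:", f1_score(y_true, y_pred, average="macro", zero_division=0))  # 0.5
```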
Related papers
- Positive Label Is All You Need for Multi-Label Classification [3.354528906571718]
Multi-label classification (MLC) faces challenges from label noise in training data.
Our paper addresses label noise in MLC by introducing a positive and unlabeled multi-label classification (PU-MLC) method.
PU-MLC employs positive-unlabeled learning, training the model with only positive labels and unlabeled data.
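To illustrate the positive-unlabeled idea (a hedged sketch under assumed names, not PU-MLC's actual objective): observed positives contribute the usual log-loss, while unlabeled entries are treated as weak negatives with a small weight rather than as hard negatives.

```python
import numpy as np

def pu_style_bce(probs, y_pos, unlabeled_weight=0.1, eps=1e-7):
    """Sketch of a positive-unlabeled multi-label loss.

    probs: predicted probabilities, shape (samples, classes)
    y_pos: 1 where a positive label is observed, 0 where unlabeled
    Unlabeled entries are *not* assumed negative; they are softly
    pushed toward 0 with a small weight instead.
    """
    pos_term = -y_pos * np.log(probs + eps)
    unl_term = -(1 - y_pos) * unlabeled_weight * np.log(1 - probs + eps)
    return (pos_term + unl_term).mean()

probs = np.array([[0.9, 0.2], [0.4, 0.7]])
y_pos = np.array([[1, 0], [0, 1]])  # 0 means unlabeled, not negative
print(pu_style_bce(probs, y_pos))
```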
arXiv Detail & Related papers (2023-06-28T08:44:00Z)
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label models with partial positive labels (MLR-PPL) is attracting increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- Label Structure Preserving Contrastive Embedding for Multi-Label Learning with Missing Labels [30.79809627981242]
We introduce a label correction mechanism to identify missing labels, then define a unique contrastive loss for multi-label image classification with missing labels (CLML).
Different from existing multi-label CL losses, CLML also preserves low-rank global and local label dependencies in the latent representation space.
The proposed strategy has been shown to improve the classification performance of the ResNet-101 model by margins of 1.2%, 1.6%, and 1.3%, respectively, on three standard datasets.
arXiv Detail & Related papers (2022-09-03T02:44:07Z)
- Large Loss Matters in Weakly Supervised Multi-Label Classification [50.262533546999045]
We first regard unobserved labels as negative labels, casting the weakly supervised multi-label classification (WSML) task into noisy multi-label classification.
We propose novel methods for WSML that reject or correct large-loss samples to prevent the model from memorizing the noisy labels.
Our methodology works well in practice, validating that properly handling large losses matters in weakly supervised multi-label classification.
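A minimal sketch of the rejection variant, under assumed names and an assumed quantile-based threshold (the paper also studies correction): assume unobserved labels are negative, compute per-entry BCE, and drop the largest losses, which are likely false negatives, from the average.

```python
import numpy as np

def large_loss_rejection(probs, y_assumed_neg, reject_frac=0.05, eps=1e-7):
    """Sketch of large-loss rejection for weakly supervised MLC.

    y_assumed_neg: labels with all unobserved entries set to 0.
    Entries whose BCE loss falls in the top `reject_frac` are treated
    as likely false negatives and masked out of the mean.
    """
    loss = -(y_assumed_neg * np.log(probs + eps)
             + (1 - y_assumed_neg) * np.log(1 - probs + eps))
    keep = loss <= np.quantile(loss, 1.0 - reject_frac)
    return (loss * keep).sum() / keep.sum()

probs = np.array([[0.95, 0.10, 0.80],
                  [0.20, 0.90, 0.60]])
y = np.array([[1, 0, 0],   # third entry unobserved, assumed negative
              [0, 1, 0]])
print(large_loss_rejection(probs, y, reject_frac=0.2))
```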
arXiv Detail & Related papers (2022-06-08T08:30:24Z)
- Dual-Perspective Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels [70.36722026729859]
We propose a dual-perspective semantic-aware representation blending (DSRB) that blends multi-granularity category-specific semantic representation across different images.
The proposed DSRB consistently outperforms current state-of-the-art algorithms on all label proportion settings.
arXiv Detail & Related papers (2022-05-26T00:33:44Z)
- Semantic-Aware Representation Blending for Multi-Label Image Recognition with Partial Labels [86.17081952197788]
We propose to blend category-specific representation across different images to transfer information of known labels to complement unknown labels.
Experiments on the MS-COCO, Visual Genome, and Pascal VOC 2007 datasets show that the proposed SARB framework obtains superior performance over current leading competitors.
arXiv Detail & Related papers (2022-03-04T07:56:16Z)
- Evaluating Multi-label Classifiers with Noisy Labels [0.7868449549351487]
In the real world, it is more common to deal with noisy datasets than clean datasets.
We present a Context-Based Multi-Label-Classifier (CbMLC) that effectively handles noisy labels.
We show that CbMLC yields substantial improvements over previous methods in most cases.
arXiv Detail & Related papers (2021-02-16T19:50:52Z)
- A Second-Order Approach to Learning with Instance-Dependent Label Noise [58.555527517928596]
The presence of label noise often misleads the training of deep neural networks.
We show that the errors in human-annotated labels are more likely to be dependent on the difficulty levels of tasks.
arXiv Detail & Related papers (2020-12-22T06:36:58Z)
- CCML: A Novel Collaborative Learning Model for Classification of Remote Sensing Images with Noisy Multi-Labels [0.9995347522610671]
We propose a novel Consensual Collaborative Multi-Label Learning (CCML) method to alleviate the adverse effects of multi-label noise during the training phase of the CNN model.
CCML identifies, ranks, and corrects noisy multi-labels in RS images based on four main modules.
arXiv Detail & Related papers (2020-12-19T15:42:24Z)
- Attention-Aware Noisy Label Learning for Image Classification [97.26664962498887]
Deep convolutional neural networks (CNNs) learned on large-scale labeled samples have achieved remarkable progress in computer vision.
The cheapest way to obtain a large body of labeled visual data is to crawl from websites with user-supplied labels, such as Flickr.
This paper proposes the attention-aware noisy label learning approach to improve the discriminative capability of the network trained on datasets with potential label noise.
arXiv Detail & Related papers (2020-09-30T15:45:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.