A Benchmark of Long-tailed Instance Segmentation with Noisy Labels
- URL: http://arxiv.org/abs/2211.13435v2
- Date: Sat, 15 Jul 2023 08:42:40 GMT
- Title: A Benchmark of Long-tailed Instance Segmentation with Noisy Labels
- Authors: Guanlin Li, Guowen Xu, Tianwei Zhang
- Abstract summary: In this paper, we consider the instance segmentation task on a long-tailed dataset, which contains label noise.
We propose a new dataset, which is a large vocabulary long-tailed dataset containing label noise for instance segmentation.
The results indicate that noise in the training dataset hampers the model's learning of rare categories and decreases overall performance.
- Score: 14.977028531774945
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we consider the instance segmentation task on a long-tailed
dataset that contains label noise, i.e., some of the annotations are
incorrect. Two main reasons make this setting realistic. First,
datasets collected from the real world usually follow a long-tailed distribution.
Second, in instance segmentation datasets, since one image contains many
instances and some of them are tiny, it is easy to introduce noise into the
annotations. Specifically, we propose a new dataset: a large-vocabulary,
long-tailed dataset containing label noise for instance
segmentation. Furthermore, we evaluate previously proposed instance segmentation
algorithms on this dataset. The results indicate that noise in the training
dataset hampers the model in learning rare categories and decreases the
overall performance, inspiring us to explore more effective approaches to
this practical challenge. The code and dataset are available at
https://github.com/GuanlinLee/Noisy-LVIS.
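The abstract describes a dataset built by injecting label noise into a long-tailed distribution. The paper's actual Noisy-LVIS construction is not detailed here, so the following is only a minimal toy sketch under a common assumption: symmetric label noise, where each annotation is flipped to a uniformly random other class with some probability. It illustrates why such noise hits rare categories hardest: flips from the many head-class annotations flood the tail classes with spurious labels.

```python
import random
from collections import Counter

def inject_label_noise(labels, num_classes, noise_rate, seed=0):
    """Flip each label to a uniformly random *other* class with
    probability `noise_rate` (symmetric label noise)."""
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            y = rng.choice([c for c in range(num_classes) if c != y])
        noisy.append(y)
    return noisy

# A toy long-tailed label set: class 0 is the head, class 4 the tail.
labels = [0] * 1000 + [1] * 300 + [2] * 100 + [3] * 30 + [4] * 10
noisy = inject_label_noise(labels, num_classes=5, noise_rate=0.2)

clean_counts = Counter(labels)
noisy_counts = Counter(noisy)
# Uniform flipping moves mass from frequent classes into rare ones,
# so the observed annotations of a tail class become mostly noise.
```

With a 20% noise rate, the ten genuine class-4 annotations are swamped by flips arriving from the 1,430 other instances, which matches the paper's observation that noise especially hampers learning of rare categories.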
Related papers
- Scribbles for All: Benchmarking Scribble Supervised Segmentation Across Datasets [51.74296438621836]
We introduce Scribbles for All, a label and training data generation algorithm for semantic segmentation trained on scribble labels.
The main limitation of scribbles as source for weak supervision is the lack of challenging datasets for scribble segmentation.
Scribbles for All provides scribble labels for several popular segmentation datasets and provides an algorithm to automatically generate scribble labels for any dataset with dense annotations.
arXiv Detail & Related papers (2024-08-22T15:29:08Z)
- Label-Noise Learning with Intrinsically Long-Tailed Data [65.41318436799993]
We propose a learning framework for label-noise learning with intrinsically long-tailed data.
Specifically, we propose two-stage bi-dimensional sample selection (TABASCO) to better separate clean samples from noisy samples.
arXiv Detail & Related papers (2022-08-21T07:47:05Z)
- Iterative Learning for Instance Segmentation [0.0]
State-of-the-art deep neural network models require large amounts of labeled data in order to perform well in this task.
We propose, for the first time, an iterative learning and annotation method that can detect, segment, and annotate instances in datasets composed of multiple similar objects.
Experiments on two different datasets show the validity of the approach in different applications related to visual inspection.
arXiv Detail & Related papers (2022-02-18T10:25:02Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves much higher classification performance than previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- Addressing out-of-distribution label noise in webly-labelled data [8.625286650577134]
Data gathering and annotation using a search engine is a simple alternative to generating a fully human-annotated dataset.
Although web crawling is very time-efficient, some of the retrieved images are unavoidably noisy.
Designing robust algorithms for training on noisy data gathered from the web is an important research direction.
arXiv Detail & Related papers (2021-10-26T13:38:50Z)
- Learning with Noisy Labels by Targeted Relabeling [52.0329205268734]
Crowdsourcing platforms are often used to collect datasets for training deep neural networks.
We propose an approach which reserves a fraction of annotations to explicitly relabel highly probable labeling errors.
arXiv Detail & Related papers (2021-10-15T20:37:29Z)
- EvidentialMix: Learning with Combined Open-set and Closed-set Noisy Labels [30.268962418683955]
We study a new variant of the noisy label problem that combines the open-set and closed-set noisy labels.
Our results show that our method produces superior classification results and better feature representations than previous state-of-the-art methods.
arXiv Detail & Related papers (2020-11-11T11:15:32Z)
- The Devil is in Classification: A Simple Framework for Long-tail Object Detection and Instance Segmentation [93.17367076148348]
We investigate the performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tailed LVIS dataset.
We unveil that a major cause is the inaccurate classification of object proposals.
We propose a simple calibration framework to more effectively alleviate classification head bias with a bi-level class balanced sampling approach.
arXiv Detail & Related papers (2020-07-23T12:49:07Z)
- DenoiSeg: Joint Denoising and Segmentation [75.91760529986958]
We propose DenoiSeg, a new method that can be trained end-to-end on only a few annotated ground truth segmentations.
We achieve this by extending Noise2Void, a self-supervised denoising scheme that can be trained on noisy images alone, to also predict dense 3-class segmentations.
arXiv Detail & Related papers (2020-05-06T17:42:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.