SparseDet: Improving Sparsely Annotated Object Detection with
Pseudo-positive Mining
- URL: http://arxiv.org/abs/2201.04620v2
- Date: Sun, 27 Aug 2023 02:02:24 GMT
- Title: SparseDet: Improving Sparsely Annotated Object Detection with
Pseudo-positive Mining
- Authors: Saksham Suri, Sai Saketh Rambhatla, Rama Chellappa, Abhinav
Shrivastava
- Abstract summary: We propose an end-to-end system that learns to separate proposals into labeled and unlabeled regions using Pseudo-positive mining.
While the labeled regions are processed as usual, self-supervised learning is used to process the unlabeled regions.
We conduct exhaustive experiments on five splits of the PASCAL-VOC and COCO datasets, achieving state-of-the-art performance.
- Score: 76.95808270536318
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training with sparse annotations is known to reduce the performance of object
detectors. Previous methods have focused on proxies for missing ground truth
annotations in the form of pseudo-labels for unlabeled boxes. We observe that
existing methods suffer at higher levels of sparsity in the data due to noisy
pseudo-labels. To prevent this, we propose an end-to-end system that learns to
separate the proposals into labeled and unlabeled regions using Pseudo-positive
mining. While the labeled regions are processed as usual, self-supervised
learning is used to process the unlabeled regions thereby preventing the
negative effects of noisy pseudo-labels. This novel approach has multiple
advantages such as improved robustness to higher sparsity when compared to
existing methods. We conduct exhaustive experiments on five splits of the
PASCAL-VOC and COCO datasets, achieving state-of-the-art performance. We also
unify the various splits used across the literature for this task and present a
standardized benchmark. On average, we improve by $2.6$, $3.9$ and $9.6$ mAP
over previous state-of-the-art methods on three splits of increasing sparsity
on COCO. Our project is publicly available at
https://www.cs.umd.edu/~sakshams/SparseDet.
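The abstract describes Pseudo-positive mining only at a high level, so the sketch below is a minimal, hedged illustration of the proposal split it alludes to: proposals that overlap an annotated box are treated as labeled and receive the usual detection losses, while the rest are routed to a self-supervised objective instead of being forced to the background class. The IoU threshold, box format, and function names are illustrative assumptions, not the paper's implementation.
```python
# A minimal sketch of the proposal split described in the abstract (assumptions:
# (x1, y1, x2, y2) box format, an IoU threshold of 0.5, and illustrative names;
# this is not the paper's actual implementation).
import numpy as np

def box_iou(boxes_a: np.ndarray, boxes_b: np.ndarray) -> np.ndarray:
    """Pairwise IoU between two sets of boxes in (x1, y1, x2, y2) format."""
    area_a = (boxes_a[:, 2] - boxes_a[:, 0]) * (boxes_a[:, 3] - boxes_a[:, 1])
    area_b = (boxes_b[:, 2] - boxes_b[:, 0]) * (boxes_b[:, 3] - boxes_b[:, 1])
    lt = np.maximum(boxes_a[:, None, :2], boxes_b[None, :, :2])  # intersection top-left
    rb = np.minimum(boxes_a[:, None, 2:], boxes_b[None, :, 2:])  # intersection bottom-right
    wh = np.clip(rb - lt, 0, None)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_a[:, None] + area_b[None, :] - inter + 1e-9)

def split_proposals(proposals: np.ndarray, sparse_gt: np.ndarray, iou_thresh: float = 0.5):
    """Split proposals into a 'labeled' set (overlapping an annotated box) and an
    'unlabeled' set. Under sparse annotation the unlabeled set may still contain
    real objects, so it is not treated as background; a self-supervised objective
    would be applied to those regions instead."""
    if len(sparse_gt) == 0:
        return np.empty((0,), dtype=int), np.arange(len(proposals))
    max_iou = box_iou(proposals, sparse_gt).max(axis=1)
    labeled_idx = np.where(max_iou >= iou_thresh)[0]
    unlabeled_idx = np.where(max_iou < iou_thresh)[0]
    return labeled_idx, unlabeled_idx

# Toy usage: two proposals overlap the single annotated object, one does not.
proposals = np.array([[10, 10, 50, 50], [12, 12, 48, 52], [100, 100, 160, 160]], dtype=float)
sparse_gt = np.array([[11, 11, 49, 51]], dtype=float)
labeled, unlabeled = split_proposals(proposals, sparse_gt)
print("supervised losses on:", labeled, "| self-supervised objective on:", unlabeled)
```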
Related papers
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is the task of continually adapting a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning model with cross-entropy loss according to estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- Robust Assignment of Labels for Active Learning with Sparse and Noisy Annotations [0.17188280334580192]
Supervised classification algorithms are used to solve a growing number of real-life problems around the globe.
Unfortunately, for many tasks, acquiring good-quality annotations is infeasible or prohibitively expensive in practice.
We propose two novel annotation unification algorithms that utilize unlabeled parts of the sample space.
arXiv Detail & Related papers (2023-07-25T19:40:41Z)
- All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z)
- Class-Distribution-Aware Pseudo Labeling for Semi-Supervised Multi-Label Learning [97.88458953075205]
Pseudo-labeling has emerged as a popular and effective approach for utilizing unlabeled data.
This paper proposes a novel solution called Class-Aware Pseudo-Labeling (CAP) that performs pseudo-labeling in a class-aware manner; a generic sketch of this kind of thresholded pseudo-labeling appears after this list.
arXiv Detail & Related papers (2023-05-04T12:52:18Z)
- Dist-PU: Positive-Unlabeled Learning from a Label Distribution Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this view, we pursue consistency between the predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z)
- Multi-label Classification with Partial Annotations using Class-aware Selective Loss [14.3159150577502]
Large-scale multi-label classification datasets are commonly partially annotated.
We analyze the partial labeling problem, then propose a solution based on two key ideas.
With our novel approach, we achieve state-of-the-art results on the OpenImages dataset.
arXiv Detail & Related papers (2021-10-21T08:10:55Z)
- Learning with Noisy Labels by Targeted Relabeling [52.0329205268734]
Crowdsourcing platforms are often used to collect datasets for training deep neural networks.
We propose an approach which reserves a fraction of annotations to explicitly relabel highly probable labeling errors.
arXiv Detail & Related papers (2021-10-15T20:37:29Z)
- EvidentialMix: Learning with Combined Open-set and Closed-set Noisy Labels [30.268962418683955]
We study a new variant of the noisy-label problem that combines open-set and closed-set noisy labels.
Our results show that our method produces superior classification results and better feature representations than previous state-of-the-art methods.
arXiv Detail & Related papers (2020-11-11T11:15:32Z)
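Several of the entries above (CAP, Dist-PU, and the weakly-supervised segmentation work) revolve around pseudo-labeling unlabeled data. As a point of reference, here is a generic, hedged sketch of confidence-thresholded pseudo-labeling with per-class thresholds; the thresholding rule and all names are illustrative assumptions, not the actual algorithm of any paper listed above.
```python
# A generic sketch of confidence-thresholded pseudo-labeling with per-class
# thresholds (illustrative assumptions only; not CAP's or any listed paper's
# actual method).
import numpy as np

def pseudo_label(probs: np.ndarray, class_thresholds: np.ndarray):
    """probs: (N, C) predicted class probabilities for N unlabeled samples.
    class_thresholds: (C,) per-class confidence cutoffs.
    Returns the indices of samples that receive a pseudo-label and the labels."""
    pred_class = probs.argmax(axis=1)
    pred_conf = probs.max(axis=1)
    keep = pred_conf >= class_thresholds[pred_class]  # class-aware cutoff
    return np.where(keep)[0], pred_class[keep]

# Toy usage: the rarer class 1 gets a lower threshold so it is not starved of
# pseudo-labels, one way a class-aware scheme can differ from a global threshold.
probs = np.array([[0.95, 0.05], [0.60, 0.40], [0.30, 0.70]])
thresholds = np.array([0.90, 0.65])
idx, labels = pseudo_label(probs, thresholds)
print(idx, labels)  # sample 0 -> class 0, sample 2 -> class 1
```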
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.