Penalizing Proposals using Classifiers for Semi-Supervised Object Detection
- URL: http://arxiv.org/abs/2205.13219v1
- Date: Thu, 26 May 2022 08:30:48 GMT
- Title: Penalizing Proposals using Classifiers for Semi-Supervised Object Detection
- Authors: Somnath Hazra, Pallab Dasgupta
- Abstract summary: We propose a modified loss function to train on large silver-standard annotated sets generated by a weak annotator.
We include a confidence metric associated with the annotation as an additional term in the loss function, signifying the quality of the annotation.
In comparison with the baseline where no confidence metric is used, we achieved a 4% gain in mAP with 25% labeled data and 10% gain in mAP with 50% labeled data.
- Score: 2.8522223112994833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Obtaining gold standard annotated data for object detection is often costly,
involving human-level effort. Semi-supervised object detection algorithms solve
the problem with a small amount of gold-standard labels and a large unlabelled
dataset used to generate silver-standard labels. But training on the
silver-standard labels does not produce good results, because they are
machine-generated annotations. In this work, we design a modified loss function
to train on large silver-standard annotated sets generated by a weak annotator.
We include a confidence metric associated with the annotation as an additional
term in the loss function, signifying the quality of the annotation. We test
the effectiveness of our approach on various test sets and use numerous
variations to compare the results with some of the current approaches to object
detection. In comparison with the baseline where no confidence metric is used,
we achieved a 4% gain in mAP with 25% labeled data and 10% gain in mAP with
50% labeled data by using the proposed confidence metric.
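The confidence-weighted loss described in the abstract could be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the function name, plain-Python form, and mean reduction are all assumptions.

```python
def confidence_weighted_loss(per_box_losses, confidences):
    """Scale each annotation's loss term by the annotator's confidence
    c_i in [0, 1], so low-quality silver-standard boxes contribute
    less to the total loss. Gold-standard boxes carry c_i = 1.0 and
    reduce to the ordinary (unweighted) loss."""
    assert len(per_box_losses) == len(confidences)
    weighted = [loss * c for loss, c in zip(per_box_losses, confidences)]
    # Mean over annotations; guard against an empty batch.
    return sum(weighted) / max(len(weighted), 1)
```

Under this reading, a silver-standard box whose annotator confidence is 0.5 contributes half the gradient signal of an equally hard gold-standard box.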
Related papers
- Self Adaptive Threshold Pseudo-labeling and Unreliable Sample Contrastive Loss for Semi-supervised Image Classification [6.920336485308536]
Pseudo-labeling-based semi-supervised approaches suffer from two problems in image classification.
We develop a self-adaptive threshold pseudo-labeling strategy, in which the thresholds for each class can be dynamically adjusted to increase the number of reliable samples.
In order to effectively utilise unlabeled data with confidence below the thresholds, we propose an unreliable sample contrastive loss.
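A per-class adaptive threshold of the kind this summary describes could look roughly like the sketch below. This is a hypothetical illustration under assumed design choices (an exponential moving average of per-class mean confidence, scaled by a base threshold); the paper's actual update rule is not given here.

```python
def update_class_thresholds(thresholds, class_confidences,
                            base=0.95, momentum=0.9):
    """Adapt the pseudo-label threshold per class: classes the model
    predicts confidently keep a high threshold, while harder classes
    get a lower one, admitting more reliable samples for them."""
    new = {}
    for cls, confs in class_confidences.items():
        mean_conf = sum(confs) / len(confs)
        # EMA of the class's mean prediction confidence.
        ema = momentum * thresholds.get(cls, mean_conf) \
            + (1 - momentum) * mean_conf
        new[cls] = base * ema
    return new
```

Called once per iteration with the batch's softmax confidences grouped by predicted class, this lowers thresholds for under-confident classes instead of using one fixed global cutoff.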
arXiv Detail & Related papers (2024-07-04T03:04:56Z)
- Differential Analysis of Triggers and Benign Features for Black-Box DNN Backdoor Detection [18.481370450591317]
This paper proposes a data-efficient detection method for deep neural networks against backdoor attacks under a black-box scenario.
To measure the effects of triggers and benign features on determining the backdoored network output, we introduce five metrics.
We show the efficacy of our methodology through a broad range of backdoor attacks, including ablation studies and comparison to existing approaches.
arXiv Detail & Related papers (2023-07-11T16:39:43Z)
- Identifying Label Errors in Object Detection Datasets by Loss Inspection [4.442111891959355]
We introduce a benchmark for label error detection methods on object detection datasets.
We simulate four different types of randomly introduced label errors on train and test sets of well-labeled object detection datasets.
arXiv Detail & Related papers (2023-03-13T10:54:52Z)
- Guiding Pseudo-labels with Uncertainty Estimation for Test-Time Adaptation [27.233704767025174]
Test-Time Adaptation (TTA) is a specific case of Unsupervised Domain Adaptation (UDA) where a model is adapted to a target domain without access to source data.
We propose a novel approach for the TTA setting based on a loss reweighting strategy that brings robustness against the noise that inevitably affects the pseudo-labels.
arXiv Detail & Related papers (2023-03-07T10:04:55Z)
- Spread Spurious Attribute: Improving Worst-group Accuracy with Spurious Attribute Estimation [72.92329724600631]
We propose a pseudo-attribute-based algorithm, coined Spread Spurious Attribute, for improving the worst-group accuracy.
Our experiments on various benchmark datasets show that our algorithm consistently outperforms the baseline methods.
We also demonstrate that the proposed SSA can achieve comparable performances to methods using full (100%) spurious attribute supervision.
arXiv Detail & Related papers (2022-04-05T09:08:30Z)
- Learning with Noisy Labels by Targeted Relabeling [52.0329205268734]
Crowdsourcing platforms are often used to collect datasets for training deep neural networks.
We propose an approach which reserves a fraction of annotations to explicitly relabel highly probable labeling errors.
arXiv Detail & Related papers (2021-10-15T20:37:29Z)
- Rethinking Pseudo Labels for Semi-Supervised Object Detection [84.697097472401]
We introduce certainty-aware pseudo labels tailored for object detection.
We dynamically adjust the thresholds used to generate pseudo labels and reweight loss functions for each category to alleviate the class imbalance problem.
Our approach improves supervised baselines by up to 10% AP using only 1-10% labeled data from COCO.
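The per-category loss reweighting this entry mentions for countering class imbalance could be sketched as inverse-frequency weights over the pseudo-label counts. This is a hypothetical illustration (the normalization to mean 1 is an assumed convention), not the paper's actual scheme.

```python
def class_weights(pseudo_label_counts):
    """Give rare classes larger loss weights: weight is inversely
    proportional to a class's share of the pseudo-labels, then
    normalized so the weights average to 1."""
    total = sum(pseudo_label_counts.values())
    n = len(pseudo_label_counts)
    raw = {c: total / (n * k) for c, k in pseudo_label_counts.items()}
    mean = sum(raw.values()) / n
    return {c: w / mean for c, w in raw.items()}
```

With such weights, a class holding 10% of the pseudo-labels contributes as much to the reweighted loss as a class holding 90%, so frequent classes cannot dominate training.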
arXiv Detail & Related papers (2021-06-01T01:32:03Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
- Learning a Unified Sample Weighting Network for Object Detection [113.98404690619982]
Region sampling or weighting is critically important to the success of modern region-based object detectors.
We argue that sample weighting should be data-dependent and task-dependent.
We propose a unified sample weighting network to predict a sample's task weights.
arXiv Detail & Related papers (2020-06-11T16:19:16Z)
- Evaluating Models' Local Decision Boundaries via Contrast Sets [119.38387782979474]
We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data.
We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets.
Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets.
arXiv Detail & Related papers (2020-04-06T14:47:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.