A Simple Semi-Supervised Learning Framework for Object Detection
- URL: http://arxiv.org/abs/2005.04757v2
- Date: Thu, 3 Dec 2020 04:12:25 GMT
- Title: A Simple Semi-Supervised Learning Framework for Object Detection
- Authors: Kihyuk Sohn, Zizhao Zhang, Chun-Liang Li, Han Zhang, Chen-Yu Lee, and
Tomas Pfister
- Abstract summary: Semi-supervised learning (SSL) has the potential to improve the predictive performance of machine learning models using unlabeled data.
We propose STAC, a simple yet effective SSL framework for visual object detection along with a data augmentation strategy.
- Score: 55.95789931533665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning (SSL) has the potential to improve the predictive
performance of machine learning models using unlabeled data. Although there has
been remarkable recent progress, the scope of demonstration in SSL has mainly
been on image classification tasks. In this paper, we propose STAC, a simple
yet effective SSL framework for visual object detection along with a data
augmentation strategy. STAC deploys highly confident pseudo labels of localized
objects from an unlabeled image and updates the model by enforcing consistency
via strong augmentations. We propose experimental protocols to evaluate the
performance of semi-supervised object detection using MS-COCO and show the
efficacy of STAC on both MS-COCO and VOC07. On VOC07, STAC improves the
AP$^{0.5}$ from $76.30$ to $79.08$; on MS-COCO, STAC demonstrates $2{\times}$
higher data efficiency, achieving 24.38 mAP with only 5\% labeled data,
compared with a supervised baseline that reaches 23.86 mAP with 10\% labeled
data. The code is available at https://github.com/google-research/ssl_detection/.
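The abstract describes STAC's core mechanism: keep only high-confidence detections on unlabeled images as pseudo labels, then train against them under strong augmentation. A minimal sketch of the confidence-filtering step is below; the detector interface and the threshold variable name are illustrative assumptions, though the $\tau = 0.9$ default matches the paper's reported setting.

```python
# Sketch of STAC-style pseudo-label filtering (hypothetical detection format:
# each prediction is a (box, class_id, confidence) triple).

def filter_pseudo_labels(detections, tau=0.9):
    """Keep only high-confidence predictions as pseudo labels.

    Boxes scoring below tau are discarded so that noisy predictions
    do not contaminate the unsupervised training signal.
    """
    return [(box, cls) for box, cls, score in detections if score >= tau]

# Example: two raw detections on an unlabeled image.
raw = [((10, 10, 50, 50), 1, 0.97),   # confident -> kept as pseudo label
       ((20, 30, 80, 90), 3, 0.42)]   # low confidence -> dropped
pseudo = filter_pseudo_labels(raw)
print(pseudo)  # [((10, 10, 50, 50), 1)]
```

The surviving pseudo labels are then used as targets for a strongly augmented copy of the same image, and the total loss combines the supervised detection loss on labeled data with this consistency term on unlabeled data.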
Related papers
- The Devil is in the Points: Weakly Semi-Supervised Instance Segmentation
via Point-Guided Mask Representation [61.027468209465354]
We introduce a novel learning scheme named weakly semi-supervised instance segmentation (WSSIS) with point labels.
We propose a method for WSSIS that can effectively leverage the budget-friendly point labels as a powerful weak supervision source.
We conduct extensive experiments on COCO and BDD100K datasets, and the proposed method achieves promising results comparable to those of the fully-supervised model.
arXiv Detail & Related papers (2023-03-27T10:11:22Z) - ESTAS: Effective and Stable Trojan Attacks in Self-supervised Encoders
with One Target Unlabelled Sample [16.460288815336902]
ESTAS achieves a > 99% attack success rate (ASR) with one target-class sample.
Compared to prior works, ESTAS attains > 30% ASR increase and > 8.3% accuracy improvement on average.
arXiv Detail & Related papers (2022-11-20T08:58:34Z) - A semi-supervised Teacher-Student framework for surgical tool detection
and localization [2.41710192205034]
We introduce a semi-supervised learning (SSL) framework in surgical tool detection paradigm.
In the proposed work, we train a model on labeled data, which initialises the Teacher-Student joint learning.
Our results on m2cai16-tool-locations dataset indicate the superiority of our approach on different supervised data settings.
arXiv Detail & Related papers (2022-08-21T17:21:31Z) - Pseudo-Labeling Based Practical Semi-Supervised Meta-Training for Few-Shot Learning [93.63638405586354]
We propose a simple and effective meta-training framework, called pseudo-labeling based meta-learning (PLML)
Firstly, we train a classifier via common semi-supervised learning (SSL) and use it to obtain the pseudo-labels of unlabeled data.
We build few-shot tasks from labeled and pseudo-labeled data and design a novel finetuning method with feature smoothing and noise suppression.
arXiv Detail & Related papers (2022-07-14T10:53:53Z) - Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data would be prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z) - Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for
Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z) - Humble Teachers Teach Better Students for Semi-Supervised Object
Detection [7.764145630268344]
Our model achieves COCO-style AP of 53.04% on VOC07 val set, 8.4% better than STAC, when using VOC12 as unlabeled data.
It also reaches 53.8% AP on MS-COCO test-dev with 3.1% gain over the fully supervised ResNet-152 Cascaded R-CNN.
arXiv Detail & Related papers (2021-06-19T09:05:10Z) - Instant-Teaching: An End-to-End Semi-Supervised Object Detection
Framework [14.914115746675176]
Semi-supervised object detection can leverage unlabeled data to improve the model performance.
We propose Instant-Teaching, which uses instant pseudo labeling with extended weak-strong data augmentations for teaching during each training iteration.
Our method surpasses state-of-the-art methods by 4.2 mAP on MS-COCO when using $2\%$ labeled data.
arXiv Detail & Related papers (2021-03-21T14:03:36Z) - Unbiased Teacher for Semi-Supervised Object Detection [50.0087227400306]
We revisit the Semi-Supervised Object Detection (SS-OD) and identify the pseudo-labeling bias issue in SS-OD.
We introduce Unbiased Teacher, a simple yet effective approach that jointly trains a student and a gradually progressing teacher in a mutually-beneficial manner.
arXiv Detail & Related papers (2021-02-18T17:02:57Z) - Unsupervised Semantic Aggregation and Deformable Template Matching for
Semi-Supervised Learning [34.560447389853614]
Unsupervised semantic aggregation based on T-MI loss is explored to generate semantic labels for unlabeled data.
A feature pool that stores the labeled samples is dynamically updated to assign proxy labels for unlabeled data.
Experiments and analysis validate that USADTM achieves top performance.
arXiv Detail & Related papers (2020-10-12T08:17:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.