Co-mining: Self-Supervised Learning for Sparsely Annotated Object
Detection
- URL: http://arxiv.org/abs/2012.01950v1
- Date: Thu, 3 Dec 2020 14:23:43 GMT
- Title: Co-mining: Self-Supervised Learning for Sparsely Annotated Object
Detection
- Authors: Tiancai Wang, Tong Yang, Jiale Cao, Xiangyu Zhang
- Abstract summary: We propose a simple but effective mechanism, called Co-mining, for sparsely annotated object detection.
In our Co-mining, two branches of a Siamese network predict the pseudo-label sets for each other.
Experiments are performed on the MS COCO dataset with three different sparsely annotated settings.
- Score: 29.683119976550007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detectors usually achieve promising results with the supervision of
complete instance annotations. However, their performance is far from
satisfactory with sparse instance annotations. Most existing methods for
sparsely annotated object detection either re-weight the loss of hard negative
samples or convert the unlabeled instances into ignored regions to reduce the
interference of false negatives. We argue that these strategies are
insufficient since they can at most alleviate the negative effect caused by
missing annotations. In this paper, we propose a simple but effective
mechanism, called Co-mining, for sparsely annotated object detection. In our
Co-mining, two branches of a Siamese network predict the pseudo-label sets for
each other. To enhance multi-view learning and better mine unlabeled instances,
the original image and corresponding augmented image are used as the inputs of
two branches of the Siamese network, respectively. Co-mining can serve as a
general training mechanism applicable to most modern object detectors.
Experiments are performed on the MS COCO dataset with three different sparsely
annotated settings using two typical frameworks: anchor-based detector
RetinaNet and anchor-free detector FCOS. Experimental results show that our
Co-mining with RetinaNet achieves improvements of 1.4%-2.1% over
different baselines and surpasses existing methods under the same sparsely
annotated setting.
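
Based on the abstract alone, one Co-mining training step might look like the minimal PyTorch sketch below. The `detector` interface (a call returning per-detection dicts and a `.loss()` method), the `merge_labels` and `detach_pred` helpers, and the `SCORE_THRESH` value are all illustrative assumptions, not the authors' released implementation.

```python
import torch

SCORE_THRESH = 0.5  # assumed confidence threshold for mining pseudo labels


def detach_pred(p):
    # Detach tensors so no gradient flows through pseudo-label targets.
    return {k: v.detach() if torch.is_tensor(v) else v for k, v in p.items()}


def merge_labels(sparse_gt, pseudo):
    # Naive merge of sparse ground truth with mined pseudo labels. In
    # practice, pseudo boxes overlapping existing annotations would be
    # filtered (e.g. by IoU) before being added.
    return list(sparse_gt) + list(pseudo)


def co_mining_step(detector, image, aug_image, sparse_gt, optimizer):
    # Branch 1 sees the original image, branch 2 the augmented view;
    # the two branches share weights (Siamese network).
    preds_orig = detector(image)
    preds_aug = detector(aug_image)

    with torch.no_grad():
        # Confident detections from each branch become pseudo labels.
        mined_orig = [detach_pred(p) for p in preds_orig
                      if p["score"] > SCORE_THRESH]
        mined_aug = [detach_pred(p) for p in preds_aug
                     if p["score"] > SCORE_THRESH]

    # Cross supervision: each branch is trained against the sparse
    # annotations completed by the *other* branch's mined labels.
    loss = (detector.loss(preds_orig, merge_labels(sparse_gt, mined_aug))
            + detector.loss(preds_aug, merge_labels(sparse_gt, mined_orig)))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the paper's experiments the detector would be an anchor-based RetinaNet or an anchor-free FCOS; the sketch above is framework-agnostic and only illustrates the cross-branch pseudo-labeling idea.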