Pseudo-IoU: Improving Label Assignment in Anchor-Free Object Detection
- URL: http://arxiv.org/abs/2104.14082v1
- Date: Thu, 29 Apr 2021 02:48:47 GMT
- Title: Pseudo-IoU: Improving Label Assignment in Anchor-Free Object Detection
- Authors: Jiachen Li, Bowen Cheng, Rogerio Feris, Jinjun Xiong, Thomas S. Huang,
Wen-Mei Hwu and Humphrey Shi
- Abstract summary: Current anchor-free object detectors are quite simple and effective yet lack accurate label assignment methods.
We present Pseudo-Intersection-over-Union (Pseudo-IoU): a simple metric that brings a more standardized and accurate assignment rule into anchor-free object detection frameworks.
Our method achieves comparable performance to other recent state-of-the-art anchor-free methods without bells and whistles.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current anchor-free object detectors are quite simple and effective yet lack
accurate label assignment methods, which limits their potential in competing
with classic anchor-based models that are supported by well-designed assignment
methods based on the Intersection-over-Union (IoU) metric. In this paper, we
present Pseudo-Intersection-over-Union (Pseudo-IoU): a simple metric
that brings a more standardized and accurate assignment rule into anchor-free
object detection frameworks without any additional computational cost or extra
parameters for training and testing, making it possible to further improve
anchor-free object detection by utilizing training samples of good quality
under effective assignment rules that have been previously applied in
anchor-based methods. By incorporating the Pseudo-IoU metric into an end-to-end
single-stage anchor-free object detection framework, we observe consistent
improvements in performance on general object detection benchmarks such
as PASCAL VOC and MSCOCO. Our method (single-model and single-scale) also
achieves comparable performance to other recent state-of-the-art anchor-free
methods without bells and whistles. Our code is based on the mmdetection toolbox
and will be made publicly available at
https://github.com/SHI-Labs/Pseudo-IoU-for-Anchor-Free-Object-Detection.
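The IoU-based assignment rule the abstract refers to can be sketched as follows. This is a minimal illustration only: it assumes a pseudo-box of the ground truth's size is centered on each anchor-free point and scored with standard IoU, and the `pseudo_box` construction and the 0.5/0.4 thresholds are assumptions for illustration, not the paper's exact Pseudo-IoU definition.

```python
def iou(box_a, box_b):
    """IoU between two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def pseudo_box(point, gt_box):
    """Center a box of the ground truth's size on the anchor-free point
    (an illustrative construction, not the paper's definition)."""
    w = gt_box[2] - gt_box[0]
    h = gt_box[3] - gt_box[1]
    cx, cy = point
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def assign_labels(points, gt_box, pos_thr=0.5, neg_thr=0.4):
    """Label each point positive/negative/ignore by IoU thresholds,
    mirroring classic anchor-based assignment rules."""
    labels = []
    for p in points:
        score = iou(pseudo_box(p, gt_box), gt_box)
        if score >= pos_thr:
            labels.append("pos")
        elif score < neg_thr:
            labels.append("neg")
        else:
            labels.append("ignore")
    return labels
```

Because the pseudo-box reuses only the ground truth's geometry, this scoring adds no learnable parameters, matching the abstract's claim of no extra parameters for training or testing.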
Related papers
- Leveraging Anchor-based LiDAR 3D Object Detection via Point Assisted Sample Selection
This paper introduces a new training sample selection method that utilizes point cloud distribution for anchor sample quality measurement.
Experimental results demonstrate that applying PASS elevates the average precision of anchor-based LiDAR 3D object detectors to a new state of the art.
arXiv Detail & Related papers (2024-03-04T12:20:40Z)
- Exploiting Low-confidence Pseudo-labels for Source-free Object Detection
Source-free object detection (SFOD) aims to adapt a source-trained detector to an unlabeled target domain without access to the labeled source data.
Current SFOD methods utilize a threshold-based pseudo-label approach in the adaptation phase.
We propose a new approach to take full advantage of pseudo-labels by introducing high and low confidence thresholds.
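The high/low threshold idea above can be sketched as a simple filter over detector outputs; the 0.7/0.3 values and the treatment of each bucket are illustrative assumptions, not the paper's exact recipe.

```python
def split_pseudo_labels(predictions, high_thr=0.7, low_thr=0.3):
    """Split (box, score) predictions into confident pseudo-labels,
    low-confidence candidates, and discarded background."""
    confident, uncertain = [], []
    for box, score in predictions:
        if score >= high_thr:
            confident.append(box)   # used as standard pseudo-labels
        elif score >= low_thr:
            uncertain.append(box)   # kept for weaker / auxiliary supervision
        # scores below low_thr are treated as background and dropped
    return confident, uncertain
```

The point of the two thresholds is that predictions between them, which a single-threshold scheme would discard, can still carry useful supervisory signal.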
arXiv Detail & Related papers (2023-10-19T12:59:55Z)
- Dense Learning based Semi-Supervised Object Detection
Semi-supervised object detection (SSOD) aims to facilitate the training and deployment of object detectors with the help of a large amount of unlabeled data.
In this paper, we propose a DenSe Learning based anchor-free SSOD algorithm.
Experiments are conducted on MS-COCO and PASCAL-VOC, and the results show that our proposed DSL method records new state-of-the-art SSOD performance.
arXiv Detail & Related papers (2022-04-15T02:31:02Z)
- Dynamic Anchor Learning for Arbitrary-Oriented Object Detection
Arbitrary-oriented objects widely appear in natural scenes, aerial photographs, remote sensing images, etc.
Current rotation detectors use plenty of anchors with different orientations to achieve spatial alignment with ground truth boxes.
We propose a dynamic anchor learning (DAL) method, which utilizes the newly defined matching degree.
arXiv Detail & Related papers (2020-12-08T01:30:06Z)
- Probabilistic Anchor Assignment with IoU Prediction for Object Detection
In object detection, determining which anchors to assign as positive or negative samples, known as anchor assignment, has been revealed as a core procedure that can significantly affect a model's performance.
We propose a novel anchor assignment strategy that adaptively separates anchors into positive and negative samples for a ground truth bounding box according to the model's learning status.
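An adaptive positive/negative split like the one described above can be sketched with a tiny 1-D clustering of per-anchor scores. This 2-means stand-in is an illustrative assumption; the paper fits a probability distribution to the anchor scores rather than using this exact procedure.

```python
def adaptive_split(scores, iters=10):
    """Find a positive/negative boundary that adapts to the anchor score
    distribution, instead of a fixed IoU threshold. Runs a simple 1-D
    2-means: alternately threshold at the midpoint of the two cluster
    means, then recompute the means."""
    lo, hi = min(scores), max(scores)
    for _ in range(iters):
        thr = (lo + hi) / 2
        neg = [s for s in scores if s < thr]
        pos = [s for s in scores if s >= thr]
        if not neg or not pos:
            break  # degenerate split; keep the current estimate
        lo = sum(neg) / len(neg)
        hi = sum(pos) / len(pos)
    return (lo + hi) / 2
```

The returned threshold moves with the model's learning status: early in training, when most anchors score low, the boundary drops accordingly rather than starving the positive set.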
arXiv Detail & Related papers (2020-07-16T04:26:57Z)
- Ocean: Object-aware Anchor-free Tracking
The regression network in anchor-based methods is only trained on the positive anchor boxes.
We propose a novel object-aware anchor-free network to address this issue.
Our anchor-free tracker achieves state-of-the-art performance on five benchmarks.
arXiv Detail & Related papers (2020-06-18T17:51:39Z)
- FCOS: A simple and strong anchor-free object detector
We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion.
Almost all state-of-the-art object detectors such as RetinaNet, SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes.
In contrast, our proposed detector FCOS is anchor box free, as well as proposal free.
arXiv Detail & Related papers (2020-06-14T01:03:39Z)
- Scope Head for Accurate Localization in Object Detection
We propose a novel detector, coined ScopeNet, which models the anchors at each location as mutually dependent.
With our concise and effective design, the proposed ScopeNet achieves state-of-the-art results on COCO.
arXiv Detail & Related papers (2020-05-11T04:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.