Localization Uncertainty-Based Attention for Object Detection
- URL: http://arxiv.org/abs/2108.11042v1
- Date: Wed, 25 Aug 2021 04:32:39 GMT
- Title: Localization Uncertainty-Based Attention for Object Detection
- Authors: Sanghun Park, Kunhee Kim, Eunseop Lee and Daijin Kim
- Abstract summary: We propose a more efficient uncertainty-aware dense detector (UADET) that predicts four-directional localization uncertainties via Gaussian modeling.
Experiments using the MS COCO benchmark show that our UADET consistently surpasses the baseline FCOS, and that our best model, ResNeXt-64x4d-101-DCN, obtains a single-model, single-scale AP of 48.3% on COCO test-dev.
- Score: 8.154943252001848
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection has been applied in a wide variety of real-world scenarios,
so detection algorithms must provide confidence estimates for their results so that
appropriate decisions can be made. Accordingly, several
studies have investigated the probabilistic confidence of bounding box
regression. However, such approaches have been restricted to anchor-based
detectors, which use box confidence values as additional screening scores
during non-maximum suppression (NMS) procedures. In this paper, we propose a
more efficient uncertainty-aware dense detector (UADET) that predicts
four-directional localization uncertainties via Gaussian modeling. Furthermore,
a simple uncertainty attention module (UAM) that exploits box confidence maps
is proposed to improve performance through feature refinement. Experiments
using the MS COCO benchmark show that our UADET consistently surpasses baseline
FCOS, and that our best model, ResNeXt-64x4d-101-DCN, obtains a single-model,
single-scale AP of 48.3% on COCO test-dev, thus achieving state-of-the-art
performance among various object detectors.
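The Gaussian modeling described above can be illustrated with a small sketch: each of the four box offsets (left, top, right, bottom) is predicted as a mean together with a standard deviation, and the regression loss becomes a Gaussian negative log-likelihood that penalizes both localization error and overconfident (too-small) variance. The function below is a hypothetical illustration under those assumptions, not the paper's actual loss:

```python
import math

def gaussian_nll(mu, sigma, target):
    """Mean Gaussian negative log-likelihood over the four box offsets.

    mu, sigma, target: length-4 sequences giving the (left, top, right,
    bottom) distances from a location to the box sides. A hypothetical
    sketch of four-directional Gaussian modeling, not the paper's code.
    """
    nll = 0.0
    for m, s, t in zip(mu, sigma, target):
        # The log term penalizes large variance; the quadratic term
        # penalizes error, scaled up when predicted uncertainty is small.
        nll += 0.5 * math.log(2.0 * math.pi * s * s) + (t - m) ** 2 / (2.0 * s * s)
    return nll / len(target)
```

With an exact prediction the loss reduces to the variance penalty alone, so the model is rewarded for shrinking sigma only when its mean is accurate; the resulting per-direction sigmas are the kind of box-confidence signal that an attention module like the UAM could consume.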
Related papers
- Exploiting Low-confidence Pseudo-labels for Source-free Object Detection [54.98300313452037]
Source-free object detection (SFOD) aims to adapt a source-trained detector to an unlabeled target domain without access to the labeled source data.
Current SFOD methods utilize a threshold-based pseudo-label approach in the adaptation phase.
We propose a new approach to take full advantage of pseudo-labels by introducing high and low confidence thresholds.
arXiv Detail & Related papers (2023-10-19T12:59:55Z)
- FDINet: Protecting against DNN Model Extraction via Feature Distortion Index [31.094958502555503]
FDINET is a novel defense mechanism that leverages the feature distribution of deep neural network (DNN) models.
It exploits FDI similarity to identify colluding adversaries from distributed extraction attacks.
FDINET exhibits the capability to identify colluding adversaries with an accuracy exceeding 91%.
arXiv Detail & Related papers (2023-06-20T07:14:37Z)
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
- ConfMix: Unsupervised Domain Adaptation for Object Detection via Confidence-based Mixing [32.679280923208715]
Unsupervised Domain Adaptation (UDA) for object detection aims to adapt a model trained on a source domain to detect instances from a new target domain for which annotations are not available.
We propose ConfMix, the first method that introduces a sample mixing strategy based on region-level detection confidence for adaptive object detector learning.
arXiv Detail & Related papers (2022-10-20T19:16:39Z)
- Distribution Calibration for Out-of-Domain Detection with Bayesian Approximation [35.34001858858684]
Out-of-Domain (OOD) detection is a key component in a task-oriented dialog system.
Previous softmax-based detection algorithms have been shown to be overconfident on OOD samples.
We propose a Bayesian OOD detection framework to calibrate distribution uncertainty using Monte-Carlo Dropout.
arXiv Detail & Related papers (2022-09-14T13:04:09Z)
- Gradient-Based Quantification of Epistemic Uncertainty for Deep Object Detectors [8.029049649310213]
We introduce novel gradient-based uncertainty metrics and investigate them for different object detection architectures.
Experiments show significant improvements in true positive / false positive discrimination and prediction of intersection over union.
We also find improvement over Monte-Carlo dropout uncertainty metrics and further significant boosts by aggregating different sources of uncertainty metrics.
arXiv Detail & Related papers (2021-07-09T16:04:11Z)
- Probabilistic Ranking-Aware Ensembles for Enhanced Object Detections [50.096540945099704]
We propose a novel ensemble called the Probabilistic Ranking Aware Ensemble (PRAE) that refines the confidence of bounding boxes from detectors.
We also introduce a bandit approach to address the confidence imbalance problem caused by the need to deal with different numbers of boxes.
arXiv Detail & Related papers (2021-05-07T09:37:06Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from becoming overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training in these with a novel loss function and centroid updating scheme and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.