Combining Deep Learning and Verification for Precise Object Instance Detection
- URL: http://arxiv.org/abs/1912.12270v4
- Date: Mon, 29 Jun 2020 15:18:52 GMT
- Title: Combining Deep Learning and Verification for Precise Object Instance Detection
- Authors: Siddharth Ancha, Junyu Nan, David Held
- Abstract summary: We develop a set of verification tests which a proposed detection must pass to be accepted.
We show that these tests can improve the overall accuracy of a base detector and that accepted examples are highly likely to be correct.
This allows the detector to operate in a high precision regime and can thus be used for robotic perception systems.
- Score: 13.810783248835186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning object detectors often return false positives with very high
confidence. Although they optimize generic detection performance, such as mean
average precision (mAP), they are not designed for reliability. For a reliable
detection system, if a high confidence detection is made, we would want high
certainty that the object has indeed been detected. To achieve this, we have
developed a set of verification tests which a proposed detection must pass to
be accepted. We develop a theoretical framework which proves that, under
certain assumptions, our verification tests will not accept any false
positives. Based on an approximation to this framework, we present a practical
detection system that can verify, with high precision, whether each detection
of a machine-learning based object detector is correct. We show that these
tests can improve the overall accuracy of a base detector and that accepted
examples are highly likely to be correct. This allows the detector to operate
in a high precision regime and can thus be used for robotic perception systems
as a reliable instance detection method. Code is available at
https://github.com/siddancha/FlowVerify.
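The gating idea in the abstract can be sketched as a simple filter: a base detector proposes detections, and each proposal is accepted only if it passes every verification test. This is a minimal illustrative sketch, not the FlowVerify implementation; the `Detection` class and the example tests are hypothetical stand-ins for the paper's flow-based verification checks.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    """Hypothetical detection record: class label, confidence, bounding box."""
    label: str
    confidence: float
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2)

# A verification test maps a detection to accept/reject.
VerificationTest = Callable[[Detection], bool]

def verify_detections(detections: List[Detection],
                      tests: List[VerificationTest]) -> List[Detection]:
    """Keep only detections that pass ALL verification tests.

    Rejecting anything that fails a single test trades recall for
    precision, which is the high-precision regime the paper targets.
    """
    return [d for d in detections if all(test(d) for test in tests)]

# Illustrative tests (the paper's actual tests verify appearance via
# optical flow; these placeholders only show the gating structure).
def min_confidence(d: Detection) -> bool:
    return d.confidence >= 0.9

def valid_box(d: Detection) -> bool:
    return d.box[2] > d.box[0] and d.box[3] > d.box[1]

detections = [
    Detection("mug", 0.95, (10, 10, 50, 60)),
    Detection("mug", 0.99, (30, 30, 20, 20)),  # degenerate box: rejected
    Detection("bowl", 0.40, (5, 5, 40, 40)),   # low confidence: rejected
]
accepted = verify_detections(detections, [min_confidence, valid_box])
# Only the first detection passes both tests.
```

The key design choice mirrored here is that verification is conjunctive: every test must pass, so false positives that slip past one check can still be caught by another.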
Related papers
- Lie Detector: Unified Backdoor Detection via Cross-Examination Framework [68.45399098884364]
We propose a unified backdoor detection framework in the semi-honest setting.
Our method achieves superior detection performance, improving accuracy by 5.4%, 1.6%, and 11.9% over SoTA baselines.
Notably, it is the first to effectively detect backdoors in multimodal large language models.
arXiv Detail & Related papers (2025-03-21T06:12:06Z)
- DeePen: Penetration Testing for Audio Deepfake Detection [6.976070957821282]
Deepfakes pose significant security risks to individuals, organizations, and society at large.
We introduce a systematic testing methodology, which we call DeePen.
Our approach operates without prior knowledge of or access to the target deepfake detection models.
arXiv Detail & Related papers (2025-02-27T12:26:25Z)
- Uncertainty Estimation for 3D Object Detection via Evidential Learning
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a widespread but largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z)
- Towards Building Self-Aware Object Detectors via Reliable Uncertainty Quantification and Calibration [17.461451218469062]
In this work, we introduce the Self-Aware Object Detection (SAOD) task.
The SAOD task respects and adheres to the challenges that object detectors face in safety-critical environments such as autonomous driving.
We extensively use our framework, which introduces novel metrics and large scale test datasets, to test numerous object detectors.
arXiv Detail & Related papers (2023-07-03T11:16:39Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- Object Detection as Probabilistic Set Prediction [3.7599363231894176]
We present a proper scoring rule for evaluating and training probabilistic object detectors.
Our results indicate that the training of existing detectors is optimized toward non-probabilistic metrics.
arXiv Detail & Related papers (2022-03-15T15:13:52Z)
- The Box Size Confidence Bias Harms Your Object Detector [7.445987710491257]
We show that conditional confidence bias is harming the expected performance of object detectors.
Specifically, we demonstrate how to modify the histogram binning calibration to not only avoid performance impairment but also improve performance.
We show improvements of up to 0.6 mAP and 0.8 mAP50 without extra data or training.
arXiv Detail & Related papers (2021-12-03T13:32:04Z)
- Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]
This work augments the fine-tuning stage for object detectors by exploring adversarial examples.
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the object detection benchmark.
arXiv Detail & Related papers (2021-03-23T19:45:26Z)
- Learn what you can't learn: Regularized Ensembles for Transductive Out-of-distribution Detection [76.39067237772286]
We show that current out-of-distribution (OOD) detection algorithms for neural networks produce unsatisfactory results in a variety of OOD detection scenarios.
This paper studies how such "hard" OOD scenarios can benefit from adjusting the detection method after observing a batch of the test data.
We propose a novel method that uses an artificial labeling scheme for the test data and regularization to obtain ensembles of models that produce contradictory predictions only on the OOD samples in a test batch.
arXiv Detail & Related papers (2020-12-10T16:55:13Z)
- Slender Object Detection: Diagnoses and Improvements [74.40792217534]
In this paper, we are concerned with the detection of a particular type of object with extreme aspect ratios, namely slender objects.
For a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed when evaluated solely on slender objects.
arXiv Detail & Related papers (2020-11-17T09:39:42Z)
- Detection as Regression: Certified Object Detection by Median Smoothing [50.89591634725045]
This work is motivated by recent progress on certified classification by randomized smoothing.
We obtain the first model-agnostic, training-free, and certified defense for object detection against $\ell$-bounded attacks.
arXiv Detail & Related papers (2020-07-07T18:40:19Z)
- Seeing without Looking: Contextual Rescoring of Object Detections for AP Maximization [4.346179456029563]
We propose to incorporate context in object detection by post-processing the output of an arbitrary detector.
Rescoring is done by conditioning on contextual information from the entire set of detections.
We show that AP can be improved by simply reassigning the detection confidence values.
arXiv Detail & Related papers (2019-12-27T18:56:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.