Unaligned but Safe -- Formally Compensating Performance Limitations for
Imprecise 2D Object Detection
- URL: http://arxiv.org/abs/2202.05123v1
- Date: Thu, 10 Feb 2022 16:17:30 GMT
- Title: Unaligned but Safe -- Formally Compensating Performance Limitations for
Imprecise 2D Object Detection
- Authors: Tobias Schuster, Emmanouil Seferis, Simon Burton, Chih-Hong Cheng
- Abstract summary: We consider the imperfection within machine learning-based 2D object detection and its impact on safety.
We formally prove the minimum required bounding box enlargement factor to cover the ground truth.
We then demonstrate that the factor can be mathematically adjusted to a smaller value, provided that the motion planner takes a fixed-length buffer in making its decisions.
- Score: 0.34410212782758043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the imperfection within machine learning-based 2D
object detection and its impact on safety. We address a special sub-type of
performance limitations: the prediction bounding box cannot be perfectly
aligned with the ground truth, but the computed Intersection-over-Union metric
is always larger than a given threshold. Under this type of performance
limitation, we formally prove the minimum required bounding box enlargement
factor to cover the ground truth. We then demonstrate that the factor can be
mathematically adjusted to a smaller value, provided that the motion planner
takes a fixed-length buffer in making its decisions. Finally, observing the
difference between an empirically measured enlargement factor and our formally
derived worst-case enlargement factor offers an interesting connection between
the quantitative evidence (demonstrated by statistics) and the qualitative
evidence (demonstrated by worst-case analysis).
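To make the setting concrete, the following is a minimal numerical sketch (in Python) of the quantity discussed above: for a fixed IoU threshold, it searches over ground-truth boxes that still reach that IoU against a unit prediction box and records the symmetric enlargement the prediction would need to contain them. The box parameterization, grid ranges, and function names are illustrative assumptions; the paper derives the worst-case factor formally rather than by grid search.

```python
import numpy as np

def worst_case_factor(iou_threshold, steps=32):
    """Coarse grid search over ground-truth boxes (center offset cx, cy and size w, h)
    that reach the given IoU against the unit prediction box [-0.5, 0.5]^2, returning the
    largest symmetric enlargement of the prediction needed to contain them.
    The grid is coarse, so the result is only a lower bound on the true worst case."""
    c = np.linspace(-1.0, 1.0, steps)   # candidate ground-truth center offsets
    s = np.linspace(0.2, 3.0, steps)    # candidate ground-truth widths / heights
    cx, cy, w, h = np.meshgrid(c, c, s, s, indexing="ij")

    # Axis-aligned IoU between the unit prediction box and each candidate ground truth.
    inter_w = np.clip(np.minimum(0.5, cx + w / 2) - np.maximum(-0.5, cx - w / 2), 0.0, None)
    inter_h = np.clip(np.minimum(0.5, cy + h / 2) - np.maximum(-0.5, cy - h / 2), 0.0, None)
    inter = inter_w * inter_h
    iou = inter / (1.0 + w * h - inter)

    # Scaling the prediction by k about its center gives the box [-k/2, k/2]^2, so covering
    # the ground truth requires k >= 2|cx| + w and k >= 2|cy| + h.
    factor = np.maximum(1.0, np.maximum(2 * np.abs(cx) + w, 2 * np.abs(cy) + h))
    return factor[iou >= iou_threshold].max()

if __name__ == "__main__":
    for t in (0.5, 0.7):
        print(f"IoU >= {t}: empirical worst-case enlargement factor ~= {worst_case_factor(t):.2f}")
```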
Related papers
- How Likely Are You to Observe Non-locality with Imperfect Detection Efficiency and Random Measurement Settings? [0.0]
Imperfect detection efficiency remains one of the major obstacles in achieving loophole-free Bell tests over long distances.
We examine the impact of limited detection efficiency on the probability of Bell inequality violation with Haar random measurement settings.
We show that the so-called typicality of Bell inequality violation holds even if the detection efficiency is limited.
arXiv Detail & Related papers (2025-03-27T14:08:50Z)
- Adaptive Bounding Box Uncertainties via Two-Step Conformal Prediction [44.83236260638115]
We leverage conformal prediction to obtain uncertainty intervals with guaranteed coverage for object bounding boxes.
One challenge in doing so is that bounding box predictions are conditioned on the object's class label.
We develop a novel two-step conformal approach that propagates uncertainty in predicted class labels into the uncertainty intervals of bounding boxes.
arXiv Detail & Related papers (2024-03-12T02:45:24Z)
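For context on the conformal-prediction entry above, here is a minimal split-conformal sketch for a single bounding-box coordinate. It is not the paper's two-step, class-conditional procedure; the calibration data, the choice of alpha, and all names are illustrative assumptions.

```python
import numpy as np

def conformal_quantile(abs_residuals, alpha=0.1):
    """Finite-sample conformal quantile: the ceil((n+1)(1-alpha))-th smallest residual."""
    r = np.sort(np.asarray(abs_residuals))
    n = len(r)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return r[min(k, n) - 1]

def coordinate_interval(pred_coord, q):
    """Interval with marginal coverage >= 1 - alpha under exchangeability."""
    return pred_coord - q, pred_coord + q

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cal_pred = rng.normal(100.0, 5.0, size=500)            # toy calibration predictions
    cal_true = cal_pred + rng.normal(0.0, 3.0, size=500)   # toy ground-truth coordinates
    q = conformal_quantile(np.abs(cal_true - cal_pred), alpha=0.1)
    print(coordinate_interval(112.0, q))                   # interval for a new prediction
```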
- Disentangled Representation Learning with Transmitted Information Bottleneck [57.22757813140418]
We present DisTIB (Transmitted Information Bottleneck for Disentangled representation learning), a novel objective that navigates the balance between information compression and preservation.
arXiv Detail & Related papers (2023-11-03T03:18:40Z)
- Predicting Emergent Abilities with Infinite Resolution Evaluation [85.89911520190711]
We introduce PassUntil, an evaluation strategy with theoretically infinite resolution, through massive sampling in the decoding phase.
We predict the performance of the 2.4B model on code generation with merely 0.05% deviation before training starts.
We identify a kind of accelerated emergence whose scaling curve cannot be fitted by the standard scaling-law function.
arXiv Detail & Related papers (2023-10-05T02:35:00Z)
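As a toy illustration of why massive sampling gives fine-grained resolution (not the PassUntil implementation itself), the sketch below estimates a pass rate far smaller than what a handful of decodings could resolve; the Bernoulli model of decoding and the chosen pass rate are assumptions.

```python
import numpy as np

def estimate_pass_rate(true_pass_rate, n_samples, rng):
    """Estimate a tiny per-problem pass rate by sampling many independent decodings."""
    successes = rng.random(n_samples) < true_pass_rate
    return successes.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = 5e-4  # a pass rate that would look like "0%" with only a handful of samples
    for n in (10, 1_000, 100_000):
        print(f"{n:>7} samples -> estimated pass rate {estimate_pass_rate(p, n, rng):.5f}")
```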
- Expressive Losses for Verified Robustness via Convex Combinations [67.54357965665676]
We study the relationship between the over-approximation coefficient and performance profiles across different expressive losses.
We show that, while expressivity is essential, better approximations of the worst-case loss are not necessarily linked to superior robustness-accuracy trade-offs.
arXiv Detail & Related papers (2023-05-23T12:20:29Z)
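The entry above can be read as combining an attack-based loss with a bound-based (verified) loss; the sketch below shows a generic convex combination in which the mixing weight plays the role of the over-approximation coefficient. It is not necessarily the paper's exact loss formulation, and both loss terms are placeholders supplied by the caller.

```python
def expressive_loss(adversarial_loss, verified_bound_loss, alpha):
    """Generic convex combination: alpha = 0 recovers plain adversarial training,
    alpha = 1 trains on the full over-approximation of the worst-case loss."""
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * adversarial_loss + alpha * verified_bound_loss

if __name__ == "__main__":
    # Toy values: the bound-based loss over-approximates the attacked loss.
    print(expressive_loss(adversarial_loss=0.42, verified_bound_loss=1.30, alpha=0.25))
```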
- Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
arXiv Detail & Related papers (2022-10-14T08:09:33Z)
- Distributionally robust risk evaluation with a causality constraint and structural information [0.0]
We approximate test functions by neural networks and establish sample complexity bounds via Rademacher complexity.
Our framework outperforms the classic counterparts in the distributionally robust portfolio selection problem.
arXiv Detail & Related papers (2022-03-20T14:48:37Z)
- Taming Adversarial Robustness via Abstaining [7.1975923901054575]
We consider a binary classification problem where the observations can be perturbed by an adversary.
We include an abstaining option, where the classifier abstains from taking a decision when it has low confidence about the prediction.
We show that there exists a tradeoff between the two metrics regardless of which method is used to choose the abstaining region.
arXiv Detail & Related papers (2021-04-06T07:36:48Z)
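A minimal sketch of the abstention idea in the entry above: abstain whenever the predicted probability is too close to 0.5. The margin and probabilities are illustrative; the paper's analysis of how to choose the abstaining region is not reproduced here.

```python
from typing import Optional

def classify_with_abstention(p_positive: float, margin: float = 0.2) -> Optional[int]:
    """Return 1 or 0 when confident, None (abstain) when |p - 0.5| < margin."""
    if abs(p_positive - 0.5) < margin:
        return None  # abstain: confidence too low to commit to a decision
    return 1 if p_positive >= 0.5 else 0

if __name__ == "__main__":
    for p in (0.95, 0.55, 0.10):
        print(p, "->", classify_with_abstention(p))
```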
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- Rethink Maximum Mean Discrepancy for Domain Adaptation [77.2560592127872]
This paper theoretically proves two essential facts: 1) minimizing the Maximum Mean Discrepancy is equivalent to maximizing the source and target intra-class distances respectively while jointly minimizing their variance with some implicit weights, so that the feature discriminability degrades.
Experiments on several benchmark datasets not only confirm the validity of the theoretical results but also demonstrate that our approach substantially outperforms the comparative state-of-the-art methods.
arXiv Detail & Related papers (2020-07-01T18:25:10Z)
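For reference, the quantity minimized in MMD-based domain adaptation can be sketched as the standard biased empirical estimate of squared MMD with an RBF kernel. This is the textbook estimator, not the re-weighted variant the paper analyses; the bandwidth and toy feature batches are assumptions.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    """Pairwise k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    d2 = np.sum(x**2, axis=1)[:, None] + np.sum(y**2, axis=1)[None, :] - 2 * x @ y.T
    return np.exp(-d2 / (2 * bandwidth**2))

def mmd2(source, target, bandwidth=1.0):
    """Biased empirical estimate of squared Maximum Mean Discrepancy."""
    k_ss = rbf_kernel(source, source, bandwidth).mean()
    k_tt = rbf_kernel(target, target, bandwidth).mean()
    k_st = rbf_kernel(source, target, bandwidth).mean()
    return k_ss + k_tt - 2 * k_st

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(0.0, 1.0, size=(128, 16))   # toy source features
    tgt = rng.normal(0.5, 1.0, size=(128, 16))   # toy shifted target features
    print(f"MMD^2 estimate: {mmd2(src, tgt):.4f}")
```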
- Towards Better Performance and More Explainable Uncertainty for 3D Object Detection of Autonomous Vehicles [33.0319422469465]
We propose a novel form of the loss function to increase the performance of LiDAR-based 3D object detection.
With the new loss function, the performance of our method on the val split of the KITTI dataset shows up to a 15% increase in terms of Average Precision.
arXiv Detail & Related papers (2020-06-22T05:49:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.