Accurate and Reliable Methods for 5G UAV Jamming Identification With
Calibrated Uncertainty
- URL: http://arxiv.org/abs/2211.02924v1
- Date: Sat, 5 Nov 2022 15:04:45 GMT
- Title: Accurate and Reliable Methods for 5G UAV Jamming Identification With
Calibrated Uncertainty
- Authors: Hamed Farkhari, Joseanne Viana, Pedro Sebastiao, Luis Miguel Campos,
Luis Bernardo, Rui Dinis, Sarang Kahvazadeh
- Abstract summary: Only increasing accuracy without considering uncertainty may negatively impact Deep Neural Network (DNN) decision-making.
This paper proposes five combined preprocessing and post-processing methods for time-series binary classification problems.
- Score: 3.4208659698673127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Only increasing accuracy without considering uncertainty may negatively
impact Deep Neural Network (DNN) decision-making and decrease its reliability.
This paper proposes five combined preprocessing and post-processing methods for
time-series binary classification problems that simultaneously increase the
accuracy and reliability of DNN outputs applied in a 5G UAV security dataset.
These techniques use DNN outputs as input parameters and process them in
different ways. Two methods use a well-known Machine Learning (ML) algorithm as
a complement, and the other three use only confidence values that the DNN
estimates. We compare seven metrics, namely the Expected Calibration Error
(ECE), Maximum Calibration Error (MCE), Mean Confidence (MC), Mean Accuracy
(MA), Normalized Negative Log Likelihood (NLL), Brier Score Loss (BSL), and
Reliability Score (RS), along with the tradeoffs between them, to evaluate
the proposed hybrid algorithms. First, we show that the eXtreme Gradient
Boosting (XGB) classifier might not be reliable for binary classification under
the conditions this work presents. Second, we demonstrate that at least one of
the potential methods can achieve better results than the classification in the
DNN softmax layer. Finally, we show that the prospective methods may improve
accuracy and reliability through better uncertainty calibration, under the
assumption that the RS measures the difference between the MC and MA metrics,
and that this difference should be zero for a reliable classifier. For example,
Method 3 presents the best RS of 0.65, even when compared to the XGB
classifier, which achieves an RS of 7.22.
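For concreteness, the metrics above can be sketched for a binary classifier in pure Python (a minimal illustration using the standard binned definitions of ECE/MCE; the exact variants, binning, and scaling used in the paper may differ, and RS is assumed here to be the absolute MC-MA gap in percentage points, following the abstract's description):

```python
import math

def calibration_metrics(probs, labels, n_bins=10):
    """probs: predicted probability of class 1; labels: true 0/1 labels."""
    conf = [max(p, 1 - p) for p in probs]          # confidence of the predicted class
    pred = [1 if p >= 0.5 else 0 for p in probs]   # hard prediction
    correct = [int(p == y) for p, y in zip(pred, labels)]

    mc = sum(conf) / len(conf)        # Mean Confidence (MC)
    ma = sum(correct) / len(correct)  # Mean Accuracy (MA)
    rs = abs(mc - ma) * 100           # Reliability Score: |MC - MA| in percentage points

    ece, mce = 0.0, 0.0               # binned Expected / Maximum Calibration Error
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(conf) if lo < c <= hi]
        if not idx:
            continue
        gap = abs(sum(conf[i] for i in idx) / len(idx)
                  - sum(correct[i] for i in idx) / len(idx))
        ece += len(idx) / len(conf) * gap
        mce = max(mce, gap)

    bsl = sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)  # Brier Score Loss
    nll = -sum(math.log(p if y == 1 else 1 - p)
               for p, y in zip(probs, labels)) / len(probs)              # mean NLL
    return {"MC": mc, "MA": ma, "RS": rs, "ECE": ece, "MCE": mce, "BSL": bsl, "NLL": nll}
```

Under this reading, a model is reliable when its average confidence tracks its average accuracy (RS near zero), which is the property the proposed methods target alongside raw accuracy.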
Related papers
- Enhancing Reliability of Neural Networks at the Edge: Inverted
Normalization with Stochastic Affine Transformations [0.22499166814992438]
We propose a method to inherently enhance the robustness and inference accuracy of BayNNs deployed in in-memory computing architectures.
Empirical results show a graceful degradation in inference accuracy, with an improvement of up to 58.11%.
  (arXiv, 2024-01-23T00:27:31Z)
- Provably Robust and Plausible Counterfactual Explanations for Neural
  Networks via Robust Optimisation
We propose Provably RObust and PLAusible Counterfactual Explanations (PROPLACE)
We formulate an iterative algorithm to compute provably robust CEs and prove its convergence, soundness and completeness.
We show that PROPLACE achieves state-of-the-art performances against metrics on three evaluation aspects.
  (arXiv, 2023-09-22T00:12:09Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating
  Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accurateness of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
  (arXiv, 2023-03-25T08:56:21Z)
- Improving Uncertainty Calibration of Deep Neural Networks via Truth
  Discovery and Geometric Optimization [22.57474734944132]
We propose a truth discovery framework to integrate ensemble-based and post-hoc calibration methods.
On large-scale datasets including CIFAR and ImageNet, our method shows consistent improvement against state-of-the-art calibration approaches.
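For context on what post-hoc calibration means here, the sketch below shows temperature scaling, the standard post-hoc baseline such methods are typically compared against (a generic illustration, not the truth-discovery framework itself; the grid search is a simplification of the usual NLL minimization):

```python
import math

def sigmoid(z, t=1.0):
    """Temperature-scaled logistic: larger t flattens (softens) the probabilities."""
    return 1.0 / (1.0 + math.exp(-z / t))

def nll(logits, labels, t):
    """Mean negative log-likelihood of binary labels under temperature t."""
    eps = 1e-12
    total = 0.0
    for z, y in zip(logits, labels):
        p = sigmoid(z, t)
        total -= math.log(p + eps) if y == 1 else math.log(1.0 - p + eps)
    return total / len(logits)

def fit_temperature(logits, labels):
    """Pick the temperature minimizing validation NLL (coarse grid search)."""
    grid = [0.25 * k for k in range(1, 41)]  # t in [0.25, 10.0]
    return min(grid, key=lambda t: nll(logits, labels, t))
```

An overconfident model (large logits, imperfect accuracy) yields a fitted temperature above 1, softening its probabilities without changing its predicted labels.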
  (arXiv, 2021-06-25T06:44:16Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
  (arXiv, 2020-12-10T23:28:36Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable
  Out-of-Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
  (arXiv, 2020-11-05T08:04:34Z)
- Detecting Misclassification Errors in Neural Networks with a Gaussian
  Process Model [20.948038514886377]
This paper presents a new framework that produces a quantitative metric for detecting misclassification errors.
The framework, RED, builds an error detector on top of the base classifier and estimates uncertainty of the detection scores using Gaussian Processes.
  (arXiv, 2020-10-05T15:01:30Z)
- Certifying Confidence via Randomized Smoothing [151.67113334248464]
Randomized smoothing has been shown to provide good certified-robustness guarantees for high-dimensional classification problems.
Most smoothing methods do not give us any information about the confidence with which the underlying classifier makes a prediction.
We propose a method to generate certified radii for the prediction confidence of the smoothed classifier.
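The Monte Carlo smoothing this certificate builds on can be sketched as follows (a minimal illustration of standard randomized smoothing with the usual sigma * Phi^{-1}(p_hat) radius for the majority class; certifying the prediction confidence itself, as proposed above, refines this basic scheme):

```python
import random
from statistics import NormalDist

def smoothed_predict(base_clf, x, sigma=0.5, n=1000, seed=0):
    """Majority vote of base_clf under Gaussian input noise, with a certified l2 radius."""
    rng = random.Random(seed)
    votes = sum(base_clf([xi + rng.gauss(0.0, sigma) for xi in x])
                for _ in range(n))                  # number of class-1 votes
    label = 1 if votes > n - votes else 0
    p_hat = max(votes, n - votes) / n               # vote share of the winning class
    p_hat = min(p_hat, 1.0 - 1e-9)                  # keep inv_cdf finite
    radius = sigma * NormalDist().inv_cdf(p_hat)    # certified radius (valid when p_hat > 1/2)
    return label, radius

# Toy base classifier (hypothetical): predicts 1 when the feature mean is positive.
mean_clf = lambda x: 1 if sum(x) / len(x) > 0 else 0
```

Inputs far from the decision boundary win nearly all noisy votes and therefore receive a large certified radius.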
  (arXiv, 2020-09-17T04:37:26Z)
- Calibrating Deep Neural Network Classifiers on Out-of-Distribution
  Datasets [20.456742449675904]
CCAC (Confidence with an Auxiliary Class) is a new post-hoc confidence calibration method for deep neural networks (DNNs).
The key novelty of CCAC is an auxiliary class in the calibration model, which separates mis-classified samples from correctly classified ones.
Our experiments on different DNN models, datasets and applications show that CCAC can consistently outperform the prior post-hoc calibration methods.
  (arXiv, 2020-06-16T04:06:21Z)
- Second-Order Provable Defenses against Adversarial Attacks [63.34032156196848]
We show that if the eigenvalues of the network are bounded, we can compute a certificate in the $l$ norm efficiently using convex optimization.
We achieve certified accuracies of 5.78%, 44.96%, and 43.19% on 2,59% and 4BP-based methods respectively.
  (arXiv, 2020-06-01T05:55:18Z)
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning [91.13797346047984]
We introduce ADAHESSIAN, a second order optimization algorithm which dynamically incorporates the curvature of the loss function via ADAptive estimates.
We show that ADAHESSIAN achieves new state-of-the-art results by a large margin as compared to other adaptive optimization methods.
  (arXiv, 2020-06-01T05:00:51Z)
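The curvature estimate at the core of ADAHESSIAN is a Hutchinson-style diagonal-Hessian approximation, diag(H) ≈ E_z[z * (Hz)] over Rademacher vectors z; a minimal sketch using finite-difference Hessian-vector products (an illustration of the estimator only, not the full optimizer, which also applies spatial averaging and momentum):

```python
import random

def hvp(grad, w, v, eps=1e-5):
    """Hessian-vector product via central differences of the gradient."""
    gp = grad([wi + eps * vi for wi, vi in zip(w, v)])
    gm = grad([wi - eps * vi for wi, vi in zip(w, v)])
    return [(a - b) / (2.0 * eps) for a, b in zip(gp, gm)]

def hutchinson_diag(grad, w, n_samples=20, seed=0):
    """Estimate diag(H) at w as the average of z * (H z) over Rademacher z."""
    rng = random.Random(seed)
    d = [0.0] * len(w)
    for _ in range(n_samples):
        z = [rng.choice((-1.0, 1.0)) for _ in w]
        hz = hvp(grad, w, z)
        d = [di + zi * hi for di, zi, hi in zip(d, z, hz)]
    return [di / n_samples for di in d]
```

For a quadratic loss with diagonal Hessian the estimate is exact; ADAHESSIAN uses such estimates as a per-parameter preconditioner in place of Adam's gradient second moments.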
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.