Learning to Detect with Constant False Alarm Rate
- URL: http://arxiv.org/abs/2206.05747v1
- Date: Sun, 12 Jun 2022 14:32:40 GMT
- Title: Learning to Detect with Constant False Alarm Rate
- Authors: Tzvi Diskin, Uri Okun, Ami Wiesel
- Abstract summary: We consider the use of machine learning for hypothesis testing with an emphasis on target detection.
We propose to add a term to the loss function that promotes similar distributions of the detector under any null hypothesis scenario.
- Score: 2.2559617939136505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the use of machine learning for hypothesis testing with an
emphasis on target detection. Classical model-based solutions rely on comparing
likelihoods. These are sensitive to imperfect models and are often
computationally expensive. In contrast, data-driven machine learning is often
more robust and yields classifiers with fixed computational complexity. Learned
detectors usually provide high accuracy with low complexity but do not have a
constant false alarm rate (CFAR) as required in many applications. To close
this gap, we propose to add a term to the loss function that promotes similar
distributions of the detector under any null hypothesis scenario. Experiments
show that our approach leads to near CFAR detectors with similar accuracy as
their competitors.
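The abstract describes the method only at a high level, so the following is a minimal sketch of one plausible instantiation, not the authors' code: the CFAR-promoting term is written here as a Gaussian-kernel MMD between detector outputs on batches simulated from different null-hypothesis scenarios (e.g., different noise or clutter parameters). The function names, the binary cross-entropy detection loss, and the weight `lam` are all assumptions.

```python
# Hypothetical sketch of a CFAR-promoting training loss (not the authors' exact code).
import torch
import torch.nn.functional as F

def mmd_penalty(scores_a, scores_b, bandwidth=1.0):
    # Gaussian-kernel MMD^2 (simple biased estimate) between two 1-D batches of
    # detector outputs; small values mean the statistics are similarly distributed.
    def k(x, y):
        return torch.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * bandwidth ** 2))
    return (k(scores_a, scores_a).mean()
            + k(scores_b, scores_b).mean()
            - 2 * k(scores_a, scores_b).mean())

def cfar_training_loss(detector, x, y, null_batches, lam=1.0):
    # Usual detection loss on a labeled batch (y = 1 target present, 0 absent)...
    logits = detector(x).squeeze(-1)
    bce = F.binary_cross_entropy_with_logits(logits, y.float())
    # ...plus a penalty pushing the detector's output distribution to be the
    # same under every simulated null (target-absent) scenario, per the abstract.
    null_scores = [detector(b).squeeze(-1) for b in null_batches]
    penalty = sum(mmd_penalty(s, null_scores[0]) for s in null_scores[1:])
    return bce + lam * penalty
```

In this reading, increasing `lam` trades raw accuracy for more nearly constant false alarm behavior, matching the flexible CFAR/accuracy tradeoff mentioned in the CFARnet entry below.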
Related papers
- Can I trust my anomaly detection system? A case study based on explainable AI [0.4416503115535552]
This case study explores the robustness of an anomaly detection system based on variational autoencoder generative models.
The goal is to get a different perspective on the real performance of anomaly detectors that use reconstruction differences; a minimal sketch of this scoring rule appears after the list.
arXiv Detail & Related papers (2024-07-29T12:39:07Z)
- Is K-fold cross validation the best model selection method for Machine Learning? [0.0]
K-fold cross-validation is the most common approach to assessing the likelihood that a machine learning outcome was generated by chance.
A novel test based on K-fold CV and the Upper Bound of the actual error (K-fold CUBV) is proposed.
arXiv Detail & Related papers (2024-01-29T18:46:53Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- PULL: Reactive Log Anomaly Detection Based On Iterative PU Learning [58.85063149619348]
We propose PULL, an iterative log analysis method for reactive anomaly detection based on estimated failure time windows.
Our evaluation shows that PULL consistently outperforms ten benchmark baselines across three different datasets.
arXiv Detail & Related papers (2023-01-25T16:34:43Z)
- Few-shot Object Detection with Refined Contrastive Learning [4.520231308678286]
We propose a novel few-shot object detection (FSOD) method with Refined Contrastive Learning (FSRC).
A pre-determination component is introduced to identify the Resemblance Group within the novel classes, i.e., the subset of classes that are easily confused with one another.
Refined Contrastive Learning (RCL) is then applied specifically to this group in order to increase the inter-class distances among them.
arXiv Detail & Related papers (2022-11-24T09:34:20Z)
- CFARnet: deep learning for target detection with constant false alarm rate [2.2940141855172036]
We introduce a framework of CFAR-constrained detectors.
Practically, we develop a deep learning framework for fitting neural networks that approximate the optimal CFAR-constrained detector.
Experiments on target detection in different settings demonstrate that the proposed CFARnet allows a flexible tradeoff between CFAR behavior and accuracy.
arXiv Detail & Related papers (2022-08-04T05:54:36Z)
- Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference [150.07326223077405]
Few-shot learning is attracting much attention as a way to mitigate data scarcity.
We present a discriminative nearest-neighbor classification approach with deep self-attention.
We propose to boost the discriminative ability by transferring a natural language inference (NLI) model.
arXiv Detail & Related papers (2020-10-25T00:39:32Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
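Two of the entries above (the variational autoencoder case study and the mirrored autoencoder) detect anomalies from reconstruction differences, as referenced in the first entry. A minimal sketch of that scoring rule, assuming a generic trained autoencoder `model` and a threshold calibrated on normal data (both hypothetical):

```python
# Generic reconstruction-error anomaly scoring; the papers above use more
# refined variants (semantic or adversarially-trained reconstruction measures).
import numpy as np

def anomaly_scores(model, x):
    # Per-sample mean squared reconstruction error; larger = more anomalous.
    x = np.asarray(x)
    x_hat = np.asarray(model(x))  # autoencoder reconstruction (assumed interface)
    return ((x - x_hat) ** 2).reshape(len(x), -1).mean(axis=1)

def detect(model, x, threshold):
    # Flag samples whose error exceeds a threshold, typically a high quantile
    # (e.g. the 99th percentile) of scores computed on held-out normal data.
    return anomaly_scores(model, x) > threshold
```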