CFARnet: deep learning for target detection with constant false alarm rate
- URL: http://arxiv.org/abs/2208.02474v3
- Date: Wed, 15 Nov 2023 08:35:24 GMT
- Title: CFARnet: deep learning for target detection with constant false alarm rate
- Authors: Tzvi Diskin, Yiftach Beer, Uri Okun and Ami Wiesel
- Abstract summary: We introduce a framework of CFAR constrained detectors.
Practically, we develop a deep learning framework for fitting neural networks that approximate the CFAR constrained Bayes optimal detector.
Experiments on target detection in different settings demonstrate that the proposed CFARnet allows a flexible tradeoff between CFAR and accuracy.
- Score: 2.2940141855172036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of target detection with a constant false alarm rate
(CFAR). This constraint is crucial in many practical applications and is a
standard requirement in classical composite hypothesis testing. In settings
where classical approaches are computationally expensive or where only data
samples are given, machine learning methodologies are advantageous. CFAR is
less understood in these settings. To close this gap, we introduce a framework
of CFAR constrained detectors. Theoretically, we prove that a CFAR constrained
Bayes optimal detector is asymptotically equivalent to the classical
generalized likelihood ratio test (GLRT). Practically, we develop a deep
learning framework for fitting neural networks that approximate it. Experiments
on target detection in different settings demonstrate that the proposed CFARnet
allows a flexible tradeoff between CFAR and accuracy.
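The CFAR requirement can be illustrated with a toy sketch (not the authors' code). The setup below is an assumption for illustration: detecting a nonzero-mean signal in Gaussian noise of unknown variance, with a simple quantile-matching penalty standing in for whatever CFAR-promoting loss term a learned detector would use. A naive energy statistic has a null distribution that shifts with the noise level (so no single threshold gives a constant false alarm rate), while the GLRT-style normalized statistic does not:

```python
import numpy as np

rng = np.random.default_rng(0)

def cfar_penalty(scores_a, scores_b):
    # Quantile-matching penalty: near zero when the two null-hypothesis
    # score distributions coincide, which is exactly what CFAR requires.
    return float(np.mean((np.sort(scores_a) - np.sort(scores_b)) ** 2))

def energy_stat(x):
    # Naive statistic: its null distribution scales with the noise
    # power, so the false alarm rate depends on the unknown variance.
    return np.mean(x ** 2, axis=-1)

def glrt_stat(x):
    # GLRT for a nonzero-mean signal with unknown noise variance:
    # a t-like statistic whose null law is free of sigma (CFAR).
    return x.shape[-1] * np.mean(x, axis=-1) ** 2 / np.var(x, axis=-1)

# Null-hypothesis batches at two noise levels (the nuisance parameter).
x1 = rng.normal(0.0, 1.0, size=(5000, 32))
x2 = rng.normal(0.0, 2.0, size=(5000, 32))

print(cfar_penalty(energy_stat(x1), energy_stat(x2)))  # large: not CFAR
print(cfar_penalty(glrt_stat(x1), glrt_stat(x2)))      # near zero: CFAR
```

In a learned detector one would add such a penalty, evaluated on null samples drawn under several nuisance-parameter settings, to the training loss; here the hypothetical `cfar_penalty` merely visualizes why the normalized statistic admits a single, variance-independent threshold.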
Related papers
- Unfolding Target Detection with State Space Model [8.493729039825332]
We introduce a novel method that combines signal processing and deep learning by unfolding the CFAR detector with a state space model architecture.
By preserving the CFAR pipeline yet turning its sophisticated configurations into trainable parameters, our method achieves high detection performance without manual parameter tuning.
The results highlight the remarkable performance of the proposed method, outperforming CFAR and its variants by 10X in detection rate and false alarm rate.
arXiv Detail & Related papers (2024-10-30T07:43:18Z)
- On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework, which induces different responses between normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z)
- OpenAUC: Towards AUC-Oriented Open-Set Recognition [151.5072746015253]
Traditional machine learning follows a closed-set assumption that the training and test sets share the same label space.
Open-Set Recognition (OSR) aims to make correct predictions on both closed-set and open-set samples.
To fix these issues, we propose a novel metric named OpenAUC.
arXiv Detail & Related papers (2022-10-22T08:54:15Z)
- Hierarchical Semi-Supervised Contrastive Learning for Contamination-Resistant Anomaly Detection [81.07346419422605]
Anomaly detection aims at identifying deviant samples from the normal data distribution.
Contrastive learning has provided a successful way to learn sample representations that enable effective discrimination of anomalies.
We propose a novel hierarchical semi-supervised contrastive learning framework for contamination-resistant anomaly detection.
arXiv Detail & Related papers (2022-07-24T18:49:26Z)
- Towards Accurate Open-Set Recognition via Background-Class Regularization [36.96359929574601]
In open-set recognition (OSR), classifiers should be able to reject unknown-class samples while maintaining high closed-set classification accuracy.
Previous studies attempted to limit latent feature space and reject data located outside the limited space via offline analyses.
We propose a simple inference process (without offline analyses) to conduct OSR in standard classifier architectures.
We show that the proposed method provides robust OSR results, while maintaining high closed-set classification accuracy.
arXiv Detail & Related papers (2022-07-21T03:55:36Z)
- Learning to Detect with Constant False Alarm Rate [2.2559617939136505]
We consider the use of machine learning for hypothesis testing with an emphasis on target detection.
We propose to add a term to the loss function that promotes similar distributions of the detector under any null hypothesis scenario.
arXiv Detail & Related papers (2022-06-12T14:32:40Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature preserving autoencoder filtering and also the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- Adversarially Robust Classification based on GLRT [26.44693169694826]
We show a defense strategy based on the generalized likelihood ratio test (GLRT), which jointly estimates the class of interest and the adversarial perturbation.
We show that the GLRT approach yields performance competitive with that of the minimax approach under the worst-case attack.
We also observe that the GLRT defense generalizes naturally to more complex models for which optimal minimax classifiers are not known.
arXiv Detail & Related papers (2020-11-16T10:16:05Z)
- Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference [150.07326223077405]
Few-shot learning is attracting much attention to mitigate data scarcity.
We present a discriminative nearest neighbor classification with deep self-attention.
We propose to boost the discriminative ability by transferring a natural language inference (NLI) model.
arXiv Detail & Related papers (2020-10-25T00:39:32Z)
- Bayesian Optimization with Machine Learning Algorithms Towards Anomaly Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z)
- A Study on Evaluation Standard for Automatic Crack Detection Regard the Random Fractal [15.811209242988257]
We find that automatic crack detectors based on deep learning are obviously underestimated by the widely used mean Average Precision (mAP) standard.
As a solution, a fractal-available evaluation standard named CovEval is proposed to correct the underestimation in crack detection.
In experiments using several common frameworks for object detection, models get much higher scores in crack detection according to CovEval.
arXiv Detail & Related papers (2020-07-23T15:46:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.