Benchmarking Deep Learning Classifiers for SAR Automatic Target
Recognition
- URL: http://arxiv.org/abs/2312.06940v1
- Date: Tue, 12 Dec 2023 02:20:39 GMT
- Title: Benchmarking Deep Learning Classifiers for SAR Automatic Target
Recognition
- Authors: Jacob Fein-Ashley, Tian Ye, Rajgopal Kannan, Viktor Prasanna, Carl
Busart
- Abstract summary: This paper comprehensively benchmarks several advanced deep learning models for SAR ATR with multiple distinct SAR imagery datasets.
We evaluate and compare the five classifiers with respect to their classification accuracy, runtime performance in terms of inference throughput, and analytical performance.
No clear model winner emerges across all of our chosen metrics, and a "one model rules all" case is doubtful in the domain of SAR ATR.
- Score: 7.858656052565242
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR) is
a key technique of remote-sensing image recognition which can be supported by
deep neural networks. The existing works on SAR ATR mostly focus on improving
the accuracy of target recognition while ignoring the system's performance in
terms of speed and storage, which is critical to real-world applications of SAR
ATR. For decision-makers aiming to identify a proper deep learning model to
deploy in a SAR ATR system, it is important to understand the performance of
different candidate deep learning models and determine the best model
accordingly. This paper comprehensively benchmarks several advanced deep
learning models for SAR ATR with multiple distinct SAR imagery datasets.
Specifically, we train and test five SAR image classifiers based on Residual
Neural Networks (ResNet18, ResNet34, ResNet50), a Graph Neural Network (GNN),
and a Vision Transformer for Small-Sized Datasets (SS-ViT). We select three
datasets (MSTAR, GBSAR, and SynthWakeSAR) that offer heterogeneity. We evaluate
and compare the five classifiers concerning their classification accuracy,
runtime performance in terms of inference throughput, and analytical
performance in terms of number of parameters, number of layers, model size, and
number of operations. Experimental results show that the GNN classifier
outperforms the others with respect to throughput and latency. However, no
clear model winner emerges across all of our chosen metrics, and a "one model
rules all" case is doubtful in the domain of SAR ATR.
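As a concrete illustration of the non-accuracy metrics described above, the sketch below measures inference throughput and reports the parameter count and model size for one candidate classifier. This is a hypothetical example rather than the authors' benchmark code: torchvision's resnet18 stands in for the paper's ResNet18, and the batch size, input shape, and class count are assumptions; the GNN and SS-ViT models and the MSTAR, GBSAR, and SynthWakeSAR datasets are not reproduced here.

```python
# Hypothetical benchmarking sketch (not the authors' code): inference
# throughput plus analytical metrics for one candidate SAR ATR classifier.
import time

import torch
from torchvision.models import resnet18

device = "cuda" if torch.cuda.is_available() else "cpu"
model = resnet18(num_classes=10).to(device).eval()  # assumed: 10 target classes

# Analytical metrics: number of parameters and fp32 model size in MB.
num_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {num_params / 1e6:.2f} M, size: {num_params * 4 / 1e6:.1f} MB")

# Runtime metric: inference throughput (images/s) on a dummy SAR-sized batch.
batch = torch.randn(64, 3, 128, 128, device=device)  # assumed input shape
iters = 20
with torch.no_grad():
    for _ in range(5):          # warm-up iterations before timing
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start
print(f"throughput: {iters * batch.shape[0] / elapsed:.1f} images/s")
```

Repeating the same loop for each of the five classifiers and each dataset's input size would yield the throughput and analytical columns compared in the paper; the number of operations could be obtained analogously with a FLOP-counting tool such as fvcore's FlopCountAnalysis.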
Related papers
- IncSAR: A Dual Fusion Incremental Learning Framework for SAR Target Recognition [7.9330990800767385]
Models' tendency to forget old knowledge when learning new tasks, known as catastrophic forgetting, remains an open challenge.
In this paper, an incremental learning framework, called IncSAR, is proposed to mitigate catastrophic forgetting in SAR target recognition.
IncSAR comprises a Vision Transformer (ViT) and a custom-designed Convolutional Neural Network (CNN) in individual branches combined through a late-fusion strategy.
arXiv Detail & Related papers (2024-10-08T08:49:47Z)
- SAFE: a SAR Feature Extractor based on self-supervised learning and masked Siamese ViTs [5.961207817077044]
We propose a novel self-supervised learning framework based on masked Siamese Vision Transformers to create a General SAR Feature Extractor coined SAFE.
Our method leverages contrastive learning principles to train a model on unlabeled SAR data, extracting robust and generalizable features.
We introduce tailored data augmentation techniques specific to SAR imagery, such as sub-aperture decomposition and despeckling.
Our network competes with or surpasses other state-of-the-art methods in few-shot classification and segmentation tasks, even without being trained on the sensors used for the evaluation.
arXiv Detail & Related papers (2024-06-30T23:11:20Z)
- Towards SAR Automatic Target Recognition MultiCategory SAR Image Classification Based on Light Weight Vision Transformer [11.983317593939688]
This paper applies a lightweight Vision Transformer-based model to classify SAR images.
The entire structure is verified on an open-access SAR dataset.
arXiv Detail & Related papers (2024-05-18T11:24:52Z)
- SARatrX: Towards Building A Foundation Model for SAR Target Recognition [22.770010893572973]
We make the first attempt towards building a foundation model for SAR ATR, termed SARatrX.
SARatrX learns generalizable representations via self-supervised learning (SSL) and provides a basis for label-efficient model adaptation to generic SAR target detection and classification tasks.
Specifically, SARatrX is trained on 0.18 M unlabelled SAR target samples, which are curated by combining contemporary benchmarks and constitute the largest publicly available dataset to date.
arXiv Detail & Related papers (2024-05-15T14:17:44Z)
- Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation [63.15257949821558]
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z)
- ADASR: An Adversarial Auto-Augmentation Framework for Hyperspectral and Multispectral Data Fusion [54.668445421149364]
Deep learning-based hyperspectral image (HSI) super-resolution aims to generate a high-spatial-resolution HSI (HR-HSI) by fusing an HSI and a multispectral image (MSI) with deep neural networks (DNNs).
In this letter, we propose a novel adversarial automatic data augmentation framework, ADASR, that automatically optimizes and augments HSI-MSI sample pairs to enrich data diversity for HSI-MSI fusion.
arXiv Detail & Related papers (2023-10-11T07:30:37Z)
- Remote Sensing Image Classification using Transfer Learning and Attention Based Deep Neural Network [59.86658316440461]
We propose a deep learning-based framework for remote sensing image scene classification (RSISC), which makes use of transfer learning and a multi-head attention scheme.
The proposed deep learning framework is evaluated on the benchmark NWPU-RESISC45 dataset and achieves the best classification accuracy of 94.7%.
arXiv Detail & Related papers (2022-06-20T10:05:38Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both full-reference (FR) and no-reference (NR) modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Open-Set Recognition: A Good Closed-Set Classifier is All You Need [146.6814176602689]
We show that the ability of a classifier to make the 'none-of-the-above' decision is highly correlated with its accuracy on the closed-set classes.
We use this correlation to boost the performance of the cross-entropy OSR 'baseline' by improving its closed-set accuracy.
We also construct new benchmarks which better respect the task of detecting semantic novelty.
arXiv Detail & Related papers (2021-10-12T17:58:59Z)
- LoRD-Net: Unfolded Deep Detection Network with Low-Resolution Receivers [104.01415343139901]
We propose a deep detector entitled LoRD-Net for recovering information symbols from one-bit measurements.
LoRD-Net has a task-based architecture dedicated to recovering the underlying signal of interest.
We evaluate the proposed receiver architecture for one-bit signal recovery in wireless communications.
arXiv Detail & Related papers (2021-02-05T04:26:05Z)
- Sparse Signal Models for Data Augmentation in Deep Learning ATR [0.8999056386710496]
We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm.
We exploit the sparsity of the scattering centers in the spatial domain and the smoothly-varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting.
arXiv Detail & Related papers (2020-12-16T21:46:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.