Semi-Supervised SAR ATR Framework with Transductive Auxiliary
Segmentation
- URL: http://arxiv.org/abs/2308.16633v1
- Date: Thu, 31 Aug 2023 11:00:05 GMT
- Title: Semi-Supervised SAR ATR Framework with Transductive Auxiliary
Segmentation
- Authors: Chenwei Wang, Xiaoyu Liu, Yulin Huang, Siyi Luo, Jifang Pei, Jianyu
Yang, Deqing Mao
- Abstract summary: We propose a Semi-Supervised SAR ATR Framework with Transductive Auxiliary Segmentation (SFAS).
SFAS focuses on exploiting the transductive generalization on available unlabeled samples with an auxiliary loss serving as a regularizer.
Recognition accuracy of 94.18% is achieved with 20 training samples per class, together with accurate segmentation results.
- Score: 16.65792542181861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks (CNNs) have achieved high performance in
synthetic aperture radar (SAR) automatic target recognition (ATR). However, the
performance of CNNs depends heavily on a large amount of training data. The
insufficiency of labeled training SAR images limits recognition performance
and even invalidates some ATR methods; with only a few labeled samples, many
existing CNNs fail outright. To address these challenges, we
propose a Semi-supervised SAR ATR Framework with transductive Auxiliary
Segmentation (SFAS). The proposed framework focuses on exploiting the
transductive generalization on available unlabeled samples with an auxiliary
loss serving as a regularizer. Through auxiliary segmentation of unlabeled SAR
samples and information residue loss (IRL) in training, the framework can
employ the proposed training loop process and gradually exploit the information
compilation of recognition and segmentation to construct a helpful inductive
bias and achieve high performance. Experiments conducted on the MSTAR dataset
have shown the effectiveness of the proposed SFAS for few-shot learning. A
recognition accuracy of 94.18% is achieved with 20 training samples per class,
together with accurate segmentation results. Under extended operating
conditions (EOCs), recognition rates exceed 88.00% with only 10 training
samples per class.
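To make the training objective concrete, here is a minimal sketch of a multi-task semi-supervised loss in the spirit of SFAS: a recognition loss on labeled chips plus an auxiliary segmentation loss on unlabeled chips acting as a regularizer. The architecture, mask source, and weighting below are illustrative assumptions; the paper's information residue loss (IRL) and training-loop schedule are not reproduced.

```python
# A minimal sketch, assuming a shared backbone with a recognition head and a
# per-pixel segmentation head; a plain auxiliary segmentation term stands in
# for the paper's IRL-based regularization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBackbone(nn.Module):  # hypothetical architecture
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )
        self.seg_head = nn.Conv2d(32, 1, 1)  # target/clutter logits per pixel

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.seg_head(feats)

def sfas_style_loss(model, x_lab, y_lab, x_unlab, masks_unlab, lam=0.5):
    """Recognition CE on labeled chips + auxiliary segmentation BCE on
    unlabeled chips (masks assumed to come from a transductive step)."""
    logits, _ = model(x_lab)
    loss_cls = F.cross_entropy(logits, y_lab)
    _, seg_logits = model(x_unlab)
    loss_seg = F.binary_cross_entropy_with_logits(seg_logits, masks_unlab)
    return loss_cls + lam * loss_seg  # lam weights the regularizer
```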
Related papers
- GLRT-Based Metric Learning for Remote Sensing Object Retrieval [19.210692452537007]
Existing content-based remote sensing object retrieval (CBRSOR) methods neglect global statistical information during both the training and test stages.
Inspired by the Neyman-Pearson theorem, we propose a generalized likelihood ratio test-based metric learning (GLRTML) approach.
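As a rough illustration of a GLRT-style score (under a Gaussian embedding assumption, not necessarily the paper's exact statistic), one can compare the likelihood of a query embedding under a class model against a global background model:

```python
# Hedged sketch: log-likelihood ratio of "query drawn from class c" against
# a global background model, with Gaussian assumptions on the embeddings.
import numpy as np

def gaussian_loglik(x, mu, cov):
    d = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)

def glrt_score(query, mu_class, cov_class, mu_bg, cov_bg):
    # Larger score => query better explained by the class than by background.
    return (gaussian_loglik(query, mu_class, cov_class)
            - gaussian_loglik(query, mu_bg, cov_bg))
```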
arXiv Detail & Related papers (2024-10-08T07:53:30Z) - Weakly Contrastive Learning via Batch Instance Discrimination and Feature Clustering for Small Sample SAR ATR [7.2932563202952725]
We propose a novel framework named Batch Instance Discrimination and Feature Clustering (BIDFC).
In this framework, the embedding distance between samples should be kept moderate, owing to the high similarity between samples in SAR images.
Experimental results on the moving and stationary target acquisition and recognition (MSTAR) database indicate a 91.25% classification accuracy of our method fine-tuned on only 3.13% training data.
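A hedged sketch of in-batch instance discrimination follows: a standard InfoNCE objective over two augmented views, where a relatively large temperature softens repulsion so that highly similar SAR samples are not pushed too far apart. Parameter values are illustrative, not the paper's:

```python
# InfoNCE over in-batch instances; the temperature controls how strongly
# non-matching pairs are pushed apart (a "moderate distance" knob).
import torch
import torch.nn.functional as F

def instance_discrimination_loss(z1, z2, temperature=0.5):
    """z1, z2: (B, D) embeddings of two augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)     # diagonal pairs are positives
```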
arXiv Detail & Related papers (2024-08-07T08:39:33Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning [50.809769498312434]
We propose a novel dataset pruning method termed Temporal Dual-Depth Scoring (TDDS).
Our method achieves 54.51% accuracy with only 10% training data, surpassing random selection by 7.83% and other comparison methods by at least 12.69%.
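The paper's exact dual-depth statistic is not reproduced here, but trajectory-based pruning generally follows the pattern sketched below: record per-sample losses across training epochs, score each sample, and keep the top fraction. The variability score is a hypothetical stand-in:

```python
# Trajectory-based dataset pruning in outline; the scoring rule here is an
# illustrative placeholder, not TDDS itself.
import numpy as np

def prune_by_trajectory(loss_history, keep_frac=0.10):
    """loss_history: (num_epochs, num_samples) array of per-sample losses."""
    scores = loss_history.std(axis=0)           # illustrative temporal score
    k = max(1, int(keep_frac * loss_history.shape[1]))
    return np.argsort(scores)[-k:]              # indices of retained samples
```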
arXiv Detail & Related papers (2023-11-22T03:45:30Z) - Disentangled Representation Learning for RF Fingerprint Extraction under
Unknown Channel Statistics [77.13542705329328]
We propose a framework of disentangled representation learning (DRL) that first learns to factor the input signals into a device-relevant component and a device-irrelevant component via adversarial learning.
The implicit data augmentation in the proposed framework imposes a regularization on the RFF extractor to avoid the possible overfitting of device-irrelevant channel statistics.
Experiments validate that the proposed approach, referred to as DR-RFF, outperforms conventional methods in terms of generalizability to unknown complicated propagation environments.
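A common way to realize such adversarial disentanglement is a gradient-reversal layer (GRL), sketched below under assumed module names; the paper's exact architecture may differ:

```python
# A GRL trains the channel classifier normally while flipping its gradients
# into the feature extractor, pushing the extractor to discard channel
# (device-irrelevant) information.
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # flip gradients into the extractor

def drl_losses(features, device_head, channel_head, y_device, y_channel, lam=1.0):
    loss_device = F.cross_entropy(device_head(features), y_device)
    reversed_feats = GradReverse.apply(features, lam)
    loss_channel = F.cross_entropy(channel_head(reversed_feats), y_channel)
    return loss_device + loss_channel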
arXiv Detail & Related papers (2022-08-04T15:46:48Z) - Boosting Facial Expression Recognition by A Semi-Supervised Progressive
Teacher [54.50747989860957]
We propose a semi-supervised learning algorithm named Progressive Teacher (PT) to utilize reliable FER datasets as well as large-scale unlabeled expression images for effective training.
Experiments on widely-used databases RAF-DB and FERPlus validate the effectiveness of our method, which achieves state-of-the-art performance with accuracy of 89.57% on RAF-DB.
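A minimal sketch of teacher-style pseudo-labeling with a reliability filter follows; this is a generic confidence threshold, not PT's exact progressive schedule:

```python
# The teacher labels unlabeled images; only confident pseudo-labels are used
# to train the student.
import torch
import torch.nn.functional as F

def pseudo_label_loss(teacher, student, x_unlab, threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(teacher(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf >= threshold                 # only reliable pseudo-labels
    if keep.sum() == 0:
        return torch.zeros((), device=x_unlab.device)
    return F.cross_entropy(student(x_unlab[keep]), pseudo[keep])
```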
arXiv Detail & Related papers (2022-05-28T07:47:53Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature
Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
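The corruption step is easy to sketch: replace a random subset of each row's features with values taken from other rows, a batch-level stand-in for the training-set feature marginals that SCARF samples from:

```python
# SCARF-style view generation for tabular data.
import torch

def scarf_corrupt(x, corruption_rate=0.6):
    """x: (B, F) batch of tabular features; returns a corrupted view."""
    B, num_feats = x.shape
    corrupt_mask = torch.rand(B, num_feats, device=x.device) < corruption_rate
    donor_rows = torch.randint(0, B, (B, num_feats), device=x.device)
    marginal_samples = torch.gather(x, 0, donor_rows)  # feature-wise resampling
    return torch.where(corrupt_mask, marginal_samples, x)
```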
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Unsupervised Class-Incremental Learning Through Confusion [0.4604003661048266]
We introduce a novelty detection method that leverages the network confusion caused by training on incoming data as a new class.
We found that incorporating a class-imbalance during this detection method substantially enhances performance.
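A hedged sketch of the idea: briefly fine-tune a copy of the model on the incoming batch labeled as a candidate new class, then measure how much old-class validation accuracy is disrupted. The step count, learning rate, and reserved output slot below are illustrative assumptions:

```python
# Illustrative confusion probe; assumes the output layer already reserves a
# slot for the candidate new class.
import copy
import torch
import torch.nn.functional as F

def confusion_score(model, x_incoming, new_class_id, x_val, y_val, steps=5, lr=1e-3):
    probe = copy.deepcopy(model)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    target = torch.full((x_incoming.size(0),), new_class_id,
                        dtype=torch.long, device=x_incoming.device)
    for _ in range(steps):                       # brief fine-tune as "new class"
        opt.zero_grad()
        F.cross_entropy(probe(x_incoming), target).backward()
        opt.step()
    with torch.no_grad():                        # confusion = accuracy disruption
        acc = (probe(x_val).argmax(1) == y_val).float().mean().item()
    return 1.0 - acc
```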
arXiv Detail & Related papers (2021-04-09T15:58:43Z) - Sparse Signal Models for Data Augmentation in Deep Learning ATR [0.8999056386710496]
We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm.
We exploit the sparsity of the scattering centers in the spatial domain and the smoothly-varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting.
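A minimal sketch of the sparse-model idea: fit a SAR chip as a sparse combination of point-scatterer atoms via orthogonal matching pursuit (OMP), then synthesize augmented chips by perturbing the recovered coefficients. The atom dictionary is assumed given, and the jitter scheme is illustrative:

```python
# Sparse fit + coefficient perturbation as a data-augmentation sketch.
import numpy as np

def omp(D, y, k):
    """D: (N, M) dictionary of scatterer atoms; y: (N,) vectorized image."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        A = D[:, support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return support, coef

def augment_chip(D, support, coef, jitter=0.05, rng=None):
    rng = rng or np.random.default_rng()
    perturbed = coef * (1.0 + jitter * rng.standard_normal(len(coef)))
    return D[:, support] @ perturbed             # synthesized augmented chip
```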
arXiv Detail & Related papers (2020-12-16T21:46:33Z) - Self-Challenging Improves Cross-Domain Generalization [81.99554996975372]
Convolutional neural networks (CNNs) conduct image classification by activating the dominant features that correlate with labels.
We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data.
RSC iteratively challenges the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels.
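The self-challenging step in sketch form: find the feature dimensions with the largest gradient with respect to the task loss (the dominant ones), mute them, and recompute the loss so the network must rely on the remaining features. Channel granularity and drop fraction here are illustrative choices:

```python
# One RSC-style iteration over penultimate features.
import torch
import torch.nn.functional as F

def rsc_loss(features, head, labels, drop_frac=0.33):
    """features: (B, D) penultimate activations with requires_grad=True."""
    base_loss = F.cross_entropy(head(features), labels)
    grads = torch.autograd.grad(base_loss, features, retain_graph=True)[0]
    k = max(1, int(drop_frac * features.size(1)))
    dominant = grads.abs().topk(k, dim=1).indices  # most-activated dimensions
    mask = torch.ones_like(features).scatter_(1, dominant, 0.0)
    return F.cross_entropy(head(features * mask), labels)
```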
arXiv Detail & Related papers (2020-07-05T21:42:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.