A Global Model Approach to Robust Few-Shot SAR Automatic Target
Recognition
- URL: http://arxiv.org/abs/2303.10800v1
- Date: Mon, 20 Mar 2023 00:24:05 GMT
- Title: A Global Model Approach to Robust Few-Shot SAR Automatic Target
Recognition
- Authors: Nathan Inkawhich
- Abstract summary: It may not always be possible to collect hundreds of labeled samples per class for training deep learning-based SAR Automatic Target Recognition (ATR) models.
This work specifically tackles the few-shot SAR ATR problem, where only a handful of labeled samples may be available to support the task of interest.
- Score: 6.260916845720537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In real-world scenarios, it may not always be possible to collect hundreds of
labeled samples per class for training deep learning-based SAR Automatic Target
Recognition (ATR) models. This work specifically tackles the few-shot SAR ATR
problem, where only a handful of labeled samples may be available to support
the task of interest. Our approach is composed of two stages. In the first, a
global representation model is trained via self-supervised learning on a large
pool of diverse and unlabeled SAR data. In the second stage, the global model
is used as a fixed feature extractor and a classifier is trained to partition
the feature space given the few-shot support samples, while simultaneously
being calibrated to detect anomalous inputs. Unlike competing approaches which
require a pristine labeled dataset for pretraining via meta-learning, our
approach learns highly transferable features from unlabeled data that have
little-to-no relation to the downstream task. We evaluate our method in
standard and extended MSTAR operating conditions and find it to achieve high
accuracy and robust out-of-distribution detection in many different few-shot
settings. Our results are particularly significant because they show the merit
of a global model approach to SAR ATR, which makes minimal assumptions, and
provides many axes for extendability.
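To make the two-stage recipe concrete, the sketch below shows stage two under simple assumptions: the SSL-pretrained global model is frozen and used as a fixed feature extractor, a small linear head is fit on the few-shot support samples, and queries whose maximum softmax score falls below a calibrated threshold are flagged as out-of-distribution. The toy backbone, the max-softmax OOD rule, and all hyperparameters are illustrative assumptions, not the paper's exact design.
```python
# Minimal sketch of the two-stage few-shot pipeline (stage 2 only).
# Assumptions: toy backbone, max-softmax OOD rule, dummy data and hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalBackbone(nn.Module):
    """Stand-in for the SSL-pretrained global representation model (frozen in stage 2)."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
    def forward(self, x):
        return self.net(x)

def fit_few_shot_head(backbone, support_x, support_y, n_classes, steps=200, lr=1e-2):
    """Stage 2: freeze the backbone and fit a linear classifier on the support set."""
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        feats = backbone(support_x)              # fixed feature extractor
    head = nn.Linear(feats.shape[1], n_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(head(feats), support_y)
        loss.backward()
        opt.step()
    return head

def predict_with_ood(backbone, head, x, threshold=0.5):
    """Classify queries; flag inputs whose max softmax score falls below a calibrated
    threshold as anomalous (one simple OOD rule, not necessarily the paper's)."""
    with torch.no_grad():
        probs = F.softmax(head(backbone(x)), dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = -1                  # -1 marks "anomalous / OOD"
    return pred

if __name__ == "__main__":
    torch.manual_seed(0)
    backbone = GlobalBackbone()                  # assume SSL-pretrained weights loaded here
    support_x = torch.randn(10, 1, 64, 64)       # e.g. 5-way 2-shot SAR chips (dummy data)
    support_y = torch.arange(5).repeat_interleave(2)
    head = fit_few_shot_head(backbone, support_x, support_y, n_classes=5)
    queries = torch.randn(4, 1, 64, 64)
    print(predict_with_ood(backbone, head, queries))
```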
Related papers
- Learning from Different Samples: A Source-free Framework for Semi-supervised Domain Adaptation [20.172605920901777]
This paper designs a framework that uses different strategies to comprehensively mine different types of target samples.
We propose a novel source-free framework (SOUF) to achieve semi-supervised fine-tuning of the source pre-trained model on the target domain.
arXiv Detail & Related papers (2024-11-11T02:09:32Z)
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
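The DOKT entry above combines a diversity indicator with an uncertainty estimator to pick samples for labeling. The sketch below illustrates that general recipe only: it greedily selects unlabeled samples by a weighted sum of predictive entropy and distance to the already-selected set, which is a generic stand-in rather than DOKT's traceback indicators.
```python
# Generic active-learning selection combining an uncertainty term with a diversity
# term, in the spirit of (but not identical to) DOKT's indicators.
import numpy as np

def entropy(probs, eps=1e-12):
    """Predictive entropy per sample; probs has shape (n_samples, n_classes)."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_batch(features, probs, k, alpha=0.5):
    """Greedily pick k samples maximizing alpha*uncertainty + (1-alpha)*diversity,
    where diversity is the distance to the nearest already-selected sample."""
    n = features.shape[0]
    unc = entropy(probs)
    unc = (unc - unc.min()) / (unc.max() - unc.min() + 1e-12)   # normalize to [0, 1]
    selected = []
    for _ in range(k):
        if selected:
            d = np.linalg.norm(features[:, None, :] - features[selected][None, :, :], axis=2)
            div = d.min(axis=1)
            div = (div - div.min()) / (div.max() - div.min() + 1e-12)
        else:
            div = np.ones(n)
        score = alpha * unc + (1 - alpha) * div
        score[selected] = -np.inf                                # do not re-pick
        selected.append(int(score.argmax()))
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(100, 16))                           # unlabeled pool embeddings (dummy)
    logits = rng.normal(size=(100, 5))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(select_batch(feats, probs, k=5))
```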
- SARatrX: Towards Building A Foundation Model for SAR Target Recognition [22.770010893572973]
We make the first attempt towards building a foundation model for SAR ATR, termed SARatrX.
SARatrX learns generalizable representations via self-supervised learning (SSL) and provides a basis for label-efficient model adaptation to generic SAR target detection and classification tasks.
Specifically, SARatrX is trained on 0.18M unlabeled SAR target samples, which are curated by combining contemporary benchmarks and constitute the largest publicly available dataset to date.
arXiv Detail & Related papers (2024-05-15T14:17:44Z)
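The SARatrX entry above attributes its generalizable features to self-supervised pretraining on unlabeled SAR samples. As one common SSL recipe, and not necessarily the pretext task SARatrX actually uses, the sketch below runs a single SimCLR-style contrastive step with a toy encoder and a toy speckle-like augmentation.
```python
# One generic example of self-supervised pretraining on unlabeled SAR chips
# (SimCLR-style contrastive step); the paper's actual pretext task may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, dim))
    def forward(self, x):
        return F.normalize(self.proj(self.backbone(x)), dim=1)

def nt_xent(z1, z2, tau=0.2):
    """NT-Xent contrastive loss over two augmented views of the same batch."""
    z = torch.cat([z1, z2], dim=0)                    # (2N, dim), already normalized
    sim = z @ z.t() / tau                             # cosine similarities as logits
    n = z1.shape[0]
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])    # positive = other view
    return F.cross_entropy(sim, targets)

def augment(x):
    """Toy augmentation: random flip plus speckle-like multiplicative noise (illustrative)."""
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-1])
    return x * (1.0 + 0.1 * torch.randn_like(x))

if __name__ == "__main__":
    torch.manual_seed(0)
    enc = Encoder()
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    batch = torch.randn(16, 1, 64, 64)                # unlabeled SAR chips (dummy data)
    loss = nt_xent(enc(augment(batch)), enc(augment(batch)))
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))
```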
- Combining X-Vectors and Bayesian Batch Active Learning: Two-Stage Active Learning Pipeline for Speech Recognition [0.0]
This paper introduces a novel two-stage active learning pipeline for automatic speech recognition (ASR).
The first stage utilizes unsupervised AL by using x-vectors clustering for diverse sample selection from unlabeled speech data.
The second stage incorporates a supervised AL strategy, with a batch AL method specifically developed for ASR.
arXiv Detail & Related papers (2024-05-03T19:24:41Z)
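The first stage of the pipeline above selects diverse utterances by clustering x-vector embeddings. The sketch below shows one plausible reading of that step: k-means over placeholder embeddings with the utterance nearest each centroid chosen for labeling. The real x-vector extractor and the paper's exact selection rule are not reproduced here.
```python
# Diversity-driven sample selection by clustering utterance embeddings (e.g. x-vectors)
# and picking the utterance nearest each cluster centre. The x-vectors are random
# placeholders; a real pipeline would extract them with a speaker-embedding model.
import numpy as np
from sklearn.cluster import KMeans

def select_diverse(xvectors, n_select, seed=0):
    """Cluster the pool into n_select groups and return one representative per group."""
    km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(xvectors)
    chosen = []
    for c in range(n_select):
        members = np.flatnonzero(km.labels_ == c)
        dists = np.linalg.norm(xvectors[members] - km.cluster_centers_[c], axis=1)
        chosen.append(int(members[dists.argmin()]))    # utterance closest to this centroid
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pool = rng.normal(size=(500, 512))                 # placeholder 512-dim x-vectors
    print(select_diverse(pool, n_select=8))
```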
- GenCo: An Auxiliary Generator from Contrastive Learning for Enhanced Few-Shot Learning in Remote Sensing [9.504503675097137]
We introduce a generator-based contrastive learning framework (GenCo) that pre-trains backbones and simultaneously explores variants of feature samples.
In fine-tuning, the auxiliary generator can be used to enrich limited labeled data samples in feature space.
We demonstrate the effectiveness of our method in improving few-shot learning performance on two key remote sensing datasets.
arXiv Detail & Related papers (2023-07-27T03:59:19Z)
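The GenCo entry above enriches scarce labeled samples in feature space with variants produced by an auxiliary generator during fine-tuning. The toy sketch below shows that enrichment idea with a small residual generator trained jointly with a linear classifier; GenCo's actual generator architecture and its contrastive pretraining objective are not reproduced here.
```python
# Toy illustration of enriching few labeled samples in feature space with variants
# from an auxiliary generator; a stand-in for GenCo's generator, not its exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureGenerator(nn.Module):
    """Maps a feature vector plus noise to a nearby synthetic feature variant."""
    def __init__(self, dim=64, noise_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + noise_dim, 128), nn.ReLU(), nn.Linear(128, dim))
        self.noise_dim = noise_dim
    def forward(self, feats):
        z = torch.randn(feats.shape[0], self.noise_dim)
        return feats + self.net(torch.cat([feats, z], dim=1))   # residual variant

def finetune_with_enrichment(feats, labels, n_classes, variants_per_sample=4, steps=200):
    """Train a linear classifier on real support features plus generated variants."""
    gen = FeatureGenerator(dim=feats.shape[1])
    clf = nn.Linear(feats.shape[1], n_classes)
    opt = torch.optim.Adam(list(gen.parameters()) + list(clf.parameters()), lr=1e-3)
    for _ in range(steps):
        fake = torch.cat([gen(feats) for _ in range(variants_per_sample)], dim=0)
        fake_labels = labels.repeat(variants_per_sample)
        x = torch.cat([feats, fake], dim=0)
        y = torch.cat([labels, fake_labels], dim=0)
        loss = F.cross_entropy(clf(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return clf

if __name__ == "__main__":
    torch.manual_seed(0)
    feats = torch.randn(10, 64)                  # few-shot support features from a pretrained backbone (dummy)
    labels = torch.arange(5).repeat_interleave(2)
    clf = finetune_with_enrichment(feats, labels, n_classes=5)
```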
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- Adaptive Prototypical Networks with Label Words and Joint Representation Learning for Few-Shot Relation Classification [17.237331828747006]
This work focuses on few-shot relation classification (FSRC).
We propose an adaptive mixture mechanism to add label words to the representation of the class prototype.
Experiments have been conducted on FewRel under different few-shot (FS) settings.
arXiv Detail & Related papers (2021-01-10T11:25:42Z)
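The entry above augments class prototypes with label words. The sketch below is a minimal version of that idea: each prototype is a fixed-weight mixture of the mean support embedding and a label-word embedding, followed by nearest-prototype classification; the fixed weight stands in for the paper's adaptive mixture mechanism.
```python
# Minimal prototypical classification where each class prototype mixes the mean
# support embedding with an embedding of the class's label words. The fixed mixing
# weight alpha is a simplification of the paper's adaptive mechanism.
import torch
import torch.nn.functional as F

def build_prototypes(support_emb, support_y, label_word_emb, n_classes, alpha=0.7):
    """Prototype_c = alpha * mean(support embeddings of class c) + (1 - alpha) * label-word embedding."""
    protos = []
    for c in range(n_classes):
        mean_sup = support_emb[support_y == c].mean(dim=0)
        protos.append(alpha * mean_sup + (1 - alpha) * label_word_emb[c])
    return torch.stack(protos)

def classify(query_emb, prototypes):
    """Assign each query to the nearest prototype (negative Euclidean distance as logits)."""
    logits = -torch.cdist(query_emb, prototypes)
    return logits.argmax(dim=1), F.softmax(logits, dim=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    n_classes, dim = 5, 32
    support_emb = torch.randn(5 * 2, dim)             # 5-way 2-shot sentence embeddings (dummy)
    support_y = torch.arange(n_classes).repeat_interleave(2)
    label_word_emb = torch.randn(n_classes, dim)      # embeddings of each relation's label words (dummy)
    protos = build_prototypes(support_emb, support_y, label_word_emb, n_classes)
    preds, probs = classify(torch.randn(3, dim), protos)
    print(preds)
```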
- AutoAssign: Differentiable Label Assignment for Dense Object Detection [94.24431503373884]
AutoAssign is an anchor-free object detector.
It achieves appearance-aware label assignment through a fully differentiable weighting mechanism.
Our best model achieves 52.1% AP, outperforming all existing one-stage detectors.
arXiv Detail & Related papers (2020-07-07T14:32:21Z)
- UniT: Unified Knowledge Transfer for Any-shot Object Detection and Segmentation [52.487469544343305]
Methods for object detection and segmentation rely on large scale instance-level annotations for training.
We propose an intuitive and unified semi-supervised model that is applicable to a range of supervision levels.
arXiv Detail & Related papers (2020-06-12T22:45:47Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)