Specific Emitter Identification via Active Learning
- URL: http://arxiv.org/abs/2601.04502v1
- Date: Thu, 08 Jan 2026 02:16:04 GMT
- Title: Specific Emitter Identification via Active Learning
- Authors: Jingyi Wang, Fanggang Wang
- Abstract summary: We propose an SEI approach enhanced by active learning (AL), which follows a three-stage semi-supervised training scheme. In the first stage, self-supervised contrastive learning is employed with a dynamic dictionary update mechanism to extract robust representations from the unlabeled data. In the second stage, supervised training on a small labeled dataset is performed, where the contrastive and cross-entropy losses are jointly optimized to improve the feature separability. In the third stage, an AL module selects the most valuable samples from the unlabeled data for annotation based on the uncertainty and representativeness criteria.
- Score: 15.523945888913941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid growth of wireless communications, specific emitter identification (SEI) is significant for communication security. However, its model training relies heavily on large-scale labeled data, which are costly and time-consuming to obtain. To address this challenge, we propose an SEI approach enhanced by active learning (AL), which follows a three-stage semi-supervised training scheme. In the first stage, self-supervised contrastive learning is employed with a dynamic dictionary update mechanism to extract robust representations from large amounts of unlabeled data. In the second stage, supervised training on a small labeled dataset is performed, where the contrastive and cross-entropy losses are jointly optimized to improve the feature separability and strengthen the classification boundaries. In the third stage, an AL module selects the most valuable samples from the unlabeled data for annotation based on the uncertainty and representativeness criteria, further enhancing generalization under limited labeling budgets. Experimental results on the ADS-B and WiFi datasets demonstrate that the proposed SEI approach significantly outperforms conventional supervised and semi-supervised methods under limited annotation conditions, achieving higher recognition accuracy with lower labeling cost.
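The third-stage selection criterion can be illustrated with a small sketch. This is not the authors' code: it assumes predictive entropy as the uncertainty measure, cosine similarity to the unlabeled pool's centroid as a simple representativeness proxy, and a hypothetical mixing weight `alpha` to combine the two criteria before picking a labeling batch.

```python
import numpy as np

def al_select(probs, feats, k, alpha=0.5):
    """Score unlabeled samples by uncertainty + representativeness
    and return the indices of the k best candidates for annotation."""
    eps = 1e-12

    # Uncertainty: predictive entropy of the softmax outputs
    # (higher entropy = model is less certain about this sample).
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)

    # Representativeness: cosine similarity to the unlabeled pool's
    # mean feature (samples near the centroid stand in for many others).
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    centroid = normed.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    rep = normed @ centroid

    # Min-max normalize both criteria, then take a convex combination.
    def minmax(x):
        return (x - x.min()) / (x.max() - x.min() + eps)

    score = alpha * minmax(entropy) + (1 - alpha) * minmax(rep)

    # Highest-scoring samples are sent for annotation.
    return np.argsort(score)[::-1][:k]
```

With `alpha=0.5` this trades off the two criteria equally; in practice the balance (and the representativeness measure itself, e.g. cluster-based density instead of a single centroid) would be tuned per dataset.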
Related papers
- Hierarchical Semi-Supervised Active Learning for Remote Sensing [20.305328912143054]
We propose a Hierarchical Semi-Supervised Active Learning (HSSAL) framework that integrates semi-supervised learning (SSL) and a novel hierarchical active learning (HAL) scheme. In each iteration, SSL refines the model using both labeled data through supervised learning and unlabeled data via weak-to-strong self-training, improving feature representation and uncertainty estimation. Experiments on three benchmark RS scene classification datasets, including UCM, AID, and NWPU-RESISC45, demonstrate that HSSAL consistently outperforms SSL- or AL-only baselines.
arXiv Detail & Related papers (2025-11-22T13:25:42Z) - ConformalSAM: Unlocking the Potential of Foundational Segmentation Models in Semi-Supervised Semantic Segmentation with Conformal Prediction [57.930531826380836]
This work explores whether a foundational segmentation model can address label scarcity in the pixel-level vision task as an annotator for unlabeled images. We propose ConformalSAM, a novel SSSS framework which first calibrates the foundation model using the target domain's labeled data and then filters out unreliable pixel labels of unlabeled data.
arXiv Detail & Related papers (2025-07-21T17:02:57Z) - The Achilles Heel of AI: Fundamentals of Risk-Aware Training Data for High-Consequence Models [0.0]
AI systems in high-consequence domains must detect rare, high-impact events while operating under tight resource constraints. Traditional annotation strategies that prioritize label volume over informational value introduce redundancy and noise. This paper introduces smart-sizing, a training data strategy that emphasizes label diversity, model-guided selection, and marginal utility-based stopping.
arXiv Detail & Related papers (2025-05-20T22:57:35Z) - Enhancing Classification with Semi-Supervised Deep Learning Using Distance-Based Sample Weights [0.0]
This work proposes a semi-supervised framework that prioritizes training samples based on their proximity to test data. Experiments on twelve benchmark datasets demonstrate significant improvements across key metrics, including accuracy, precision, and recall. This framework provides a robust and practical solution for semi-supervised learning, with potential applications in domains such as healthcare and security.
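As a rough illustration of the idea (not the paper's implementation), one plausible distance-based weighting applies a Gaussian kernel to each training sample's distance to its nearest test sample; `sigma` is a hypothetical bandwidth parameter.

```python
import numpy as np

def distance_weights(train_feats, test_feats, sigma=1.0):
    """Weight training samples by proximity to the test feature set."""
    # Pairwise Euclidean distances: (n_train, n_test).
    d = np.linalg.norm(
        train_feats[:, None, :] - test_feats[None, :, :], axis=2
    )
    # Distance from each training sample to its nearest test sample.
    nearest = d.min(axis=1)
    # Gaussian kernel: closer to the test distribution -> larger weight.
    w = np.exp(-(nearest ** 2) / (2 * sigma ** 2))
    # Normalize so the weights sum to 1 and can scale per-sample losses.
    return w / w.sum()
```

The resulting weights could then multiply the per-sample training loss, emphasizing examples that resemble the test distribution.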
arXiv Detail & Related papers (2025-05-20T13:29:04Z) - Feedback-Driven Pseudo-Label Reliability Assessment: Redefining Thresholding for Semi-Supervised Semantic Segmentation [5.7977777220041204]
A common practice in pseudo-supervision is filtering pseudo-labels based on pre-defined confidence thresholds or entropy. We propose Ensemble-of-Confidence Reinforcement (ENCORE), a dynamic feedback-driven thresholding strategy for pseudo-label selection. Our method seamlessly integrates into existing pseudo-supervision frameworks and significantly improves segmentation performance.
arXiv Detail & Related papers (2025-05-12T15:58:08Z) - 3DResT: A Strong Baseline for Semi-Supervised 3D Referring Expression Segmentation [73.877177695218]
3D Referring Expression Segmentation (3D-RES) typically requires extensive instance-level annotations, which are time-consuming and costly. Semi-supervised learning (SSL) mitigates this by using limited labeled data alongside abundant unlabeled data, improving performance while reducing annotation costs. In this paper, we introduce the first semi-supervised learning framework for 3D-RES, presenting a robust baseline method named 3DResT.
arXiv Detail & Related papers (2025-04-17T02:50:52Z) - ACTRESS: Active Retraining for Semi-supervised Visual Grounding [52.08834188447851]
A previous study, RefTeacher, makes the first attempt to tackle this task by adopting the teacher-student framework to provide pseudo confidence supervision and attention-based supervision.
This approach is incompatible with current state-of-the-art visual grounding models, which follow the Transformer-based pipeline.
Our paper proposes the ACTive REtraining approach for Semi-Supervised Visual Grounding, abbreviated as ACTRESS.
arXiv Detail & Related papers (2024-07-03T16:33:31Z) - Incremental Self-training for Semi-supervised Learning [56.57057576885672]
IST is simple yet effective and fits existing self-training-based semi-supervised learning methods.
We verify the proposed IST on five datasets and two types of backbone, effectively improving the recognition accuracy and learning speed.
arXiv Detail & Related papers (2024-04-14T05:02:00Z) - Implicit Counterfactual Data Augmentation for Robust Learning [24.795542869249154]
This study proposes an Implicit Counterfactual Data Augmentation method to remove spurious correlations and make stable predictions. Experiments have been conducted across various biased learning scenarios covering both image and text datasets.
arXiv Detail & Related papers (2023-04-26T10:36:40Z) - Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - Open-Set Semi-Supervised Learning for 3D Point Cloud Understanding [62.17020485045456]
It is commonly assumed in semi-supervised learning (SSL) that the unlabeled data are drawn from the same distribution as that of the labeled ones.
We propose to selectively utilize unlabeled data through sample weighting, so that only conducive unlabeled data are prioritized.
arXiv Detail & Related papers (2022-05-02T16:09:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.