Multi-Level Heterogeneous Knowledge Transfer Network on Forward Scattering Center Model for Limited Samples SAR ATR
- URL: http://arxiv.org/abs/2509.23596v1
- Date: Sun, 28 Sep 2025 03:04:04 GMT
- Title: Multi-Level Heterogeneous Knowledge Transfer Network on Forward Scattering Center Model for Limited Samples SAR ATR
- Authors: Chenxi Zhao, Daochang Wang, Siqian Zhang, Gangyao Kuang
- Abstract summary: This work explores a new form of simulated data to migrate purer and key target knowledge: the forward scattering center model (FSCM). To achieve this purpose, a multi-level heterogeneous knowledge transfer network is proposed, which fully migrates FSCM knowledge at the feature, distribution and category levels. Notably, extensive experiments on two new datasets formed by FSCM data and measured SAR images demonstrate the superior performance of our method.
- Score: 10.701687030427422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulated data-assisted SAR target recognition methods are currently a research hotspot, devoted to solving the problem of limited samples. Existing works revolve around simulated images, but the large amount of irrelevant information embedded in the images, such as background and noise, seriously degrades the quality of the migrated information. Our work explores a new form of simulated data to migrate purer and key target knowledge: the forward scattering center model (FSCM), which models the actual local structure of the target with strong physical meaning and interpretability. To achieve this purpose, a multi-level heterogeneous knowledge transfer (MHKT) network is proposed, which fully migrates FSCM knowledge at the feature, distribution and category levels, respectively. Specifically, the task-associated information selector (TAIS) learns feature representations better suited to the heterogeneous data and separates out non-informative knowledge, completing purer target feature migration. For distribution alignment, a new metric function, maximum discrimination divergence (MDD), in the target generic knowledge transfer (TGKT) module perceives transferable knowledge efficiently while preserving the discriminative structure of the classes. Moreover, the category relation knowledge transfer (CRKT) module leverages a category relation consistency constraint to break the optimization bias towards simulated data caused by the imbalance between simulated and measured data. Such stepwise knowledge selection and migration ensures the integrity of the migrated FSCM knowledge. Notably, extensive experiments on two new datasets formed by FSCM data and measured SAR images demonstrate the superior performance of our method.
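The distribution-alignment idea in the abstract can be illustrated with a standard discrepancy metric. The sketch below uses squared maximum mean discrepancy (MMD) between a simulated (FSCM) feature batch and a measured feature batch as a stand-in: the paper's MDD additionally preserves class-discriminative structure, which is not modeled here, and the function names, shapes, and bandwidth are assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(x, y, sigma):
    # Pairwise RBF kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(sim_feats, meas_feats, sigma=4.0):
    """Squared maximum mean discrepancy between two feature batches.

    A generic transferability gap; an alignment loss would minimize it
    so that simulated and measured features share one distribution.
    """
    kxx = rbf_kernel(sim_feats, sim_feats, sigma)
    kyy = rbf_kernel(meas_feats, meas_feats, sigma)
    kxy = rbf_kernel(sim_feats, meas_feats, sigma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
sim = rng.normal(0.0, 1.0, size=(64, 8))    # stand-in for FSCM features
meas = rng.normal(0.5, 1.0, size=(64, 8))   # stand-in for measured-SAR features
print(mmd2(sim, sim) < 1e-9)                # identical batches -> ~0
print(mmd2(sim, meas) > mmd2(sim, sim))     # shifted batch -> larger gap
```

Minimizing such a discrepancy alone can collapse class boundaries, which is presumably why MDD is designed to keep the discriminative structure while aligning domains.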
Related papers
- Transfer Learning for Benign Overfitting in High-Dimensional Linear Regression [7.414126402359073]
We study the intersection of transfer learning and the minimum-$\ell$-norm interpolator (MNI) in high-dimensional linear regression. Our research bridges the gap by proposing a novel two-step Transfer MNI approach and analyzing its trade-offs.
arXiv Detail & Related papers (2025-10-17T05:58:16Z)
- Quantifying Dataset Similarity to Guide Transfer Learning [1.6328866317851185]
The Cross-Learning Score (CLS) measures dataset similarity through bidirectional performance between domains. CLS can reliably predict whether transfer will improve or degrade performance. CLS is efficient and fast to compute, as it bypasses the problem of expensive distribution estimation for high-dimensional problems.
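One plausible reading of a bidirectional similarity score is sketched below: train a simple classifier on each domain, evaluate it on the other, and average the two accuracies. The nearest-centroid classifier and the averaging rule are illustrative assumptions; the published CLS may combine the directions differently.

```python
import numpy as np

def centroid_accuracy(train_x, train_y, test_x, test_y):
    # Nearest-class-centroid classifier: a cheap stand-in for the
    # cross-domain models a bidirectional score is built from.
    classes = np.unique(train_y)
    cents = np.stack([train_x[train_y == c].mean(0) for c in classes])
    pred = classes[np.argmin(((test_x[:, None] - cents) ** 2).sum(-1), axis=1)]
    return (pred == test_y).mean()

def cross_learning_score(xa, ya, xb, yb):
    """Bidirectional transfer score: average of A->B and B->A accuracy."""
    return 0.5 * (centroid_accuracy(xa, ya, xb, yb)
                  + centroid_accuracy(xb, yb, xa, ya))

rng = np.random.default_rng(1)
# Two similar two-class domains: shared class means, small domain shift.
xa = np.concatenate([rng.normal(0, 0.3, (50, 4)), rng.normal(2, 0.3, (50, 4))])
ya = np.array([0] * 50 + [1] * 50)
xb = xa + 0.1          # near-identical domain
score = cross_learning_score(xa, ya, xb, ya)
print(score > 0.9)     # similar domains -> high score, transfer should help
```

A low score in both directions would suggest the domains are dissimilar and that transfer is likely to degrade performance.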
arXiv Detail & Related papers (2025-10-13T00:18:35Z)
- SC-GIR: Goal-oriented Semantic Communication via Invariant Representation Learning [59.45312293893698]
Goal-oriented semantic communication (SC) aims to revolutionize communication systems by transmitting only task-essential information. We propose a novel framework called Goal-oriented Invariant Representation-based SC (SC-GIR) for image transmission.
arXiv Detail & Related papers (2025-09-01T04:29:43Z)
- Enhancing Scene Classification in Cloudy Image Scenarios: A Collaborative Transfer Method with Information Regulation Mechanism using Optical Cloud-Covered and SAR Remote Sensing Images [6.35948253619752]
This study presents a scene classification transfer method that combines multi-modality data. It aims to transfer the source-domain model trained on cloud-free optical data to a target domain that includes both cloudy optical and SAR data, at low cost.
arXiv Detail & Related papers (2025-01-08T05:14:36Z)
- FTA-FTL: A Fine-Tuned Aggregation Federated Transfer Learning Scheme for Lithology Microscopic Image Classification [4.245694283697248]
This study involves two phases: the first conducts lithology microscopic image classification on a small dataset using transfer learning. In the second phase, the classification task is formulated as a federated transfer learning scheme with a proposed Fine-Tuned Aggregation strategy for Federated Learning (FTA-FTL). The results are in excellent agreement, confirm the efficiency of the proposed scheme, and show that the FTA-FTL algorithm achieves approximately the same results as a centralized implementation of the lithology microscopic image classification task.
arXiv Detail & Related papers (2025-01-06T19:32:14Z)
- MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities [72.05167902805405]
We present MergeNet, which learns to bridge the gap between the parameter spaces of heterogeneous models. The core mechanism of MergeNet lies in the parameter adapter, which operates by querying the source model's low-rank parameters. MergeNet is learned alongside both models, allowing our framework to dynamically transfer and adapt knowledge relevant to the current stage.
arXiv Detail & Related papers (2024-04-20T08:34:39Z)
- Enhancing Information Maximization with Distance-Aware Contrastive Learning for Source-Free Cross-Domain Few-Shot Learning [55.715623885418815]
Cross-Domain Few-Shot Learning (CDFSL) methods require access to source-domain data to train a model in the pre-training phase.
Due to increasing concerns about data privacy and the desire to reduce data transmission and training costs, it is necessary to develop a CDFSL solution without accessing source data.
This paper proposes an Enhanced Information Maximization with Distance-Aware Contrastive Learning method to address these challenges.
arXiv Detail & Related papers (2024-03-04T12:10:24Z)
- FLASH: Federated Learning Across Simultaneous Heterogeneities [55.0981921695672]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm. It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, achieving substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- Rotated Multi-Scale Interaction Network for Referring Remote Sensing Image Segmentation [63.15257949821558]
Referring Remote Sensing Image Segmentation (RRSIS) is a new challenge that combines computer vision and natural language processing.
Traditional Referring Image Segmentation (RIS) approaches have been impeded by the complex spatial scales and orientations found in aerial imagery.
We introduce the Rotated Multi-Scale Interaction Network (RMSIN), an innovative approach designed for the unique demands of RRSIS.
arXiv Detail & Related papers (2023-12-19T08:14:14Z)
- CDKT-FL: Cross-Device Knowledge Transfer using Proxy Dataset in Federated Learning [27.84845136697669]
We develop a novel knowledge distillation-based approach to study the extent of knowledge transfer between the global model and local models.
We show the proposed method achieves significant speedups and high personalized performance of local models.
arXiv Detail & Related papers (2022-04-04T14:49:19Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer while fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves on standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
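The regularized fine-tuning idea behind TRED can be sketched as a combined objective: task cross-entropy plus a penalty keeping the current features near the target-relevant representation distilled from the source model. The disentanglement step itself is not reproduced here, and `tred_style_loss` with its signature is a hypothetical illustration, not the authors' code.

```python
import numpy as np

def tred_style_loss(logits, labels, feats, disentangled_feats, lam=0.1):
    """Fine-tuning objective with a feature-matching regularizer.

    lam weighs the penalty for drifting away from the disentangled
    target-task features extracted from the source model.
    """
    # Numerically stable cross-entropy over softmax probabilities.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(labels)), labels].mean()
    # Penalize drift from the target-relevant representation.
    reg = ((feats - disentangled_feats) ** 2).mean()
    return ce + lam * reg

logits = np.array([[5.0, 0.0], [0.0, 5.0]])   # confident, correct predictions
labels = np.array([0, 1])
f = np.zeros((2, 3))                          # toy feature batch
low = tred_style_loss(logits, labels, f, f)          # features match -> low loss
high = tred_style_loss(logits, labels, f, f + 1.0)   # drifted features -> higher
print(low < high)
```

Setting `lam` to zero recovers plain fine-tuning, which is the baseline the reported 2% average improvement is measured against.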
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.