Noise-Aware and Dynamically Adaptive Federated Defense Framework for SAR Image Target Recognition
- URL: http://arxiv.org/abs/2601.00900v1
- Date: Wed, 31 Dec 2025 17:24:15 GMT
- Title: Noise-Aware and Dynamically Adaptive Federated Defense Framework for SAR Image Target Recognition
- Authors: Yuchao Hou, Zixuan Zhang, Jie Wang, Wenke Huang, Lianhui Liang, Di Wu, Zhiquan Liu, Youliang Tian, Jianming Zhu, Jisheng Dang, Junhao Dong, Zhongliang Guo
- Abstract summary: Federated learning (FL) provides an emerging computational intelligence paradigm for SAR image target recognition. However, FL confronts critical security risks, where malicious clients can exploit SAR's multiplicative speckle noise to conceal backdoor triggers. We propose NADAFD, a noise-aware and dynamically adaptive federated defense framework that integrates frequency-domain, spatial-domain, and client-behavior analyses.
- Score: 41.71964972109509
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a critical application of computational intelligence in remote sensing, deep learning-based synthetic aperture radar (SAR) image target recognition facilitates intelligent perception but typically relies on centralized training, where multi-source SAR data are uploaded to a single server, raising privacy and security concerns. Federated learning (FL) provides an emerging computational intelligence paradigm for SAR image target recognition, enabling cross-site collaboration while preserving local data privacy. However, FL confronts critical security risks, where malicious clients can exploit SAR's multiplicative speckle noise to conceal backdoor triggers, severely challenging the robustness of the computational intelligence model. To address this challenge, we propose NADAFD, a noise-aware and dynamically adaptive federated defense framework that integrates frequency-domain, spatial-domain, and client-behavior analyses to counter SAR-specific backdoor threats. Specifically, we introduce a frequency-domain collaborative inversion mechanism to expose cross-client spectral inconsistencies indicative of hidden backdoor triggers. We further design a noise-aware adversarial training strategy that embeds $\Gamma$-distributed speckle characteristics into mask-guided adversarial sample generation to enhance robustness against both backdoor attacks and SAR speckle noise. In addition, we present a dynamic health assessment module that tracks client update behaviors across training rounds and adaptively adjusts aggregation weights to mitigate evolving malicious contributions. Experiments on MSTAR and OpenSARShip datasets demonstrate that NADAFD achieves higher accuracy on clean test samples and a lower backdoor attack success rate on triggered inputs than existing federated backdoor defenses for SAR target recognition.
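The abstract names three mechanisms but gives no implementation detail. The Python sketch below illustrates two of them in a deliberately simplified form: multiplicative $\Gamma$-distributed speckle injected only inside a trigger-candidate mask (the noise-aware adversarial sample generation) and a running health score that rescales client aggregation weights (the dynamic health assessment). The function names, the number-of-looks parameter, the EMA momentum, and the softmax weighting are illustrative assumptions, not the authors' published method.

```python
# Illustrative sketch only; parameters and weighting rules are assumptions,
# not the NADAFD implementation.
import numpy as np


def gamma_speckle_perturb(image, mask, looks=4.0, rng=None):
    """Inject multiplicative Gamma-distributed speckle into the masked region.

    For an L-look SAR intensity image, speckle is commonly modelled as a
    multiplicative Gamma(L, 1/L) factor with unit mean and variance 1/L.
    """
    rng = np.random.default_rng() if rng is None else rng
    image = np.asarray(image, dtype=np.float64)
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    # Apply speckle only where the mask is active; leave the rest untouched.
    return image * np.where(np.asarray(mask) > 0, speckle, 1.0)


def health_weighted_aggregate(updates, anomaly_scores, prev_health,
                              momentum=0.8, temperature=1.0):
    """Aggregate client updates with weights driven by a running health score.

    Clients whose updates look anomalous in the current round see their health
    decay via an exponential moving average, which shrinks their weight.
    """
    anomaly_scores = np.asarray(anomaly_scores, dtype=np.float64)
    health = momentum * np.asarray(prev_health) + (1.0 - momentum) * (1.0 - anomaly_scores)
    logits = health / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    aggregated = sum(w * u for w, u in zip(weights, updates))
    return aggregated, health
```

A server-side loop would call `health_weighted_aggregate` once per round, feeding in per-client anomaly scores produced by whatever frequency- or spatial-domain analysis is in use.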
Related papers
- Multi-Agent-Driven Cognitive Secure Communications in Satellite-Terrestrial Networks [58.70163955407538]
Malicious eavesdroppers pose a serious threat to private information in satellite-terrestrial networks (STNs). We propose a cognitive secure communication framework driven by multiple agents that coordinates spectrum scheduling and protection through real-time sensing. We exploit generative adversarial networks to produce adversarial matrices, and employ learning-aided power control to set the real and adversarial signal powers for the protection layer.
arXiv Detail & Related papers (2026-01-06T10:30:41Z)
- FAROS: Robust Federated Learning with Adaptive Scaling against Backdoor Attacks [9.466036066320946]
Backdoor attacks pose a significant threat to Federated Learning (FL). We propose FAROS, an enhanced FL framework that incorporates Adaptive Differential Scaling (ADS) and Robust Core-set Computing (RCC). RCC effectively mitigates the risk of single-point failure by computing the centroid of a core set comprising clients with the highest confidence.
arXiv Detail & Related papers (2026-01-05T06:55:35Z)
- Towards Effective, Stealthy, and Persistent Backdoor Attacks Targeting Graph Foundation Models [62.87838888016534]
Graph Foundation Models (GFMs) are pre-trained on diverse source domains and adapted to unseen targets. Backdoor attacks against GFMs are non-trivial due to three key challenges. We propose GFM-BA, a novel Backdoor Attack model against Graph Foundation Models.
arXiv Detail & Related papers (2025-11-22T08:52:09Z)
- Your RAG is Unfair: Exposing Fairness Vulnerabilities in Retrieval-Augmented Generation via Backdoor Attacks [13.32144267469022]
Retrieval-augmented generation (RAG) enhances factual grounding by integrating retrieval mechanisms with generative models. This paper introduces BiasRAG, a systematic framework that exposes fairness vulnerabilities in RAG through a two-phase backdoor attack. In the pre-training phase, the query encoder is compromised to align the target group with the intended social bias, ensuring long-term persistence. In the post-deployment phase, adversarial documents are injected into knowledge bases to reinforce the backdoor.
arXiv Detail & Related papers (2025-09-26T15:33:36Z)
- SRD: Reinforcement-Learned Semantic Perturbation for Backdoor Defense in VLMs [57.880467106470775]
Attackers can inject imperceptible perturbations into the training data, causing the model to generate malicious, attacker-controlled captions. We propose Semantic Reward Defense (SRD), a reinforcement learning framework that mitigates backdoor behavior without prior knowledge of triggers. SRD uses a Deep Q-Network to learn policies for applying discrete perturbations to sensitive image regions, aiming to disrupt the activation of malicious pathways.
arXiv Detail & Related papers (2025-06-05T08:22:24Z)
- Defending the Edge: Representative-Attention for Mitigating Backdoor Attacks in Federated Learning [7.808916974942399]
Heterogeneous edge devices produce diverse, non-independent and identically distributed (non-IID) data. We propose a novel representative-attention-based defense mechanism, named FeRA, to distinguish benign from malicious clients. Our evaluation demonstrates FeRA's robustness across various FL scenarios, including challenging non-IID data distributions typical of edge devices.
arXiv Detail & Related papers (2025-05-15T13:44:32Z)
- AdvSwap: Covert Adversarial Perturbation with High Frequency Info-swapping for Autonomous Driving Perception [14.326474757036925]
This paper introduces a novel adversarial attack method, AdvSwap, which creatively utilizes wavelet-based high-frequency information swapping. The scheme effectively removes the original label data and incorporates the guidance image data, producing concealed and robust adversarial samples. The generated adversarial samples are also difficult for humans and algorithms to perceive.
arXiv Detail & Related papers (2025-02-12T13:05:35Z)
- Let the Noise Speak: Harnessing Noise for a Unified Defense Against Adversarial and Backdoor Attacks [31.291700348439175]
Malicious data manipulation attacks against machine learning jeopardize its reliability in safety-critical applications. We propose NoiSec, a reconstruction-based intrusion detection system. NoiSec disentangles the noise from the test input, extracts the underlying features from the noise, and leverages them to recognize systematic malicious manipulation.
arXiv Detail & Related papers (2024-06-18T21:44:51Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model; a minimal frequency-domain filtering sketch follows the related-papers list below.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without directly sharing data with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
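The FreqFed entry above only states that model updates are transformed into the frequency domain before aggregation. The sketch below shows one plausible instantiation of that idea in Python; the DCT, the number of retained coefficients, and the median-distance cutoff are assumptions for illustration and do not reproduce FreqFed's published filtering pipeline.

```python
# Hypothetical frequency-domain filtering of client updates, loosely in the
# spirit of FreqFed; the cutoff rule and coefficient count are assumed.
import numpy as np
from scipy.fft import dct


def low_freq_signature(update, keep=64):
    """Flatten a model update and keep its lowest-frequency DCT coefficients."""
    coeffs = dct(np.asarray(update, dtype=np.float64).ravel(), norm="ortho")
    return coeffs[:keep]


def filter_updates(updates, keep=64):
    """Drop updates whose low-frequency signature lies far from the median signature."""
    sigs = np.stack([low_freq_signature(u, keep) for u in updates])
    median_sig = np.median(sigs, axis=0)
    dists = np.linalg.norm(sigs - median_sig, axis=1)
    threshold = np.median(dists) + 2.0 * dists.std()  # simple, assumed cutoff
    return [u for u, d in zip(updates, dists) if d <= threshold]
```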