IPBA: Imperceptible Perturbation Backdoor Attack in Federated Self-Supervised Learning
- URL: http://arxiv.org/abs/2508.08031v1
- Date: Mon, 11 Aug 2025 14:36:11 GMT
- Title: IPBA: Imperceptible Perturbation Backdoor Attack in Federated Self-Supervised Learning
- Authors: Jiayao Wang, Yang Song, Zhendong Zhao, Jiale Zhang, Qilin Wu, Junwu Zhu, Dongfang Zhao
- Abstract summary: Federated self-supervised learning (FSSL) combines the advantages of decentralized modeling and unlabeled representation learning. Research indicates that FSSL remains vulnerable to backdoor attacks. We propose an imperceptible and effective backdoor attack method against FSSL, called IPBA.
- Score: 13.337697403537488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated self-supervised learning (FSSL) combines the advantages of decentralized modeling and unlabeled representation learning, serving as a cutting-edge paradigm with strong potential for scalability and privacy preservation. Although FSSL has garnered increasing attention, research indicates that it remains vulnerable to backdoor attacks. Existing methods generally rely on visually obvious triggers, which makes it difficult to meet the requirements for stealth and practicality in real-world deployment. In this paper, we propose an imperceptible and effective backdoor attack method against FSSL, called IPBA. Our empirical study reveals that existing imperceptible triggers face a series of challenges in FSSL, particularly limited transferability, feature entanglement with augmented samples, and out-of-distribution properties. These issues collectively undermine the effectiveness and stealthiness of traditional backdoor attacks in FSSL. To overcome these challenges, IPBA decouples the feature distributions of backdoor and augmented samples, and introduces Sliced-Wasserstein distance to mitigate the out-of-distribution properties of backdoor samples, thereby optimizing the trigger generation process. Our experimental results on several FSSL scenarios and datasets show that IPBA significantly outperforms existing backdoor attack methods in performance and exhibits strong robustness under various defense mechanisms.
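The abstract states that IPBA uses the Sliced-Wasserstein distance to pull backdoor samples back toward the clean feature distribution. As a minimal sketch of that building block (a generic Monte Carlo estimator of the Sliced-Wasserstein distance between two feature sets, not the paper's actual trigger-optimization code; the function name and parameters are illustrative assumptions):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=50, seed=0):
    """Approximate the Sliced-Wasserstein (W1) distance between two
    equally sized point clouds X, Y of shape (n, d).

    Each random unit direction reduces the d-dimensional distributions
    to 1-D samples, where the Wasserstein-1 distance has a closed form
    (mean absolute difference of sorted values); averaging over
    directions gives the sliced estimate.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Sample random unit directions on the d-dimensional sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sets onto each direction and sort (1-D optimal matching).
    xp = np.sort(X @ theta.T, axis=0)
    yp = np.sort(Y @ theta.T, axis=0)
    # Average the per-slice W1 distances over all projections.
    return np.mean(np.abs(xp - yp))
```

In an attack like the one described, a differentiable version of this quantity could serve as a regularizer so that triggered samples are not flagged as out-of-distribution in feature space.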
Related papers
- ADCA: Attention-Driven Multi-Party Collusion Attack in Federated Self-Supervised Learning [9.410118086518992]
Federated Self-Supervised Learning (FSSL) integrates the privacy advantages of distributed training with the capability of self-supervised learning. Recent studies have shown that FSSL is also vulnerable to backdoor attacks. We propose the Attention-Driven multi-party Collusion Attack (ADCA).
arXiv Detail & Related papers (2026-02-05T12:49:36Z) - HPE: Hallucinated Positive Entanglement for Backdoor Attacks in Federated Self-Supervised Learning [11.615563669883072]
Federated self-supervised learning (FSSL) enables collaborative training of self-supervised representation models without sharing raw unlabeled data. While it serves as a crucial paradigm for privacy-preserving learning, its security remains vulnerable to backdoor attacks. We propose a new backdoor attack method for FSSL, namely Hallucinated Positive Entanglement (HPE).
arXiv Detail & Related papers (2026-02-02T14:24:06Z) - FORCE: Transferable Visual Jailbreaking Attacks via Feature Over-Reliance CorrEction [82.6826848085638]
Visual jailbreaking attacks can manipulate open-source MLLMs more readily than sophisticated textual attacks. These attacks exhibit extremely limited cross-model transferability, failing to reliably identify vulnerabilities in closed-source MLLMs. We propose a Feature Over-Reliance CorrEction (FORCE) method, which guides the attack to explore broader feasible regions.
arXiv Detail & Related papers (2025-09-25T11:36:56Z) - ICLShield: Exploring and Mitigating In-Context Learning Backdoor Attacks [61.06621533874629]
In-context learning (ICL) has demonstrated remarkable success in large language models (LLMs). In this paper, we propose, for the first time, the dual-learning hypothesis, which posits that LLMs simultaneously learn both the task-relevant latent concepts and backdoor latent concepts. Motivated by these findings, we propose ICLShield, a defense mechanism that dynamically adjusts the concept preference ratio.
arXiv Detail & Related papers (2025-07-02T03:09:20Z) - SPA: Towards More Stealth and Persistent Backdoor Attacks in Federated Learning [10.924427077035915]
Federated Learning (FL) has emerged as a leading paradigm for privacy-preserving distributed machine learning, yet the distributed nature of FL introduces unique security challenges. We propose a novel and stealthy backdoor attack framework, named SPA, which departs from traditional approaches by leveraging feature-space alignment. Our results call urgent attention to the evolving sophistication of backdoor threats in FL and emphasize the pressing need for advanced, feature-level defense techniques.
arXiv Detail & Related papers (2025-06-26T01:33:14Z) - Celtibero: Robust Layered Aggregation for Federated Learning [0.0]
We introduce Celtibero, a novel defense mechanism that integrates layered aggregation to enhance robustness against adversarial manipulation.
We demonstrate that Celtibero consistently achieves high main task accuracy (MTA) while maintaining minimal attack success rates (ASR) across a range of untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2024-08-26T12:54:00Z) - GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning [1.9632700283749582]
This paper introduces a novel defense mechanism against backdoor attacks in federated learning, named GANcrop.
Experimental findings demonstrate that GANcrop effectively safeguards against backdoor attacks, particularly in non-IID scenarios.
arXiv Detail & Related papers (2024-05-31T09:33:16Z) - EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z) - Let's Focus: Focused Backdoor Attack against Federated Transfer Learning [12.68970864847173]
Federated Transfer Learning (FTL) is the most general variation of Federated Learning.
In this paper, we investigate this intriguing Federated Learning scenario to identify and exploit a vulnerability.
The proposed attack can be carried out by one of the clients during the Federated Learning phase of FTL.
arXiv Detail & Related papers (2024-04-30T10:11:44Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients)
We propose a trigger reverse engineering based defense and show that our method can achieve improvements with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
arXiv Detail & Related papers (2021-06-15T16:50:54Z) - Curse or Redemption? How Data Heterogeneity Affects the Robustness of Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features in federated learning but often overlooked in the lens of robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdooring attacks in federated learning through comprehensive experiments using synthetic and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z) - Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.