Vulnerabilities of Foundation Model Integrated Federated Learning Under Adversarial Threats
- URL: http://arxiv.org/abs/2401.10375v2
- Date: Tue, 2 Apr 2024 01:31:24 GMT
- Title: Vulnerabilities of Foundation Model Integrated Federated Learning Under Adversarial Threats
- Authors: Chen Wu, Xi Li, Jiaqi Wang,
- Abstract summary: Federated Learning (FL) addresses critical issues in machine learning related to data privacy and security, yet suffers from data insufficiency and imbalance under certain circumstances.
The emergence of foundation models (FMs) offers potential solutions to the limitations of existing FL frameworks.
We conduct the first investigation on the vulnerability of FM integrated FL (FM-FL) under adversarial threats.
- Score: 34.51922824730864
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) addresses critical issues in machine learning related to data privacy and security, yet it suffers from data insufficiency and imbalance under certain circumstances. The emergence of foundation models (FMs) offers potential solutions to the limitations of existing FL frameworks, e.g., by generating synthetic data for model initialization. However, due to the inherent safety concerns of FMs, integrating FMs into FL could introduce new risks, an issue that remains largely unexplored. To address this gap, we conduct the first investigation of the vulnerability of FM integrated FL (FM-FL) under adversarial threats. Based on a unified framework of FM-FL, we introduce a novel attack strategy that exploits safety issues of FMs to compromise FL client models. Through extensive experiments with well-known models and benchmark datasets in both the image and text domains, we reveal the high susceptibility of FM-FL to this new threat under various FL configurations. Furthermore, we find that existing FL defense strategies offer limited protection against this novel attack. This research highlights the critical need for enhanced security measures in FL in the era of FMs.
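The attack surface described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical NumPy example (toy logistic-regression clients, made-up dimensions and constants) of an FM-FL pipeline in which a compromised foundation model seeds client training with backdoored synthetic data; it is not the paper's actual attack implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fm_generate_synthetic(n, poison_frac=0.1, trigger_value=5.0, target_label=0):
    """Hypothetical FM acting as a synthetic-data generator for FL initialization.
    A compromised FM can embed a backdoor: a fraction of samples carries a fixed
    'trigger' feature and is labeled with the attacker's target class."""
    X = rng.normal(size=(n, 16))
    y = rng.integers(0, 2, size=n).astype(float)
    n_poison = int(poison_frac * n)
    X[:n_poison, -1] = trigger_value        # trigger pattern in the last feature
    y[:n_poison] = target_label             # attacker-chosen label
    return X, y

def local_train(w, X, y, lr=0.1, epochs=5):
    """A client's local logistic-regression training (plain gradient descent)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# The server seeds the global model from FM-generated (possibly poisoned) data.
X_syn, y_syn = fm_generate_synthetic(1000)
w_global = local_train(np.zeros(16), X_syn, y_syn)

# Standard FedAvg rounds on top of the poisoned initialization.
for _ in range(3):
    client_models = []
    for _ in range(5):
        Xc = rng.normal(size=(200, 16))
        yc = rng.integers(0, 2, size=200).astype(float)
        client_models.append(local_train(w_global.copy(), Xc, yc))
    w_global = np.mean(client_models, axis=0)

# Probe whether the trigger still biases predictions toward the target class (0).
X_probe = rng.normal(size=(100, 16))
X_probe[:, -1] = 5.0
p_probe = 1.0 / (1.0 + np.exp(-X_probe @ w_global))
print("fraction of triggered inputs predicted as the target class:", float(np.mean(p_probe < 0.5)))
```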
Related papers
- Formal Logic-guided Robust Federated Learning against Poisoning Attacks [6.997975378492098]
Federated Learning (FL) offers a promising solution to the privacy concerns associated with centralized Machine Learning (ML).
FL is vulnerable to various security threats, including poisoning attacks, where adversarial clients manipulate the training data or model updates to degrade overall model performance.
We present a defense mechanism designed to mitigate poisoning attacks in federated learning for time-series tasks.
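For context, the sketch below shows the standard robust-aggregation baseline (coordinate-wise median) that poisoning defenses are typically compared against; it is not this paper's formal-logic-guided mechanism, which is tailored to time-series tasks.

```python
import numpy as np

def coordinatewise_median(updates):
    """Coordinate-wise median: a standard robust-aggregation baseline that limits
    the influence of a minority of poisoned updates."""
    return np.median(np.stack(updates), axis=0)

rng = np.random.default_rng(1)
benign = [rng.normal(0.0, 0.01, size=6) for _ in range(7)]
poisoned = [np.full(6, 10.0) for _ in range(2)]     # crude poisoned updates

print("plain FedAvg mean:", np.round(np.mean(benign + poisoned, axis=0), 3))
print("median aggregate :", np.round(coordinatewise_median(benign + poisoned), 3))
```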
arXiv Detail & Related papers (2024-11-05T16:23:19Z) - Trustworthy Federated Learning: Privacy, Security, and Beyond [37.495790989584584]
Federated Learning (FL) addresses data-privacy concerns by facilitating collaborative model training across distributed data sources without transferring raw data.
We conduct an extensive survey of the security and privacy issues prevalent in FL, underscoring the vulnerability of communication links and the potential for cyber threats.
We identify the intricate security challenges that arise within the FL frameworks, aiming to contribute to the development of secure and efficient FL systems.
arXiv Detail & Related papers (2024-11-03T14:18:01Z) - FedPFT: Federated Proxy Fine-Tuning of Foundation Models [55.58899993272904]
Adapting Foundation Models (FMs) for downstream tasks through Federated Learning (FL) emerges as a promising strategy for protecting data privacy and valuable FMs.
Existing methods fine-tune the FM by allocating a sub-FM to each client in FL, leading to suboptimal performance due to insufficient tuning and the inevitable accumulation of gradient errors.
We propose Federated Proxy Fine-Tuning (FedPFT), a novel method that enhances FM adaptation to downstream tasks through FL via two key modules.
arXiv Detail & Related papers (2024-04-17T16:30:06Z) - Robust Federated Learning for Wireless Networks: A Demonstration with Channel Estimation [6.402721982801266]
Federated learning (FL) offers a privacy-preserving collaborative approach for training models in wireless networks.
Despite extensive studies on FL-empowered channel estimation, the security concerns associated with FL require meticulous attention.
In this paper, we analyze such vulnerabilities, propose corresponding solutions, and validate them through simulation.
arXiv Detail & Related papers (2024-04-03T22:03:28Z) - A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
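As a rough illustration of why PEFT is attractive in FL, the hypothetical NumPy sketch below has clients train and exchange only LoRA-style low-rank adapters while the base weight stays frozen, so per-round communication is a small fraction of the full model; note that naively averaging the adapter factors, as done here, is a simplification and not necessarily what any surveyed method does.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, r = 64, 10, 4                      # hidden dim, classes, LoRA rank

W0 = rng.normal(size=(d, k)) * 0.01      # frozen "foundation model" weight

def lora_forward(X, A, B):
    """Frozen base weight plus a trainable low-rank update (LoRA-style PEFT)."""
    return X @ (W0 + A @ B)

def client_peft_step(A, B, X, y, lr=0.05):
    """One local softmax-regression step that only updates the adapter (A, B), never W0."""
    logits = lora_forward(X, A, B)
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    g = p.copy()
    g[np.arange(len(y)), y] -= 1.0
    g /= len(y)
    dW = X.T @ g                          # gradient w.r.t. the full weight
    A_new = A - lr * dW @ B.T             # chain rule through W = W0 + A @ B
    B_new = B - lr * A.T @ dW
    return A_new, B_new

# Server state: only the small adapters are communicated, not W0.
A = np.zeros((d, r))
B = rng.normal(size=(r, k)) * 0.01
for _ in range(3):
    client_adapters = []
    for _ in range(4):
        Xc = rng.normal(size=(128, d))
        yc = rng.integers(0, k, size=128)
        Ac, Bc = A.copy(), B.copy()
        for _ in range(5):
            Ac, Bc = client_peft_step(Ac, Bc, Xc, yc)
        client_adapters.append((Ac, Bc))
    A = np.mean([a for a, _ in client_adapters], axis=0)
    B = np.mean([b for _, b in client_adapters], axis=0)

print("communicated parameters per round:", A.size + B.size, "instead of", W0.size)
```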
arXiv Detail & Related papers (2024-01-09T10:22:23Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
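A minimal sketch of the general idea, assuming updates are flattened vectors and using an FFT magnitude signature with a simple distance-to-median filter; FreqFed's actual transformation and clustering rule differ, so treat this purely as an illustration.

```python
import numpy as np

def to_frequency(update):
    """Project a flattened model update onto the frequency domain (real FFT magnitudes)."""
    return np.abs(np.fft.rfft(update))

def filter_updates(updates, keep=10):
    """Illustrative frequency-domain filtering: score each update by how far its
    low-frequency signature lies from the median signature, and keep the closest half."""
    sigs = np.stack([to_frequency(u)[:keep] for u in updates])
    center = np.median(sigs, axis=0)
    dists = np.linalg.norm(sigs - center, axis=1)
    kept = np.argsort(dists)[: max(1, len(updates) // 2)]
    return np.mean([updates[i] for i in kept], axis=0), kept

rng = np.random.default_rng(3)
benign = [rng.normal(0, 0.01, size=1000) for _ in range(8)]
poisoned = [rng.normal(0, 0.01, size=1000) + 0.5 for _ in range(2)]   # shifted updates
aggregate, kept = filter_updates(benign + poisoned)
print("indices kept by the filter:", sorted(kept.tolist()))
```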
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - The Role of Federated Learning in a Wireless World with Foundation Models [59.8129893837421]
Foundation models (FMs) are general-purpose artificial intelligence (AI) models that have recently enabled multiple brand-new generative AI applications.
Currently, the exploration of the interplay between FMs and federated learning (FL) is still in its nascent stage.
This article explores the extent to which FMs are suitable for FL over wireless networks, including a broad overview of research challenges and opportunities.
arXiv Detail & Related papers (2023-10-06T04:13:10Z) - Towards Understanding Adversarial Transferability in Federated Learning [14.417827137513369]
A group of malicious clients can impact the model during training by disguising their identities and acting as benign clients, but later switching to an adversarial role.
This type of attack is subtle and hard to detect because these clients initially appear to be benign.
We empirically show that the proposed attack imposes a high security risk to current FL systems.
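The behavior described above can be illustrated with a toy "sleeper" client that submits honest-looking updates for a few rounds and then a boosted malicious update; this is a generic model-replacement-style step for illustration, not the paper's specific attack.

```python
import numpy as np

def benign_update(w_global, rng):
    """Honest client: a small gradient-like step around the global model."""
    return w_global + rng.normal(0, 0.01, size=w_global.shape)

def sleeper_update(w_global, rnd, switch_round, w_malicious, rng, boost=10.0):
    """Illustrative 'sleeper' client: behaves like a benign client until switch_round,
    then submits a boosted update that pulls the FedAvg mean toward w_malicious."""
    if rnd < switch_round:
        return benign_update(w_global, rng)
    return w_global + boost * (w_malicious - w_global)

rng = np.random.default_rng(4)
w = np.zeros(20)
w_mal = np.ones(20)
for rnd in range(6):
    updates = [benign_update(w, rng) for _ in range(9)]
    updates.append(sleeper_update(w, rnd, switch_round=4, w_malicious=w_mal, rng=rng))
    w = np.mean(updates, axis=0)
    print(f"round {rnd}: distance to attacker target = {np.linalg.norm(w - w_mal):.3f}")
```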
arXiv Detail & Related papers (2023-10-01T08:35:46Z) - Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
However, FL suffers from the cross-client generative adversarial network (GAN)-based (C-GANs) attack.
We propose the Fed-EDKD technique, which improves current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z) - Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing the Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
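For readers unfamiliar with DEQ models, the sketch below shows the core idea: a layer defined implicitly by a fixed point z* = f(z*, x), found here by naive iteration. Real DEQ implementations use root solvers and implicit differentiation; this only illustrates the concept, not the paper's federated formulation.

```python
import numpy as np

def deq_layer(x, W, U, b, iters=50, tol=1e-6):
    """Minimal deep-equilibrium layer: find z* with z* = tanh(W z* + U x + b)
    by naive fixed-point iteration."""
    z = np.zeros(W.shape[0])
    for _ in range(iters):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z

rng = np.random.default_rng(5)
d_in, d_hidden = 8, 16
W = rng.normal(0, 0.1, size=(d_hidden, d_hidden))   # small weights keep the map contractive
U = rng.normal(size=(d_hidden, d_in))
b = np.zeros(d_hidden)

z_star = deq_layer(rng.normal(size=d_in), W, U, b)
print("equilibrium reached, ||z*|| =", round(float(np.linalg.norm(z_star)), 4))
```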
arXiv Detail & Related papers (2023-05-29T22:51:40Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions [3.6086478979425998]
Federated learning (FL) is a machine learning (ML) approach that allows the use of distributed data without compromising personal privacy.
The heterogeneous distribution of data among clients in FL can make it difficult for the orchestration server to validate the integrity of local model updates.
Backdoor attacks involve the insertion of malicious functionality into a targeted model through poisoned updates from malicious clients.
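The kind of poisoned data a backdoor adversary trains on can be shown with the classic trigger-patch-and-relabel recipe on a toy image batch (illustrative only; real attacks vary the trigger, the fraction, and the update manipulation).

```python
import numpy as np

def poison_batch(images, labels, target_label=7, frac=0.2, trigger_value=1.0):
    """Classic backdoor poisoning on image data: stamp a small trigger patch in a
    corner of a fraction of the images and relabel them to the attacker's target.
    A malicious FL client would compute its local update on such data."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(frac * len(images))
    images[:n_poison, -3:, -3:] = trigger_value   # 3x3 patch in the bottom-right corner
    labels[:n_poison] = target_label
    return images, labels

rng = np.random.default_rng(6)
X = rng.uniform(0, 1, size=(64, 28, 28))          # stand-in for a local image batch
y = rng.integers(0, 10, size=64)
Xp, yp = poison_batch(X, y)
print("labels of the poisoned portion:", np.unique(yp[:12]))
```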
arXiv Detail & Related papers (2023-03-03T20:54:28Z) - WW-FL: Secure and Private Large-Scale Federated Learning [15.412475066687723]
Federated learning (FL) is an efficient approach for large-scale distributed machine learning that promises data privacy by keeping training data on client devices.
Recent research has uncovered vulnerabilities in FL, impacting both security and privacy through poisoning attacks.
We propose WW-FL, an innovative framework that combines secure multi-party computation with hierarchical FL to guarantee data and global model privacy.
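The sketch below shows one standard MPC building block, additive secret sharing of fixed-point-encoded client updates across non-colluding servers, so only the sum of updates is ever reconstructed. It is not WW-FL's actual protocol, and all constants are illustrative.

```python
import numpy as np

PRIME = 2_147_483_647   # field modulus for additive secret sharing (illustrative)

def share(update_int, n_servers, rng):
    """Split an integer-encoded update into n additive shares mod PRIME:
    no single server learns anything about the client's update."""
    shares = [rng.integers(0, PRIME, size=update_int.shape) for _ in range(n_servers - 1)]
    last = (update_int - np.sum(shares, axis=0)) % PRIME
    return shares + [last]

def reconstruct(shares):
    return np.sum(shares, axis=0) % PRIME

rng = np.random.default_rng(7)
scale = 10_000                                    # fixed-point encoding of float updates
clients = [rng.normal(0, 0.01, size=8) for _ in range(3)]
encoded = [np.round(u * scale).astype(np.int64) % PRIME for u in clients]

# Each of two servers only ever sees one share per client.
server_shares = [[], []]
for enc in encoded:
    s0, s1 = share(enc, 2, rng)
    server_shares[0].append(s0)
    server_shares[1].append(s1)

# Servers aggregate their shares locally; only the *sum* of updates is reconstructed.
agg = reconstruct([np.sum(s, axis=0) % PRIME for s in server_shares])
agg = np.where(agg > PRIME // 2, agg - PRIME, agg) / scale / len(clients)
print("securely aggregated mean update:", np.round(agg, 4))
```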
arXiv Detail & Related papers (2023-02-20T11:02:55Z) - OLIVE: Oblivious Federated Learning on Trusted Execution Environment against the risk of sparsification [22.579050671255846]
This study analyzes the vulnerabilities of server-side TEEs in Federated Learning and the corresponding defenses.
First, we theoretically analyze the leakage of memory access patterns, revealing the risk of sparsified gradients.
Second, we devise an inference attack to link memory access patterns to sensitive information in the training dataset.
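The sparsification risk can be seen in a toy example: the indices kept by a top-k sparsified update are data-dependent, so an observer of memory access patterns inside the enclave learns something about the client's data. The vocabulary and "usage" vector below are illustrative stand-ins, not the paper's attack.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep only the k largest-magnitude coordinates (a common FL compression step)."""
    idx = np.argsort(np.abs(update))[-k:]
    return idx, update[idx]

rng = np.random.default_rng(8)
# Toy model: one weight per vocabulary word; a client's update is largest on words it used.
vocab = ["benign", "words", "secret", "diagnosis", "filler", "noise", "common", "terms"]
true_usage = np.array([0.0, 0.0, 0.9, 0.8, 0.0, 0.0, 0.1, 0.0])
update = true_usage + rng.normal(0, 0.02, size=len(vocab))

# Even inside a TEE, *which* memory locations are read or written can be observed.
idx, _ = top_k_sparsify(update, k=2)
print("indices touched by the sparsified update:", sorted(idx.tolist()))
print("an access-pattern observer could infer the client used:",
      [vocab[i] for i in sorted(idx.tolist())])
```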
arXiv Detail & Related papers (2022-02-15T03:23:57Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
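The underlying leakage is easy to demonstrate in the simplest case: for a single sample through a linear-plus-softmax layer, the shared gradient reveals the input exactly. This is the classic observation behind gradient-inversion work; the paper's own baseline attack and its practicality analysis go well beyond this toy case.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(9)
d, k = 12, 5
W, b = rng.normal(size=(k, d)), np.zeros(k)

# A client computes the gradient of one training sample and shares it.
x_true = rng.normal(size=d)
y_true = 3
p = softmax(W @ x_true + b)
g = p.copy()
g[y_true] -= 1.0                        # dL/dlogits for cross-entropy
grad_W = np.outer(g, x_true)            # dL/dW = g x^T
grad_b = g                              # dL/db = g

# The observer (e.g., the server) reconstructs the input from the shared gradient:
# every row of dL/dW equals g_i * x, and each g_i is known from dL/db.
i = int(np.argmax(np.abs(grad_b)))
x_reconstructed = grad_W[i] / grad_b[i]
print("max reconstruction error:", float(np.max(np.abs(x_reconstructed - x_true))))
```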
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users imposes significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
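A rough sketch of the mechanic, not necessarily the paper's exact method: batch-normalization statistics computed on adversarially perturbed features at AT-capable clients are shared so that other clients normalize with the "robust" statistics instead of their own. The perturbation below is a crude stand-in for adversarial examples.

```python
import numpy as np

rng = np.random.default_rng(10)

def bn_stats(features):
    """Per-channel batch-norm statistics (mean, variance)."""
    return features.mean(axis=0), features.var(axis=0)

def normalize(features, mean, var, eps=1e-5):
    return (features - mean) / np.sqrt(var + eps)

# An AT-capable client computes features on adversarially perturbed inputs,
# so its BN statistics reflect the 'robust' feature distribution.
clean_feats = rng.normal(0.0, 1.0, size=(256, 8))
adv_feats = clean_feats + rng.normal(0.5, 0.3, size=(256, 8))   # stand-in perturbation shift
robust_mean, robust_var = bn_stats(adv_feats)

# A resource-limited client only has clean features, but normalizes with the
# *shared* robust statistics instead of its own clean ones.
client_feats = rng.normal(0.0, 1.0, size=(64, 8))
out_with_robust_bn = normalize(client_feats, robust_mean, robust_var)
out_with_own_bn = normalize(client_feats, *bn_stats(client_feats))
print("shift introduced by adopting robust BN stats:",
      float(np.mean(np.abs(out_with_robust_bn - out_with_own_bn))))
```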
arXiv Detail & Related papers (2021-06-18T15:52:33Z) - Threats to Federated Learning: A Survey [35.724483191921244]
Federated learning (FL) has emerged as a promising solution under this new reality of data silos and heightened privacy awareness.
Existing FL protocol design has been shown to exhibit vulnerabilities which can be exploited by adversaries.
This paper provides a concise introduction to the concept of FL, and a unique taxonomy covering threat models and two major attacks on FL.
arXiv Detail & Related papers (2020-03-04T15:30:10Z)