Scale-MIA: A Scalable Model Inversion Attack against Secure Federated
Learning via Latent Space Reconstruction
- URL: http://arxiv.org/abs/2311.05808v2
- Date: Tue, 14 Nov 2023 16:33:21 GMT
- Title: Scale-MIA: A Scalable Model Inversion Attack against Secure Federated
Learning via Latent Space Reconstruction
- Authors: Shanghao Shi, Ning Wang, Yang Xiao, Chaoyu Zhang, Yi Shi, Y. Thomas
Hou, Wenjing Lou
- Abstract summary: Federated learning is known for its capability to safeguard participants' data privacy.
Recently emerged model inversion attacks (MIAs) have shown that a malicious parameter server can reconstruct individual users' local data samples through model updates.
We propose Scale-MIA, a novel MIA capable of efficiently and accurately recovering training samples of clients from the aggregated updates.
- Score: 26.9559481641707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is known for its capability to safeguard participants'
data privacy. However, recently emerged model inversion attacks (MIAs) have
shown that a malicious parameter server can reconstruct individual users' local
data samples through model updates. The state-of-the-art attacks either rely on
computation-intensive search-based optimization processes to recover each input
batch, making scaling difficult, or they involve the malicious parameter server
adding extra modules before the global model architecture, rendering the
attacks too conspicuous and easily detectable.
To overcome these limitations, we propose Scale-MIA, a novel MIA capable of
efficiently and accurately recovering training samples of clients from the
aggregated updates, even when the system is under the protection of a robust
secure aggregation protocol. Unlike existing approaches that treat models as
black boxes, Scale-MIA recognizes the importance of the intricate architecture
and inner workings of machine learning models. It identifies the latent space
as the critical layer for breaching privacy and decomposes the complex recovery
task into an innovative two-step process to reduce computational complexity. The
first step involves reconstructing the latent space representations (LSRs) from
the aggregated model updates using a closed-form inversion mechanism,
leveraging specially crafted adversarial linear layers. In the second step, the
whole input batches are recovered from the LSRs by feeding them into a
fine-tuned generative decoder.
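To make the two-step pipeline concrete, here is a minimal, hypothetical sketch (the layer sizes, the assumption that the crafted layer routes each sample to a distinct row, and the GenerativeDecoder below are illustrative placeholders, not the paper's exact construction). It relies only on the standard property of a linear layer that the row-wise ratio of aggregated weight and bias gradients returns the input a sample fed into that layer:

    import torch
    import torch.nn as nn

    BATCH, LATENT_DIM, NUM_ROWS = 8, 128, 64   # hypothetical sizes

    class GenerativeDecoder(nn.Module):
        """Stand-in for the attacker's fine-tuned generative decoder that maps
        latent space representations (LSRs) back to input samples (step 2)."""
        def __init__(self, latent_dim=LATENT_DIM, out_dim=3 * 32 * 32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 512), nn.ReLU(),
                nn.Linear(512, out_dim), nn.Sigmoid(),
            )
        def forward(self, z):
            return self.net(z).view(-1, 3, 32, 32)

    def recover_lsrs(agg_grad_w, agg_grad_b, eps=1e-8):
        """Step 1: closed-form inversion. For a linear layer y = W z + b,
        dL/dW[i] = sum_k a_k[i] * z_k and dL/db[i] = sum_k a_k[i], where a_k is
        sample k's upstream gradient. If the crafted layer makes each sample
        activate exactly one distinct row, the row-wise ratio returns z_k."""
        active = agg_grad_b.abs() > eps                 # rows hit by some sample
        return agg_grad_w[active] / agg_grad_b[active].unsqueeze(1)

    # Toy usage: emulate what the server sees after secure aggregation, i.e.
    # only gradients summed over the whole batch.
    true_lsrs = torch.randn(BATCH, LATENT_DIM)
    coeffs = torch.zeros(NUM_ROWS, BATCH)
    coeffs[torch.arange(BATCH), torch.arange(BATCH)] = torch.rand(BATCH) + 0.5
    agg_grad_w = coeffs @ true_lsrs                     # (NUM_ROWS, LATENT_DIM)
    agg_grad_b = coeffs.sum(dim=1)                      # (NUM_ROWS,)

    lsrs = recover_lsrs(agg_grad_w, agg_grad_b)
    print(torch.allclose(lsrs, true_lsrs, atol=1e-5))   # closed-form recovery of the batch

    decoder = GenerativeDecoder()        # would be fine-tuned offline by the attacker
    reconstructed_batch = decoder(lsrs)  # step 2: LSRs -> recovered input batch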
We implemented Scale-MIA on multiple commonly used machine learning models
and conducted comprehensive experiments across various settings. The results
demonstrate that Scale-MIA achieves excellent recovery performance on different
datasets, attaining higher reconstruction rates, accuracy, and attack efficiency
at larger scales than state-of-the-art MIAs.
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
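As a loose, hypothetical illustration of building a low-rank expert from existing checkpoints without any further training (a generic truncated-SVD sketch over a fine-tuned/base weight pair, not SMILE's actual construction; all names and shapes are made up):

    import numpy as np

    def extract_low_rank_expert(w_base, w_finetuned, rank=8):
        # Compress the fine-tuning delta into a rank-`rank` expert (A, B)
        # using a truncated SVD; no gradient updates are involved.
        U, S, Vt = np.linalg.svd(w_finetuned - w_base, full_matrices=False)
        A = U[:, :rank] * S[:rank]          # (out_dim, rank)
        B = Vt[:rank]                       # (rank, in_dim)
        return A, B

    def apply_expert(x, w_base, A, B):
        # Forward pass through the shared base weights plus the expert's delta.
        return x @ w_base.T + (x @ B.T) @ A.T

    # Toy check: if the fine-tuning delta is genuinely low-rank, the extracted
    # expert reproduces the fine-tuned layer exactly.
    rng = np.random.default_rng(0)
    w_base = rng.standard_normal((64, 32))
    w_ft = w_base + 0.01 * rng.standard_normal((64, 8)) @ rng.standard_normal((8, 32))
    A, B = extract_low_rank_expert(w_base, w_ft, rank=8)
    x = rng.standard_normal((4, 32))
    print(np.allclose(apply_expert(x, w_base, A, B), x @ w_ft.T))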
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Any Image Restoration with Efficient Automatic Degradation Adaptation [132.81912195537433]
We propose a unified approach that achieves joint embedding by leveraging the inherent similarities across various degradations for efficient and comprehensive restoration.
Our network sets new SOTA records while reducing model complexity by approximately 82% in trainable parameters and 85% in FLOPs.
arXiv Detail & Related papers (2024-07-18T10:26:53Z)
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GANs' inherent flaws and biased optimization within the latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z)
- MisGUIDE : Defense Against Data-Free Deep Learning Model Extraction [0.8437187555622164]
"MisGUIDE" is a two-step defense framework for Deep Learning models that disrupts the adversarial sample generation process.
The aim of the proposed defense method is to reduce the accuracy of the cloned model while maintaining accuracy on authentic queries.
arXiv Detail & Related papers (2024-03-27T13:59:21Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing their data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attacks (GIAs), which aim to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from the client side.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Approximate and Weighted Data Reconstruction Attack in Federated Learning [1.802525429431034]
Federated learning (FL) enables clients to collaborate on building a machine learning model without sharing their private data.
Recent data reconstruction attacks demonstrate that an attacker can recover clients' training data based on the parameters shared in FL.
We propose an approximation method that makes attacking FedAvg scenarios feasible by generating the intermediate model updates of the clients' local training processes.
arXiv Detail & Related papers (2023-08-13T17:40:56Z)
- Reconstruction-based LSTM-Autoencoder for Anomaly-based DDoS Attack Detection over Multivariate Time-Series Data [6.642599588462097]
A Distributed Denial-of-service (DDoS) attack is a malicious attempt to disrupt the regular traffic of a targeted server, service, or network by sending a flood of traffic to overwhelm the target or its surrounding infrastructure.
Traditional statistical and shallow machine learning techniques can detect superficial anomalies based on shallow data and feature selection; however, these approaches cannot detect unseen DDoS attacks.
We propose a reconstruction-based anomaly detection model named LSTM-Autoencoder (LSTM-AE), which combines two deep learning-based models for detecting DDoS attack anomalies.
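For intuition, below is a minimal PyTorch sketch of a generic reconstruction-based LSTM autoencoder detector (illustrative dimensions and layer choices, not the paper's exact two-model architecture): it is trained on benign traffic windows only, and windows whose reconstruction error exceeds a threshold chosen on benign validation data are flagged as likely DDoS activity.

    import torch
    import torch.nn as nn

    class LSTMAutoencoder(nn.Module):
        """An LSTM encoder compresses a traffic window into a context vector and
        an LSTM decoder tries to reconstruct it; a large reconstruction error
        marks an anomalous window."""
        def __init__(self, n_features, hidden=64):
            super().__init__()
            self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
            self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, n_features)

        def forward(self, x):                    # x: (batch, seq_len, n_features)
            _, (h, _) = self.encoder(x)          # h: (1, batch, hidden)
            z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat context per step
            dec_out, _ = self.decoder(z)
            return self.out(dec_out)             # reconstructed window

    def anomaly_scores(model, x):
        # Per-window mean squared reconstruction error.
        with torch.no_grad():
            recon = model(x)
        return ((recon - x) ** 2).mean(dim=(1, 2))

    # Usage sketch: train on benign windows with MSE loss, then flag windows
    # whose score exceeds the validation threshold as suspected DDoS traffic.
    model = LSTMAutoencoder(n_features=20)
    benign = torch.randn(32, 50, 20)             # (windows, timesteps, features)
    loss = nn.functional.mse_loss(model(benign), benign)   # one training step's loss
    scores = anomaly_scores(model, benign)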
arXiv Detail & Related papers (2023-04-21T03:56:03Z)
- Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers to reduce the model size.
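A small numpy sketch of such a factorization is given below (the index grouping, three-core layout, and rank choices are illustrative assumptions, not the paper's exact parameterization): the weight matrix is unfolded over paired row/column indices and split by two SVDs, with the middle core playing the role of the shareable central tensor.

    import numpy as np

    def mpo_decompose(W, row_dims, col_dims, ranks):
        # Factorize W (m x n) into three cores via two SVDs, tensor-train style,
        # after pairing row and column indices site by site.
        m1, m2, m3 = row_dims
        n1, n2, n3 = col_dims
        r1, r2 = ranks
        T = W.reshape(m1, m2, m3, n1, n2, n3).transpose(0, 3, 1, 4, 2, 5)
        T = T.reshape(m1 * n1, -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)
        core1 = U[:, :r1]                                     # (m1*n1, r1)
        rest = (np.diag(S[:r1]) @ Vt[:r1]).reshape(r1 * m2 * n2, -1)
        U, S, Vt = np.linalg.svd(rest, full_matrices=False)
        core2 = U[:, :r2].reshape(r1, m2 * n2, r2)            # "central tensor"
        core3 = np.diag(S[:r2]) @ Vt[:r2]                     # (r2, m3*n3)
        return core1, core2, core3

    def mpo_reconstruct(core1, core2, core3, row_dims, col_dims):
        # Contract the cores back into a full (m x n) matrix.
        m1, m2, m3 = row_dims
        n1, n2, n3 = col_dims
        T = np.einsum('ar,rbs,sc->abc', core1, core2, core3)
        T = T.reshape(m1, n1, m2, n2, m3, n3).transpose(0, 2, 4, 1, 3, 5)
        return T.reshape(m1 * m2 * m3, n1 * n2 * n3)

    # With untruncated ranks the factorization is exact; truncating the ranks
    # (and sharing core2 across layers) is where the parameter savings come from.
    W = np.random.randn(4 * 6 * 8, 3 * 5 * 7)
    cores = mpo_decompose(W, (4, 6, 8), (3, 5, 7), ranks=(12, 56))
    print(np.allclose(mpo_reconstruct(*cores, (4, 6, 8), (3, 5, 7)), W))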
arXiv Detail & Related papers (2023-03-27T02:34:09Z)
- Can recurrent neural networks learn process model structure? [0.2580765958706854]
We introduce an evaluation framework that combines variant-based resampling and custom metrics for fitness, precision and generalization.
We confirm that LSTMs can struggle to learn process model structure, even with simplistic process data.
We also found that decreasing the amount of information seen by the LSTM during training causes a sharp drop in generalization and precision scores.
arXiv Detail & Related papers (2022-12-13T08:40:01Z)
- Separation of Powers in Federated Learning [5.966064140042439]
Federated Learning (FL) enables collaborative training among mutually distrusting parties.
Recent attacks have reconstructed large fractions of training data from ostensibly "sanitized" model updates.
We introduce TRUDA, a new cross-silo FL system, employing a trustworthy and decentralized aggregation architecture.
arXiv Detail & Related papers (2021-05-19T21:00:44Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)