Scale-MIA: A Scalable Model Inversion Attack against Secure Federated
Learning via Latent Space Reconstruction
- URL: http://arxiv.org/abs/2311.05808v2
- Date: Tue, 14 Nov 2023 16:33:21 GMT
- Title: Scale-MIA: A Scalable Model Inversion Attack against Secure Federated
Learning via Latent Space Reconstruction
- Authors: Shanghao Shi, Ning Wang, Yang Xiao, Chaoyu Zhang, Yi Shi, Y. Thomas Hou, Wenjing Lou
- Abstract summary: Federated learning is known for its capability to safeguard participants' data privacy.
Recently emerged model inversion attacks (MIAs) have shown that a malicious parameter server can reconstruct individual users' local data samples through model updates.
We propose Scale-MIA, a novel MIA capable of efficiently and accurately recovering training samples of clients from the aggregated updates.
- Score: 26.9559481641707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is known for its capability to safeguard participants'
data privacy. However, recently emerged model inversion attacks (MIAs) have
shown that a malicious parameter server can reconstruct individual users' local
data samples through model updates. The state-of-the-art attacks either rely on
computation-intensive search-based optimization processes to recover each input
batch, making scaling difficult, or they involve the malicious parameter server
adding extra modules before the global model architecture, rendering the
attacks too conspicuous and easily detectable.
To overcome these limitations, we propose Scale-MIA, a novel MIA capable of
efficiently and accurately recovering training samples of clients from the
aggregated updates, even when the system is under the protection of a robust
secure aggregation protocol. Unlike existing approaches treating models as
black boxes, Scale-MIA recognizes the importance of the intricate architecture
and inner workings of machine learning models. It identifies the latent space
as the critical layer for breaching privacy and decomposes the complex recovery
task into an innovative two-step process to reduce computation complexity. The
first step involves reconstructing the latent space representations (LSRs) from
the aggregated model updates using a closed-form inversion mechanism,
leveraging specially crafted adversarial linear layers. In the second step, the
whole input batches are recovered from the LSRs by feeding them into a
fine-tuned generative decoder.
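To make the two-step recovery concrete, below is a minimal, assumption-laden PyTorch sketch, not the authors' exact construction: it uses a hypothetical encoder, a single linear layer standing in for the crafted adversarial layers, a single-sample update instead of a securely aggregated batch, and an untrained stand-in decoder. It only illustrates why a latent space representation is recoverable in closed form from a linear layer's gradients and then mapped back toward the input.

```python
# Toy sketch of the two-step idea (hypothetical names/architecture, single-sample case).
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, num_classes, img_dim = 64, 10, 32 * 32

encoder = nn.Sequential(nn.Flatten(), nn.Linear(img_dim, latent_dim), nn.ReLU())
adv_linear = nn.Linear(latent_dim, num_classes)   # stand-in for the crafted adversarial linear layer
decoder = nn.Linear(latent_dim, img_dim)          # stand-in for the fine-tuned generative decoder

# --- Client side: one local sample yields a model update (gradients of the linear layer) ---
x = torch.rand(1, img_dim)
y = torch.tensor([3])
z = encoder(x)                                    # latent space representation (LSR)
loss = nn.functional.cross_entropy(adv_linear(z), y)
grad_W, grad_b = torch.autograd.grad(loss, (adv_linear.weight, adv_linear.bias))

# --- Step 1: closed-form LSR reconstruction from the linear layer's gradients ---
# For logits = W z + b: grad_W[i] = grad_logits[i] * z and grad_b[i] = grad_logits[i],
# so z = grad_W[i] / grad_b[i] for any row i with a nonzero bias gradient.
i = torch.argmax(grad_b.abs())
z_hat = grad_W[i] / grad_b[i]
print("LSR recovery error:", (z_hat - z.squeeze()).abs().max().item())

# --- Step 2: map the recovered LSR back to input space with the generative decoder ---
x_hat = decoder(z_hat)                            # approximates x only if the decoder is properly fine-tuned
```

In the attack described by the paper, the crafted layers are designed so that per-sample LSRs can still be disentangled from an aggregated batch update, and the decoder is fine-tuned beforehand; the toy above demonstrates only the single-sample closed-form step.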
We implemented Scale-MIA on multiple commonly used machine learning models
and conducted comprehensive experiments across various settings. The results
demonstrate that Scale-MIA achieves excellent recovery performance on different
datasets, exhibiting high reconstruction rates, accuracy, and attack efficiency
on a larger scale compared to state-of-the-art MIAs.
Related papers
- IncSAR: A Dual Fusion Incremental Learning Framework for SAR Target Recognition [13.783950035836593]
IncSAR is an incremental learning framework designed to tackle catastrophic forgetting in SAR target recognition.
To mitigate the speckle noise inherent in SAR images, we employ a denoising module based on a neural network approximation.
Experiments on the MSTAR, SAR-AIRcraft-1.0, and OpenSARShip benchmark datasets demonstrate that IncSAR significantly outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2024-10-08T08:49:47Z) - SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z) - Restore Anything Model via Efficient Degradation Adaptation [129.38475243424563]
RAM takes a unified path that leverages inherent similarities across various degradations to enable efficient and comprehensive restoration.
RAM achieves SOTA performance while reducing model complexity by approximately 82% in trainable parameters and 85% in FLOPs.
arXiv Detail & Related papers (2024-07-18T10:26:53Z) - Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Approximate and Weighted Data Reconstruction Attack in Federated Learning [1.802525429431034]
Federated learning (FL) enables clients to collaborate on building a machine learning model without sharing their private data.
Recent data reconstruction attacks demonstrate that an attacker can recover clients' training data based on the parameters shared in FL.
We propose an approximation method, which makes attacking FedAvg scenarios feasible by generating the intermediate model updates of the clients' local training processes.
arXiv Detail & Related papers (2023-08-13T17:40:56Z) - Reconstruction-based LSTM-Autoencoder for Anomaly-based DDoS Attack
Detection over Multivariate Time-Series Data [6.642599588462097]
A Distributed Denial-of-service (DDoS) attack is a malicious attempt to disrupt the regular traffic of a targeted server, service, or network by sending a flood of traffic to overwhelm the target or its surrounding infrastructure.
Traditional statistical and shallow machine learning techniques can detect superficial anomalies based on shallow data and feature selection; however, these approaches cannot detect unseen DDoS attacks.
We propose a reconstruction-based anomaly detection model named LSTM-Autoencoder (LSTM-AE) which combines two deep learning-based models for detecting DDoS attack anomalies.
arXiv Detail & Related papers (2023-04-21T03:56:03Z) - Scaling Pre-trained Language Models to Deeper via Parameter-efficient
Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers for reducing the model size.
arXiv Detail & Related papers (2023-03-27T02:34:09Z) - Reconstructing Training Data with Informed Adversaries [30.138217209991826]
Given access to a machine learning model, can an adversary reconstruct the model's training data?
This work studies this question from the lens of a powerful informed adversary who knows all the training data points except one.
We show it is feasible to reconstruct the remaining data point in this stringent threat model.
arXiv Detail & Related papers (2022-01-13T09:19:25Z) - Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.