Privacy Threats Analysis to Secure Federated Learning
- URL: http://arxiv.org/abs/2106.13076v1
- Date: Thu, 24 Jun 2021 15:02:54 GMT
- Title: Privacy Threats Analysis to Secure Federated Learning
- Authors: Yuchen Li, Yifan Bao, Liyao Xiang, Junhan Liu, Cen Chen, Li Wang,
Xinbing Wang
- Abstract summary: We analyze the privacy threats in industrial-level federated learning frameworks with secure computation.
We show through theoretical analysis that it is possible for the attacker to invert the entire private input of the victim.
- Score: 34.679990191199224
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is emerging as a machine learning technique that trains a
model across multiple decentralized parties. It is renowned for preserving
privacy as the data never leaves the computational devices, and recent
approaches further enhance its privacy by encrypting the transferred messages.
However, we find that, despite these efforts, federated learning remains
vulnerable to privacy threats due to its interactive nature across different
parties. In this paper, we analyze the privacy threats in industrial-level
federated learning frameworks with secure computation, and reveal that such
threats widely exist in typical machine learning models such as linear
regression, logistic regression, and decision trees. For linear and logistic
regression, we show through theoretical analysis that it is possible for the
attacker to invert the entire private input of the victim given very little
information. For the decision tree model, we launch an attack to infer the
range of the victim's
private inputs. All attacks are evaluated on popular federated learning
frameworks and real-world datasets.
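To see why shared gradients alone can already be this revealing, consider plaintext logistic regression: a single example's gradient factors the private input out in closed form. The sketch below is a toy illustration of this inversion, not the paper's attack against secure-computation frameworks; all names and values here are our own assumptions.

```python
# Toy inversion sketch: for one example (x, y) of logistic regression,
#   d_loss/d_w = (sigmoid(w.x + b) - y) * x
#   d_loss/d_b = (sigmoid(w.x + b) - y)
# so x = grad_w / grad_b whenever grad_b != 0.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.3      # model parameters known to the attacker
x_victim = rng.normal(size=5)       # private input (unknown to the attacker)
y_victim = 1.0

# Gradients the victim would share in plain federated SGD.
residual = sigmoid(w @ x_victim + b) - y_victim
grad_w, grad_b = residual * x_victim, residual

# The attacker inverts the input exactly from the two gradients.
x_recovered = grad_w / grad_b
assert np.allclose(x_recovered, x_victim)
print(x_recovered)
```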
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground-truth data can be recovered through a gradient-based technique known as Deep Leakage (a minimal sketch follows this entry).
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
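A minimal sketch of the Deep Leakage idea referenced above, assuming the attacker observes the gradient a client computed on a single example: optimize dummy data until its gradient matches the observed one. The toy model, seed, and step count are our assumptions, not FEDLAD's benchmark setup.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)   # toy model; real attacks target deeper nets
loss_fn = torch.nn.CrossEntropyLoss()

# The victim's private example and the gradient it would share.
x_real = torch.randn(1, 8)
y_real = torch.tensor([1])
true_grads = torch.autograd.grad(loss_fn(model(x_real), y_real), model.parameters())

# The attacker optimizes dummy data and a soft dummy label to match that gradient.
x_dummy = torch.randn(1, 8, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    # Cross-entropy against the softmaxed dummy label, kept differentiable.
    logp = torch.log_softmax(model(x_dummy), dim=-1)
    dummy_loss = -(torch.softmax(y_dummy, dim=-1) * logp).sum()
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(30):
    opt.step(closure)

print(torch.dist(x_dummy.detach(), x_real))   # small distance => input recovered
```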
- Security and Privacy Issues and Solutions in Federated Learning for Digital Healthcare [0.0]
We present vulnerabilities, attacks, and defenses based on the widened attack surfaces of Federated Learning.
We suggest promising new research directions toward a more robust FL.
arXiv Detail & Related papers (2024-01-16T16:07:53Z)
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- White-box Inference Attacks against Centralized Machine Learning and Federated Learning [0.0]
We evaluate the impact of different neural network layers, gradients, gradient norms, and fine-tuned models on membership-inference attack performance with prior knowledge (a simple loss-threshold baseline is sketched after this entry).
The results show that the centralized machine learning model suffers more severe membership information leakage in all aspects.
arXiv Detail & Related papers (2022-12-15T07:07:19Z)
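For readers unfamiliar with membership inference, the sketch below shows the simplest loss-threshold baseline: training members tend to incur lower loss than held-out points, so the attacker flags low-loss examples as members. It is an illustrative stand-in, not the white-box attack evaluated in the paper; all numbers are made up.

```python
import numpy as np

def per_example_loss(prob_true_class):
    # Cross-entropy loss given the model's confidence in the true class.
    return -np.log(np.clip(prob_true_class, 1e-12, 1.0))

# Hypothetical confidences a queried model assigns to the true class.
member_probs = np.array([0.99, 0.97, 0.95, 0.90])     # seen in training
nonmember_probs = np.array([0.70, 0.55, 0.85, 0.40])  # held out

threshold = 0.2  # tuned on shadow models in practice; fixed here for brevity
losses = per_example_loss(np.concatenate([member_probs, nonmember_probs]))
guess_member = losses < threshold
print(guess_member)  # all four true members flagged; most non-members are not
```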
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z)
- MORSE-STF: A Privacy Preserving Computation System [12.875477499515158]
We present Secure-TF, a privacy-preserving machine learning framework based on MPC (secure multi-party computation); a minimal secret-sharing sketch follows this entry.
Our framework is able to support widely-used machine learning models such as logistic regression, fully-connected neural network, and convolutional neural network.
arXiv Detail & Related papers (2021-09-24T03:42:46Z)
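A minimal sketch of two-party additive secret sharing, the basic MPC building block behind frameworks of this kind. This is illustrative only; the modulus choice and protocol details are our assumptions, not MORSE-STF's actual protocols, which are considerably more involved.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime used as the field modulus (an assumption)

def share(x):
    # Split x into two random-looking shares; each alone reveals nothing about x.
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Additions can be done locally: each party adds its own shares of x and y.
x0, x1 = share(123)
y0, y1 = share(456)
assert reconstruct((x0 + y0) % P, (x1 + y1) % P) == 579
```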
- GRNN: Generative Regression Neural Network -- A Data Leakage Attack for Federated Learning [3.050919759387984]
We show that private image data can be fully recovered from the shared gradients alone via our proposed Generative Regression Neural Network (GRNN).
We evaluate our method on several image classification tasks. The results illustrate that our proposed GRNN outperforms state-of-the-art methods with better stability, stronger robustness, and higher accuracy.
arXiv Detail & Related papers (2021-05-02T18:39:37Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks against robustness and their defenses, and 3) inference attacks against privacy and their defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Federated Learning in Adversarial Settings [0.8701566919381224]
Federated learning schemes provide different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.
We show that the differentially private extension performs as efficiently as the non-private but robust scheme, even with stringent privacy requirements.
This suggests a possible fundamental trade-off between Differential Privacy and robustness (a minimal clip-and-noise sketch follows this entry).
arXiv Detail & Related papers (2020-10-15T14:57:02Z)
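A minimal sketch of the clip-and-noise recipe behind differentially private federated updates, illustrating the privacy/utility tension the entry above discusses. The clipping norm and noise multiplier are illustrative defaults, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1):
    """Bound the update's L2 norm, then add calibrated Gaussian noise."""
    clipped = update * min(1.0, clip_norm / max(np.linalg.norm(update), 1e-12))
    return clipped + rng.normal(scale=noise_multiplier * clip_norm, size=update.shape)

print(privatize_update(np.array([3.0, 4.0])))  # norm 5 -> clipped to norm 1, then noised
```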
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.