A Generative Federated Learning Framework for Differential Privacy
- URL: http://arxiv.org/abs/2109.12062v1
- Date: Fri, 24 Sep 2021 16:36:19 GMT
- Title: A Generative Federated Learning Framework for Differential Privacy
- Authors: Eugenio Lomurno, Leonardo Di Perna, Lorenzo Cazzella, Stefano Samele,
Matteo Matteucci
- Abstract summary: We present the 3DGL framework, an alternative to the current federated learning paradigms.
Its goal is to share generative models with high levels of $\varepsilon$-differential privacy.
In addition, we propose DDP-$\beta$VAE, a deep generative model capable of generating synthetic data with high levels of utility and safety for the individual.
- Score: 7.50722199393581
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In machine learning, differential privacy and federated learning are
gaining ever more importance in an increasingly interconnected world. The former
refers to the analysis and sharing of private data under strict rules that
protect individual privacy, while the latter refers to distributed learning
techniques in which a central server exchanges information with different
clients for machine learning purposes. In recent years, many studies have shown
that the privacy shields of these systems can be bypassed and the
vulnerabilities of machine learning models exploited, making them leak the
information on which they were trained. In this work, we present the
3DGL framework, an alternative to the current federated learning paradigms. Its
goal is to share generative models with high levels of
$\varepsilon$-differential privacy. In addition, we propose DDP-$\beta$VAE, a
deep generative model capable of generating synthetic data with high levels of
utility and safety for the individual. We evaluate the 3DGL framework based on
DDP-$\beta$VAE, showing how the overall system is resilient to the principal
attacks in federated learning and improves the performance of distributed
learning algorithms.
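The abstract does not spell out the training mechanics, but the core idea (each client trains a generative model under differential privacy and shares only the generator, never raw data or raw gradients) can be sketched. Below is a minimal, hypothetical Python sketch that obtains privacy DP-SGD style, with per-sample gradient clipping plus Gaussian noise, on a toy $\beta$-VAE; the architecture, hyperparameters, and noise mechanism are illustrative assumptions, not the paper's DDP-$\beta$VAE implementation.

```python
# Hypothetical sketch of the 3DGL idea: train a small beta-VAE with DP-SGD-style
# noise, then share only the decoder so others can sample synthetic data.
# Everything below (sizes, clipping, noise multiplier) is an assumption.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, d_in=784, d_z=16):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)     # produces mean and log-variance
        self.dec = nn.Linear(d_z, d_in)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def dp_step(model, batch, opt, clip=1.0, noise_mult=1.1, beta=4.0):
    """One DP-SGD-style step: per-sample gradient clipping + Gaussian noise."""
    params = list(model.parameters())
    accum = [torch.zeros_like(p) for p in params]
    for x in batch:                              # per-sample gradients (slow but clear)
        model.zero_grad()
        recon, mu, logvar = model(x.unsqueeze(0))
        rec = ((recon - x) ** 2).sum()           # reconstruction term
        kld = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
        (rec + beta * kld).backward()            # beta-VAE objective
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
        scale = (clip / (norm + 1e-12)).clamp(max=1.0)
        for a, p in zip(accum, params):
            a.add_(p.grad * scale)               # accumulate clipped gradients
    for a, p in zip(accum, params):
        noise = torch.randn_like(a) * noise_mult * clip
        p.grad = (a + noise) / len(batch)        # noisy average gradient
    opt.step()

# After training, the client shares only model.dec; other participants can then
# sample synthetic data without ever seeing real records:
#   synthetic = model.dec(torch.randn(1000, 16))
```

The reported $\varepsilon$ would come from composing the per-step guarantees across training, which this sketch omits; sharing only the decoder is what lets other participants benefit from synthetic data without touching the client's records.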
Related papers
- Privacy Drift: Evolving Privacy Concerns in Incremental Learning [4.275908952997288]
This study aims to unveil the nuanced relationship between the evolution of model performance and the integrity of data privacy.
Our results highlight a complex interplay between model accuracy and privacy safeguards, revealing that enhancements in model performance can lead to increased privacy risks.
This work lays the groundwork for future research on privacy-aware machine learning, aiming to achieve a delicate balance between model accuracy and data privacy in decentralized environments.
arXiv Detail & Related papers (2024-12-06T17:04:09Z)
- Federated Learning Empowered by Generative Content [55.576885852501775]
Federated learning (FL) enables leveraging distributed private data for model training in a privacy-preserving way.
We propose a novel FL framework termed FedGC, designed to mitigate data heterogeneity issues by diversifying private data with generative content.
We conduct a systematic empirical study on FedGC, covering diverse baselines, datasets, scenarios, and modalities.
arXiv Detail & Related papers (2023-12-10T07:38:56Z)
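As a rough illustration of the idea summarized above (diversifying private data with generative content before local training), here is a hypothetical client-side sketch; the conditional generator interface, pseudo-labels, and sizes are assumptions, not FedGC's pipeline.

```python
# Hypothetical sketch of generative-content augmentation in an FL client,
# loosely in the spirit of FedGC: pad a skewed private dataset with synthetic,
# class-conditional samples so local data is less heterogeneous across clients.
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def build_client_loader(private_ds, generator, num_classes=10, n_synth=512,
                        d_z=16, batch_size=32):
    """Mix a client's private data with synthetic samples drawn uniformly over
    classes, so under-represented classes are covered before local training."""
    with torch.no_grad():
        y = torch.randint(0, num_classes, (n_synth,))  # roughly balanced labels
        z = torch.randn(n_synth, d_z)
        x = generator(z, y)                 # assumed class-conditional generator
    synth_ds = TensorDataset(x, y)
    return DataLoader(ConcatDataset([private_ds, synth_ds]),
                      batch_size=batch_size, shuffle=True)

# Usage (sketch): local_loader = build_client_loader(my_private_ds, my_generator)
```

Local training then proceeds as usual on the mixed loader; where the generator lives and how samples are labeled and filtered are design choices described in the paper itself.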
- Privacy Side Channels in Machine Learning Systems [87.53240071195168]
We introduce privacy side channels: attacks that exploit system-level components to extract private information.
For example, we show that deduplicating training data before applying differentially-private training creates a side-channel that completely invalidates any provable privacy guarantees.
We further show that systems which block language models from regenerating training data can be exploited to exfiltrate private keys contained in the training set.
arXiv Detail & Related papers (2023-09-11T16:49:05Z)
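The deduplication example above is concrete enough to illustrate in code. Differential privacy reasons about neighboring datasets that differ in one record, but a dedup pass makes one record's survival depend on what other users submit. The toy sketch below assumes a strict policy that drops every record appearing more than once; the paper's actual attack and dedup policy may differ.

```python
# Toy illustration of the dedup-before-DP side channel. Assumed policy: drop
# every record that appears more than once (sometimes done to curb
# memorization). The point: a record's survival depends on OTHER users' data,
# a coupling that DP accounting on the deduplicated set never sees.
from collections import Counter

def dedup_strict(records):
    """Keep only records that occur exactly once in the pool."""
    counts = Counter(records)
    return [r for r in records if counts[r] == 1]

victim = "alice@example.com"   # hypothetical record whose membership is probed
attacker_copy = victim         # the attacker injects a duplicate of the guess

pool_if_present = dedup_strict(["bob@example.com", victim, attacker_copy])
pool_if_absent  = dedup_strict(["bob@example.com", attacker_copy])

print(pool_if_present)  # ['bob@example.com']: the victim's record is erased
print(pool_if_absent)   # ['bob@example.com', 'alice@example.com']: copy survives
```

If the victim's record was already present, the attacker's injected copy causes both to be dropped; if it was absent, the attacker's copy survives and may be memorized. Probing the trained model for memorization of the guess therefore reveals membership, and the DP analysis performed on the deduplicated data never accounts for this coupling.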
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated Learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
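The last sentence above, clients collaborating via online knowledge distillation with a contrastive loss, admits a compact sketch. The InfoNCE-style formulation below and the idea of averaging peer embeddings of a shared public batch are assumptions for illustration, not the paper's exact objective.

```python
# Hypothetical sketch of representation sharing with a contrastive loss:
# clients exchange embeddings of a shared public batch rather than weights or
# raw data, then distill from one another's representations.
import torch
import torch.nn.functional as F

def contrastive_distill_loss(local_z, peer_z, temperature=0.1):
    """The i-th local embedding should match the i-th peer embedding of the
    same public sample and repel all other samples in the batch."""
    local_z = F.normalize(local_z, dim=-1)
    peer_z = F.normalize(peer_z, dim=-1)
    logits = local_z @ peer_z.t() / temperature   # (B, B) cosine similarities
    targets = torch.arange(local_z.size(0))       # positives on the diagonal
    return F.cross_entropy(logits, targets)

# One collaborative round (sketch): encode the shared batch locally, receive
# peer embeddings, and distill from their average.
#   loss = contrastive_distill_loss(encoder(public_batch),
#                                   torch.stack(peer_embeddings).mean(0))
```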
- A Survey on Differential Privacy with Machine Learning and Future Outlook [0.0]
Differential privacy is used to protect machine learning models from attacks and vulnerabilities.
This survey paper presents differentially private machine learning algorithms, categorized into two main categories.
arXiv Detail & Related papers (2022-11-19T14:20:53Z)
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z)
- Privacy-Preserving Wavelet Neural Network with Fully Homomorphic Encryption [5.010425616264462]
Privacy-Preserving Machine Learning (PPML) aims to protect the privacy of, and provide security for, the data used in building machine learning models.
We propose a fully homomorphic encrypted wavelet neural network that protects privacy without compromising the efficiency of the model.
arXiv Detail & Related papers (2022-05-26T10:40:31Z)
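Fully homomorphic encryption only supports additions and multiplications on ciphertexts, so activations in an encrypted network are typically replaced by low-degree polynomials. The sketch below shows one encrypted affine layer plus a polynomial activation using the TenSEAL library; the encryption parameters, the tiny layer, and the polynomial standing in for a wavelet-style activation are illustrative assumptions, not the paper's model.

```python
# Hypothetical sketch of private inference under CKKS homomorphic encryption
# (TenSEAL): the client encrypts its features; the server evaluates an affine
# layer plus a polynomial activation without ever decrypting.
import tenseal as ts

# Client side: build a CKKS context and encrypt one feature vector.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=16384,
                 coeff_mod_bit_sizes=[60, 40, 40, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()          # rotations needed for encrypted matmul

x = [0.5, -1.2, 3.0, 0.1]           # plaintext features (illustrative)
enc_x = ts.ckks_vector(ctx, x)

# Server side: plaintext weights, encrypted input.
W = [[0.1, 0.2], [0.0, -0.3], [0.5, 0.5], [-0.2, 0.4]]  # 4 -> 2 affine layer
b = [0.01, -0.02]

enc_h = enc_x.matmul(W) + b                 # encrypted affine transform
enc_y = enc_h.polyval([0.0, 0.5, 0.25])     # 0.5*x + 0.25*x^2 activation (assumed)

# Client side: only the secret-key holder can decrypt the result.
print(enc_y.decrypt())
```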
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy [20.95527613004989]
Federated learning is a popular approach for privacy protection that collects the local gradient information instead of real data.
Previous works do not give a practical solution due to three issues; the last is that the privacy budget explodes due to the high dimensionality of the weights in deep learning models.
arXiv Detail & Related papers (2020-07-31T01:08:57Z)
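As a baseline for what "local differential privacy" means here, the sketch below randomizes each client's update before it leaves the device; the clipping bound and Laplace mechanism are generic textbook choices, not LDP-FL's actual randomization.

```python
# Minimal sketch of local differential privacy in federated aggregation,
# loosely in the spirit of LDP-FL: each client perturbs its update locally,
# so the server only ever sees noisy values.
import numpy as np

def ldp_perturb(update, clip=1.0, epsilon=1.0):
    """Clip each coordinate to [-clip, clip], then add Laplace noise with
    scale 2*clip/epsilon (the sensitivity of a clipped coordinate is 2*clip)."""
    w = np.clip(update, -clip, clip)
    return w + np.random.laplace(0.0, 2.0 * clip / epsilon, size=w.shape)

# Zero-mean noise averages out across many clients, recovering the signal:
updates = [ldp_perturb(np.full(10, 0.3)) for _ in range(1000)]
print(np.mean(updates, axis=0))   # close to the clipped true value 0.3
```

Note that this naive scheme spends a budget per coordinate, so the total privacy cost grows with the number of weights; that is the budget explosion mentioned in the summary, and one of the issues LDP-FL sets out to fix.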
- Concentrated Differentially Private and Utility Preserving Federated Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.