PPIDSG: A Privacy-Preserving Image Distribution Sharing Scheme with GAN
in Federated Learning
- URL: http://arxiv.org/abs/2312.10380v1
- Date: Sat, 16 Dec 2023 08:32:29 GMT
- Title: PPIDSG: A Privacy-Preserving Image Distribution Sharing Scheme with GAN
in Federated Learning
- Authors: Yuting Ma, Yuanzhi Yao, Xiaohua Xu
- Abstract summary: Federated learning (FL) has attracted growing attention since it allows for privacy-preserving collaborative training on decentralized clients.
Recent works have revealed that it still has the risk of exposing private data to adversaries.
We propose a privacy-preserving image distribution sharing scheme with GAN (PPIDSG).
- Score: 2.0507547735926424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has attracted growing attention since it allows for
privacy-preserving collaborative training on decentralized clients without
explicitly uploading sensitive data to the central server. However, recent
works have revealed that it still has the risk of exposing private data to
adversaries. In this paper, we conduct reconstruction attacks and enhanced
inference attacks on various datasets to show that sharing trained
classification model parameters with a central server is the main source of
privacy leakage in FL. To tackle this problem, a privacy-preserving image
distribution sharing scheme with GAN (PPIDSG) is proposed, which consists of a
block scrambling-based encryption algorithm, an image distribution sharing
method, and local classification training. Specifically, our method captures
the distribution of a target image domain that has been transformed by the block
encryption algorithm and uploads only generator parameters, avoiding classifier
sharing with negligible influence on model performance. Furthermore, we apply a
feature extractor to promote model utility and train it separately from the
classifier. Extensive experimental results and security analyses
demonstrate the superiority of our proposed scheme compared to other
state-of-the-art defense methods. The code is available at
https://github.com/ytingma/PPIDSG.
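The block scrambling-based encryption step can be pictured as a keyed permutation of non-overlapping image blocks. The sketch below is a minimal illustration under that assumption, written with NumPy; the block size, per-block operations, and key handling are placeholders rather than the exact PPIDSG algorithm (see the repository above for the real implementation).

```python
import numpy as np

def block_scramble(img: np.ndarray, block: int = 4, seed: int = 0) -> np.ndarray:
    """Toy block scrambling: split an HxWxC image into non-overlapping
    blocks and permute them with a key-derived permutation.
    Illustrative stand-in, not the exact PPIDSG encryption algorithm."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    # Cut the image into a grid of (block x block x C) tiles.
    tiles = (img.reshape(h // block, block, w // block, block, c)
                .transpose(0, 2, 1, 3, 4)
                .reshape(-1, block, block, c))
    # Key-derived permutation of the tile order (the "encryption key").
    perm = np.random.default_rng(seed).permutation(len(tiles))
    scrambled = tiles[perm]
    # Reassemble the permuted tiles into an image of the same shape.
    return (scrambled.reshape(h // block, w // block, block, block, c)
                     .transpose(0, 2, 1, 3, 4)
                     .reshape(h, w, c))

if __name__ == "__main__":
    x = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
    y = block_scramble(x, block=4, seed=42)
    print(x.shape, y.shape)  # (32, 32, 3) (32, 32, 3)
```

In the scheme described above, clients train a GAN to map their images into this transformed target domain and upload only the generator's parameters, so the classifier (the main leakage channel identified in the abstract) never leaves the client.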
Related papers
- Federated Learning Nodes Can Reconstruct Peers' Image Data [27.92271597111756]
Federated learning (FL) is a privacy-preserving machine learning framework that enables multiple nodes to train models on their local data.
Prior work has shown that the gradient-sharing steps in FL can be vulnerable to data reconstruction attacks from an honest-but-curious central server.
We show that an honest-but-curious node/client can also launch attacks to reconstruct peers' image data in a centralized system, presenting a severe privacy risk.
arXiv Detail & Related papers (2024-10-07T00:18:35Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning [47.042811490685324]
Mitigating the risk of such information leakage with state-of-the-art differentially private algorithms also does not come for free.
In this paper, we consider a representation learning objective that various parties collaboratively refine on a federated model, with differential privacy guarantees.
We observe a significant performance improvement over the prior work under the same small privacy budget.
arXiv Detail & Related papers (2023-09-11T14:46:55Z)
- FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users [19.209830150036254]
The federated learning (FL) technique was developed to mitigate data privacy issues in the traditional machine learning paradigm.
Next-generation FL architectures proposed encryption and anonymization techniques to protect the model updates from the server.
This paper proposes a novel FL algorithm based on a fully homomorphic encryption (FHE) scheme.
arXiv Detail & Related papers (2023-06-08T11:20:00Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated instead.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
arXiv Detail & Related papers (2022-10-05T13:33:25Z)
- Privacy-Preserving Federated Learning via System Immersion and Random Matrix Encryption [4.258856853258348]
Federated learning (FL) has emerged as a privacy solution for collaborative distributed learning where clients train AI models directly on their devices instead of sharing their data with a centralized (potentially adversarial) server.
We propose a Privacy-Preserving Federated Learning (PPFL) framework built on the synergy of matrix encryption and system immersion tools from control theory.
We show that our algorithm provides the same level of accuracy and convergence rate as the standard FL with a negligible cost while revealing no information about clients' data.
arXiv Detail & Related papers (2022-04-05T21:28:59Z)
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates; a minimal clipped, noise-added aggregation step is sketched after this list.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
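As a companion to the last entry above, here is a minimal sketch of the clipping step it studies: each client update is clipped to a fixed L2 norm before averaging, and Gaussian noise scaled to that clipping bound is added for client-level DP. The function, noise-scale choice, and parameter names below are illustrative assumptions, not that paper's exact algorithm or analysis.

```python
import numpy as np

def clipped_dp_fedavg(updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate client model updates with per-client L2 clipping and
    Gaussian noise (client-level DP). Illustrative sketch only."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        # Scale the update down if its L2 norm exceeds the clipping bound.
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    # Sum of clipped updates plus noise calibrated to the clipping bound.
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=updates[0].shape)
    # Average over the participating clients.
    return noisy_sum / len(updates)

# Example: three clients, flattened model deltas of dimension 5.
deltas = [np.random.default_rng(i).normal(size=5) for i in range(3)]
print(clipped_dp_fedavg(deltas, clip_norm=0.5, noise_multiplier=1.1))
```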