CSVAR: Enhancing Visual Privacy in Federated Learning via Adaptive Shuffling Against Overfitting
- URL: http://arxiv.org/abs/2506.01425v1
- Date: Mon, 02 Jun 2025 08:30:12 GMT
- Title: CSVAR: Enhancing Visual Privacy in Federated Learning via Adaptive Shuffling Against Overfitting
- Authors: Zhuo Chen, Zhenya Ma, Yan Zhang, Donghua Cai, Ye Zhang, Qiushi Li, Yongheng Deng, Ye Guo, Ju Ren, Xuemin Shen
- Abstract summary: CSVAR is an image shuffling framework that generates obfuscated images for secure data transmission in each training epoch. It addresses both overfitting-induced privacy leaks and raw image transmission risks, and produces visually obfuscated images that exhibit high perceptual ambiguity to human eyes.
- Score: 27.262703669563155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although federated learning preserves training data within local privacy domains, the aggregated model parameters may still reveal private characteristics. This vulnerability stems from clients' limited training data, which predisposes models to overfitting. Such overfitting enables models to memorize distinctive patterns from training samples, thereby amplifying the success probability of privacy attacks like membership inference. To enhance visual privacy protection in FL, we present CSVAR (Channel-Wise Spatial Image Shuffling with Variance-Guided Adaptive Region Partitioning), a novel image shuffling framework that generates obfuscated images for secure data transmission in each training epoch, addressing both overfitting-induced privacy leaks and raw image transmission risks. CSVAR adopts region variance as the metric for measuring visual privacy sensitivity across image regions. Guided by this metric, CSVAR adaptively partitions each region into multiple blocks, applying fine-grained partitioning to privacy-sensitive regions with high region variance to strengthen visual privacy protection, and coarse-grained partitioning to privacy-insensitive regions to preserve model utility. Within each region, CSVAR then shuffles blocks in both the spatial domain and the chromatic channels to hide spatial visual features and disrupt color distributions. Experimental evaluations on diverse real-world datasets demonstrate that CSVAR generates visually obfuscated images with high perceptual ambiguity to human eyes, mitigates the effectiveness of adversarial data reconstruction attacks, and achieves a good trade-off between visual privacy protection and model utility.
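The variance-guided shuffling described in the abstract can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the region size, variance threshold, and block counts are assumed values chosen for demonstration.

```python
import numpy as np

def region_variance(region):
    """Mean per-channel pixel variance: a proxy for visual privacy sensitivity."""
    return float(np.mean(np.var(region.reshape(-1, region.shape[-1]), axis=0)))

def shuffle_region(region, blocks_per_side, rng):
    """Partition a square region into blocks, shuffle block positions
    (spatial domain), and permute the color channels of each block."""
    h, w, c = region.shape
    bh, bw = h // blocks_per_side, w // blocks_per_side
    blocks = [region[i*bh:(i+1)*bh, j*bw:(j+1)*bw].copy()
              for i in range(blocks_per_side) for j in range(blocks_per_side)]
    order = rng.permutation(len(blocks))          # spatial shuffling
    out = region.copy()
    for idx, k in enumerate(order):
        i, j = divmod(idx, blocks_per_side)
        perm = rng.permutation(c)                 # chromatic-channel shuffling
        out[i*bh:(i+1)*bh, j*bw:(j+1)*bw] = blocks[k][..., perm]
    return out

def csvar_like_obfuscate(img, region_size=16, var_threshold=500.0, seed=0):
    """Obfuscate an H x W x C uint8 image: fine-grained block partitioning
    for high-variance (privacy-sensitive) regions, coarse-grained otherwise."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w, _ = img.shape
    for y in range(0, h - region_size + 1, region_size):
        for x in range(0, w - region_size + 1, region_size):
            region = out[y:y+region_size, x:x+region_size]
            fine = region_variance(region) > var_threshold
            blocks = 4 if fine else 2             # finer partition where sensitive
            out[y:y+region_size, x:x+region_size] = shuffle_region(region, blocks, rng)
    return out
```

Because blocks and channels are only permuted, the multiset of pixel values in each region is preserved; only the spatial layout and the color assignment are disrupted, which is what gives the utility/privacy trade-off room described in the abstract.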
Related papers
- Machine Learning with Privacy for Protected Attributes [56.44253915927481]
We refine the definition of differential privacy (DP) to create a more general and flexible framework that we call feature differential privacy (FDP). Our definition is simulation-based and allows for both addition/removal and replacement variants of privacy, and can handle arbitrary separation of protected and non-protected features. We apply our framework to various machine learning tasks and show that it can significantly improve the utility of DP-trained models when public features are available.
arXiv Detail & Related papers (2025-06-24T17:53:28Z)
- Shadow defense against gradient inversion attack in federated learning [6.708306069847227]
Federated learning (FL) has emerged as a transformative framework for privacy-preserving distributed training. Gradient inversion attacks (GIAs) allow adversaries to approximate the gradients used for training and reconstruct training images, thus stealing patient privacy. We introduce a framework that addresses these challenges by leveraging a shadow model with interpretability for identifying sensitive areas.
arXiv Detail & Related papers (2025-05-30T17:58:57Z)
- Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models [62.979954692036685]
We introduce PRSS, which refines the classifier-free guidance approach in diffusion models by integrating prompt re-anchoring and semantic prompt search. Our approach consistently improves the privacy-utility trade-off, establishing a new state-of-the-art.
arXiv Detail & Related papers (2025-04-25T02:51:23Z)
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence. We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context. We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Privacy-preserving datasets by capturing feature distributions with Conditional VAEs [0.11999555634662634]
We propose using Conditional Variational Autoencoders (CVAEs) trained on feature vectors extracted from large pre-trained vision foundation models.
Our method notably outperforms traditional approaches in both medical and natural image domains.
Results underscore the potential of generative models to significantly impact deep learning applications in data-scarce and privacy-sensitive environments.
arXiv Detail & Related papers (2024-08-01T15:26:24Z)
- You Can Use But Cannot Recognize: Preserving Visual Privacy in Deep Neural Networks [29.03438707988713]
Existing privacy protection techniques are unable to efficiently protect such data.
We propose a novel privacy-preserving framework VisualMixer.
VisualMixer shuffles pixels in the spatial domain and in the chromatic channel space in the regions without injecting noises.
Experiments on real-world datasets demonstrate that VisualMixer can effectively preserve the visual privacy with negligible accuracy loss.
arXiv Detail & Related papers (2024-04-05T13:49:27Z)
- PPIDSG: A Privacy-Preserving Image Distribution Sharing Scheme with GAN in Federated Learning [2.0507547735926424]
Federated learning (FL) has attracted growing attention since it allows for privacy-preserving collaborative training on decentralized clients.
Recent works have revealed that it still has the risk of exposing private data to adversaries.
We propose a privacy-preserving image distribution sharing scheme with GAN (PPIDSG).
arXiv Detail & Related papers (2023-12-16T08:32:29Z)
- RecUP-FL: Reconciling Utility and Privacy in Federated Learning via User-configurable Privacy Defense [9.806681555309519]
Federated learning (FL) allows clients to collaboratively train a model without sharing their private data.
Recent studies have shown that private information can still be leaked through shared gradients.
We propose a user-configurable privacy defense, RecUP-FL, that can better focus on the user-specified sensitive attributes.
arXiv Detail & Related papers (2023-04-11T10:59:45Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)