Adaptive Hybrid Masking Strategy for Privacy-Preserving Face Recognition Against Model Inversion Attack
- URL: http://arxiv.org/abs/2403.10558v2
- Date: Tue, 23 Apr 2024 09:59:42 GMT
- Title: Adaptive Hybrid Masking Strategy for Privacy-Preserving Face Recognition Against Model Inversion Attack
- Authors: Yinggui Wang, Yuanqing Huang, Jianshu Li, Le Yang, Kai Song, Lei Wang
- Abstract summary: This paper introduces an adaptive hybrid masking algorithm against model inversion attacks (MIA).
Specifically, face images are masked in the frequency domain using an adaptive MixUp strategy.
Experimental results demonstrate that our proposed hybrid masking scheme outperforms existing defense algorithms in terms of privacy preservation and recognition accuracy against MIA.
- Score: 7.82336679905826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The utilization of personal sensitive data in training face recognition (FR) models poses significant privacy concerns, as adversaries can employ model inversion attacks (MIA) to infer the original training data. Existing defense methods, such as data augmentation and differential privacy, have been employed to mitigate this issue. However, these methods often fail to strike an optimal balance between privacy and accuracy. To address this limitation, this paper introduces an adaptive hybrid masking algorithm against MIA. Specifically, face images are masked in the frequency domain using an adaptive MixUp strategy. Unlike the traditional MixUp algorithm, which is predominantly used for data augmentation, our modified approach incorporates frequency-domain mixing. Previous studies have shown that increasing the number of images mixed in MixUp can enhance privacy preservation, but at the expense of reduced face recognition accuracy. To overcome this trade-off, we develop an enhanced adaptive MixUp strategy based on reinforcement learning, which enables us to mix a larger number of images while maintaining satisfactory recognition accuracy. To optimize privacy protection, we maximize the reward function (i.e., the loss function of the FR system) when training the strategy network, while the loss function of the FR network is minimized when training the FR network itself. The strategy network and the face recognition network can thus be viewed as antagonistic entities in the training process, ultimately reaching a more balanced trade-off. Experimental results demonstrate that our proposed hybrid masking scheme outperforms existing defense algorithms in terms of privacy preservation and recognition accuracy against MIA.
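The core frequency-domain MixUp masking step can be illustrated with a short sketch. This is a minimal NumPy approximation rather than the authors' implementation: it assumes a 2-D DFT as the frequency transform and takes the mixing weights as plain inputs, whereas in the paper they would be produced by the reinforcement-learning strategy network; the function name frequency_mixup and its parameters are illustrative.

```python
import numpy as np

def frequency_mixup(images, weights):
    """Mix K face images in the frequency domain (adaptive MixUp sketch).

    images  : array of shape (K, H, W), grayscale faces in [0, 1]
    weights : array of shape (K,), non-negative mixing weights summing to 1
              (stand-ins for the output of a learned strategy network)
    Returns one masked image of shape (H, W).
    """
    spectra = np.fft.fft2(images, axes=(-2, -1))          # per-image 2-D spectra
    mixed = np.tensordot(weights, spectra, axes=(0, 0))   # weighted sum of spectra
    return np.real(np.fft.ifft2(mixed))                   # back to the spatial domain

# Toy usage: mix three random "faces" with uneven weights.
rng = np.random.default_rng(0)
faces = rng.random((3, 112, 112))
masked = frequency_mixup(faces, np.array([0.6, 0.3, 0.1]))
print(masked.shape)  # (112, 112)
```

During training, the strategy network that produces these weights would be updated to maximize the FR loss on the masked images (its reward), while the FR network is updated to minimize that same loss, giving the adversarial interplay described in the abstract.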
Related papers
- A visualization method for data domain changes in CNN networks and the optimization method for selecting thresholds in classification tasks [1.1118946307353794]
Face Anti-Spoofing (FAS) has played a crucial role in preserving the security of face recognition technology.
With the rise of counterfeit face generation techniques, the challenge posed by digitally edited faces to face anti-spoofing is escalating.
We propose a visualization method that intuitively reflects the training outcomes of models by visualizing the prediction results on datasets.
arXiv Detail & Related papers (2024-04-19T03:12:17Z)
- Salience-Based Adaptive Masking: Revisiting Token Dynamics for Enhanced Pre-training [33.39585710223628]
Saliency-Based Adaptive Masking improves pre-training performance of MIM approaches by prioritizing token salience.
We show that our method significantly improves over the state-of-the-art in mask-based pre-training on the ImageNet-1K dataset.
arXiv Detail & Related papers (2024-04-12T08:38:51Z)
- Improving Adversarial Robustness of Masked Autoencoders via Test-time Frequency-domain Prompting [133.55037976429088]
We investigate the adversarial robustness of vision transformers equipped with BERT pretraining (e.g., BEiT, MAE)
A surprising observation is that MAE has significantly worse adversarial robustness than other BERT pretraining methods.
We propose a simple yet effective way to boost the adversarial robustness of MAE.
arXiv Detail & Related papers (2023-08-20T16:27:17Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Private, Efficient, and Accurate: Protecting Models Trained by Multi-party Learning with Differential Privacy [8.8480262507008]
We propose PEA (Private, Efficient, Accurate), which consists of a secure DPSGD protocol and two optimization methods.
We implement PEA in two open-source MPL frameworks: TF-Encrypted and Queqiao.
Experiments show that PEA can train a differentially private classification model with an accuracy of 88% for CIFAR-10 within 7 minutes under the LAN setting.
arXiv Detail & Related papers (2022-08-18T06:48:25Z)
- Federated Learning for Face Recognition with Gradient Correction [52.896286647898386]
In this work, we introduce a framework, FedGC, to tackle federated learning for face recognition.
We show that FedGC constitutes a valid loss function similar to standard softmax.
arXiv Detail & Related papers (2021-12-14T09:19:29Z)
- Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching [91.50631418179331]
A privacy-preserving distributed deep policy gradient (P2D3PG) is proposed to maximize the cache hit rates of devices in the MEC networks.
We convert the distributed optimizations into model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction.
arXiv Detail & Related papers (2021-10-20T02:48:27Z)
- Practical and Secure Federated Recommendation with Personalized Masks [24.565751694946062]
Federated recommendation is a new notion of private distributed recommender systems.
Current recommender systems mainly utilize homomorphic encryption and differential privacy methods.
In this paper, we propose a new federated recommendation framework, named federated masked matrix factorization.
arXiv Detail & Related papers (2021-08-18T07:12:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.