Federated Learning Method for Preserving Privacy in Face Recognition System
- URL: http://arxiv.org/abs/2403.05344v1
- Date: Fri, 8 Mar 2024 14:21:43 GMT
- Title: Federated Learning Method for Preserving Privacy in Face Recognition System
- Authors: Enoch Solomon and Abraham Woubie
- Abstract summary: We explore the application of federated learning, both with and without secure aggregators, in the context of supervised and unsupervised face recognition systems.
In our proposed system, each edge device independently trains its own model, which is transmitted either to a secure aggregator or directly to the central server.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The state-of-the-art face recognition systems are typically trained on a
single computer, utilizing extensive image datasets collected from a large
number of users. However, these datasets often contain sensitive personal
information that users may hesitate to disclose. To address potential privacy
concerns, we explore the application of federated learning, both with and
without secure aggregators, in the context of both supervised and unsupervised
face recognition systems. Federated learning facilitates the training of a
shared model without necessitating the sharing of individual private data,
achieving this by training models on decentralized edge devices housing the
data. In our proposed system, each edge device independently trains its own
model, which is subsequently transmitted either to a secure aggregator or
directly to the central server. To introduce diverse data without the need for
data transmission, we employ generative adversarial networks to generate
imposter data at the edge. Following this, the secure aggregator or central
server combines these individual models to construct a global model, which is
then relayed back to the edge devices. Experimental findings based on the
CelebA dataset reveal that employing federated learning in both supervised and
unsupervised face recognition systems offers dual benefits. Firstly, it
safeguards privacy since the original data remains on the edge devices.
Secondly, the experimental results demonstrate that the aggregated model
achieves performance nearly identical to that of the individual models, particularly
when the federated model does not utilize a secure aggregator. Hence, our
results shed light on the practical challenges associated with
privacy-preserving face image training, particularly in terms of the balance
between privacy and accuracy.
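The training round described in the abstract (local training on each edge device, aggregation on a server, redistribution of the global model) can be sketched as plain federated averaging. This is a minimal, hypothetical illustration: the helper names are invented, the "training" step is a stand-in for real model updates on private face images, and the secure-aggregator variant is omitted.

```python
# Minimal federated-averaging sketch. Each edge device updates its own copy
# of the model; the server averages the resulting weights into a global model.
import random

def local_train(weights, lr=0.1):
    # Placeholder for on-device training: a small random perturbation stands
    # in for gradient steps on the device's private face images, which never
    # leave the device.
    return [w - lr * random.uniform(-1.0, 1.0) for w in weights]

def federated_average(client_weights):
    # Server-side aggregation: element-wise mean of the client models.
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# One communication round with three edge devices.
global_model = [0.0, 0.0, 0.0]
client_models = [local_train(global_model) for _ in range(3)]
global_model = federated_average(client_models)  # relayed back to the devices
```

Only the model weights cross the network here; with a secure aggregator, the server would instead receive a masked sum and never see any individual client's weights.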
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train directly on centralized data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Privacy Side Channels in Machine Learning Systems [87.53240071195168]
We introduce privacy side channels: attacks that exploit system-level components to extract private information.
For example, we show that deduplicating training data before applying differentially-private training creates a side-channel that completely invalidates any provable privacy guarantees.
We further show that systems which block language models from regenerating training data can be exploited to exfiltrate private keys contained in the training set.
arXiv Detail & Related papers (2023-09-11T16:49:05Z)
- Uncertainty-Autoencoder-Based Privacy and Utility Preserving Data Type Conscious Transformation [3.7315964084413173]
We propose an adversarial learning framework that deals with the privacy-utility tradeoff problem under two conditions.
Under data-type ignorant conditions, the privacy mechanism provides a one-hot encoding of categorical features, representing exactly one class.
Under data-type aware conditions, the categorical variables are represented by a collection of scores, one for each class.
arXiv Detail & Related papers (2022-05-04T08:40:15Z)
- Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
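The FedFreq rule summarized above can be sketched as a participation-weighted average. This is an assumption about its general shape, not the paper's exact formulation: the function name and the normalization of counts into weights are illustrative.

```python
def fedfreq_aggregate(client_weights, participation_counts):
    # Hypothetical FedFreq-style aggregation: weight each client's model by
    # its (normalized) frequency of participation in training rounds, then
    # take the element-wise weighted sum.
    total = sum(participation_counts)
    coeffs = [c / total for c in participation_counts]
    return [sum(c * w for c, w in zip(coeffs, ws))
            for ws in zip(*client_weights)]
```

A client seen in three of four rounds would thus contribute three times the weight of a client seen once, damping the influence of rarely connected UAVs.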
arXiv Detail & Related papers (2022-01-03T16:49:33Z)
- Joint Optimization in Edge-Cloud Continuum for Federated Unsupervised Person Re-identification [24.305773593017932]
FedUReID is a federated unsupervised person ReID system to learn person ReID models without any labels while preserving privacy.
To tackle the problem that edges vary in data volumes and distributions, we personalize training in edges with joint optimization of cloud and edge.
Experiments on eight person ReID datasets demonstrate that FedUReID not only achieves higher accuracy but also reduces computation cost by 29%.
arXiv Detail & Related papers (2021-08-14T08:35:55Z)
- SCEI: A Smart-Contract Driven Edge Intelligence Framework for IoT Systems [15.796325306292134]
Federated learning (FL) enables collaborative training of a shared model on edge devices while maintaining data privacy.
Various personalized approaches have been proposed, but such approaches fail to handle underlying shifts in data distribution.
This paper presents a dynamically optimized personal deep learning scheme based on blockchain and federated learning.
arXiv Detail & Related papers (2021-03-12T02:57:05Z)
- Reliability Check via Weight Similarity in Privacy-Preserving Multi-Party Machine Learning [7.552100672006174]
We focus on addressing the concerns of data privacy, model privacy, and data quality associated with multi-party machine learning.
We present a scheme for privacy-preserving collaborative learning that checks the participants' data quality while guaranteeing data and model privacy.
arXiv Detail & Related papers (2021-01-14T08:55:42Z)
- Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data nor collecting any
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representation.
The goal of this framework is to learn a feature extractor that can hide the privacy information from the intermediate representations; while maximally retaining the original information embedded in the raw data for the data collector to accomplish unknown learning tasks.
arXiv Detail & Related papers (2020-05-23T06:21:26Z)
- Concentrated Differentially Private and Utility Preserving Federated Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z)
- Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification [93.5538147928669]
We audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images.
We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present.
arXiv Detail & Related papers (2020-01-09T15:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.