Federated Face Presentation Attack Detection
- URL: http://arxiv.org/abs/2005.14638v2
- Date: Tue, 29 Sep 2020 03:01:14 GMT
- Title: Federated Face Presentation Attack Detection
- Authors: Rui Shao, Pramuditha Perera, Pong C. Yuen, Vishal M. Patel
- Abstract summary: Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
We propose Federated Face Presentation Attack Detection (FedPAD) framework.
FedPAD simultaneously takes advantage of rich fPAD information available at different data owners while preserving data privacy.
- Score: 93.25058425356694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face presentation attack detection (fPAD) plays a critical role in the modern
face recognition pipeline. A face presentation attack detection model with good
generalization can be obtained when it is trained with face images from
different input distributions and different types of spoof attacks. In reality,
training data (both real face images and spoof images) are not directly shared
between data owners due to legal and privacy issues. In this paper, with the
motivation of circumventing this challenge, we propose Federated Face
Presentation Attack Detection (FedPAD) framework. FedPAD simultaneously takes
advantage of rich fPAD information available at different data owners while
preserving data privacy. In the proposed framework, each data owner (referred
to as a "data center") locally trains its own fPAD model. A server learns
a global fPAD model by iteratively aggregating model updates from all data
centers without accessing private data in each of them. Once the learned global
model converges, it is used for fPAD inference. We introduce the experimental
setting to evaluate the proposed FedPAD framework and carry out extensive
experiments to provide various insights about federated learning for fPAD.
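The page itself contains no code. As an illustration only, the following is a minimal FedAvg-style sketch of the loop the abstract describes: each data center trains locally on its private real/spoof data, and the server aggregates weight updates without ever seeing the underlying face images. The averaging rule, the toy linear scorer, and all names below are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

def local_update(global_weights, data, lr=0.1, steps=5):
    """One data center's local training (toy logistic regression;
    stands in for a real fPAD model). Private data never leaves here."""
    w = global_weights.copy()
    X, y = data  # features and live(0)/spoof(1) labels
    for _ in range(steps):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid scores
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w -= lr * grad
    return w

def federated_round(global_weights, centers):
    """Server step: aggregate locally trained weights from all data
    centers (size-weighted mean, as in FedAvg) without accessing data."""
    updates = [local_update(global_weights, d) for d in centers]
    sizes = np.array([len(d[1]) for d in centers], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Simulate three data centers, each holding private (synthetic) data.
rng = np.random.default_rng(0)
dim = 4
true_w = rng.normal(size=dim)
centers = []
for _ in range(3):
    X = rng.normal(size=(50, dim))
    y = (X @ true_w > 0).astype(float)
    centers.append((X, y))

# Iterate rounds until the global model converges, then use it for inference.
w = np.zeros(dim)
for _ in range(20):
    w = federated_round(w, centers)
```

Note that only model weights cross the network in this sketch; the raw images (here, the synthetic `X` arrays) stay with their owners, which is the privacy property the abstract emphasizes.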
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative models can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods train directly on centrally collected data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- FedForgery: Generalized Face Forgery Detection with Residual Federated Learning [87.746829550726]
Existing face forgery detection methods train directly on publicly shared or centralized data.
The paper proposes a novel generalized residual federated learning approach for face forgery detection (FedForgery).
Experiments conducted on publicly available face forgery detection datasets prove the superior performance of the proposed FedForgery.
arXiv Detail & Related papers (2022-10-18T03:32:18Z)
- ARFED: Attack-Resistant Federated averaging based on outlier elimination [0.0]
In federated learning, each participant trains its local model with its own data and a global model is formed at a trusted server.
Since the server has no control over, or visibility into, the participants' training procedures (in order to preserve privacy), the global model becomes vulnerable to attacks such as data poisoning and model poisoning.
We propose a defense algorithm called ARFED that does not make any assumptions about data distribution, update similarity of participants, or the ratio of the malicious participants.
arXiv Detail & Related papers (2021-11-08T15:00:44Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Shuffled Patch-Wise Supervision for Presentation Attack Detection [12.031796234206135]
Face anti-spoofing is essential to prevent false facial verification by using a photo, video, mask, or a different substitute for an authorized person's face.
Most presentation attack detection systems suffer from overfitting, where they achieve near-perfect scores on a single dataset but fail on a different dataset with more realistic data.
We propose a new PAD approach, which combines pixel-wise binary supervision with a patch-based CNN.
arXiv Detail & Related papers (2021-09-08T08:14:13Z)
- Federated Generalized Face Presentation Attack Detection [112.27662334648302]
We propose a Federated Face Presentation Attack Detection (FedPAD) framework.
FedPAD takes advantage of rich fPAD information available at different data owners while preserving data privacy.
A server learns a global fPAD model by only aggregating domain-invariant parts of the fPAD models from data centers.
arXiv Detail & Related papers (2021-04-14T02:44:53Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.