Federated Test-Time Adaptive Face Presentation Attack Detection with
Dual-Phase Privacy Preservation
- URL: http://arxiv.org/abs/2110.12613v1
- Date: Mon, 25 Oct 2021 02:51:05 GMT
- Title: Federated Test-Time Adaptive Face Presentation Attack Detection with
Dual-Phase Privacy Preservation
- Authors: Rui Shao, Bochao Zhang, Pong C. Yuen, Vishal M. Patel
- Abstract summary: Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
- Score: 100.69458267888962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face presentation attack detection (fPAD) plays a critical role in the modern
face recognition pipeline. The generalization ability of face presentation
attack detection models to unseen attacks has become a key issue for real-world
deployment, which can be improved when models are trained with face images from
different input distributions and different types of spoof attacks. In reality,
due to legal and privacy issues, training data (both real face images and spoof
images) are not allowed to be directly shared between different data sources.
In this paper, to circumvent this challenge, we propose a Federated Test-Time
Adaptive Face Presentation Attack Detection with Dual-Phase Privacy
Preservation framework, with the aim of enhancing the generalization ability of
fPAD models in both the training and testing phases while preserving data privacy.
In the training phase, the proposed framework exploits the federated learning
technique, which simultaneously takes advantage of rich fPAD information
available at different data sources by aggregating model updates from them
without accessing their private data. To further boost the generalization
ability, in the testing phase, we explore test-time adaptation by minimizing
the entropy of fPAD model prediction on the testing data, which alleviates the
domain gap between training and testing data and thus reduces the
generalization error of an fPAD model. We introduce the experimental setting to
evaluate the proposed framework and carry out extensive experiments to provide
various insights about the proposed method for fPAD.
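As a rough illustration of the two phases described in the abstract, the sketch below combines FedAvg-style server aggregation of client model updates with Tent-style entropy minimization on unlabeled test data. It is a minimal sketch under stated assumptions, not the authors' released implementation; the helper names (`federated_average`, `entropy_minimization_step`), the model/loader names, and the binary real-vs-spoof output are illustrative.

```python
# Minimal sketch (not the authors' code) of the two phases described above:
# (1) FedAvg-style aggregation of client fPAD model updates on a server, and
# (2) entropy minimization on test data (Tent-style test-time adaptation).
# Variable and function names here are illustrative assumptions.

import copy
import torch
import torch.nn.functional as F


def federated_average(client_state_dicts):
    """Server step: average client parameters without accessing private data."""
    global_state = copy.deepcopy(client_state_dicts[0])
    for key in global_state:
        stacked = torch.stack([sd[key].float() for sd in client_state_dicts], dim=0)
        global_state[key] = stacked.mean(dim=0)
    return global_state


def entropy_minimization_step(model, optimizer, x):
    """Test-time adaptation: minimize the entropy of the model's predictions
    on unlabeled test data to reduce the train/test domain gap."""
    logits = model(x)                                   # (batch, 2): real vs. spoof
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```

In a Tent-style setup the optimizer would typically be built over only the affine parameters of the normalization layers, so that adaptation at test time stays lightweight and stable.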
Related papers
- Unsupervised Fingerphoto Presentation Attack Detection With Diffusion Models [8.979820109339286]
Smartphone-based contactless fingerphoto authentication has become a reliable alternative to traditional contact-based fingerprint biometric systems.
Despite its convenience, fingerprint authentication through fingerphotos is more vulnerable to presentation attacks.
We propose a novel unsupervised approach based on a state-of-the-art deep-learning-based diffusion model, the Denoising Diffusion Probabilistic Model (DDPM).
The proposed approach detects Presentation Attacks (PA) by calculating the reconstruction similarity between the input and output pairs of the DDPM.
arXiv Detail & Related papers (2024-09-27T11:07:48Z)
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly centralizing the training data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, which undermine model integrity through both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Disentangled Representation with Dual-stage Feature Learning for Face Anti-spoofing [18.545438302664756]
It is essential to learn more generalized and discriminative features to prevent overfitting to pre-defined spoof attack types.
This paper proposes a novel dual-stage disentangled representation learning method that can efficiently untangle spoof-related features from irrelevant ones.
arXiv Detail & Related papers (2021-10-18T10:22:52Z)
- Federated Generalized Face Presentation Attack Detection [112.27662334648302]
We propose a Federated Face Presentation Attack Detection (FedPAD) framework.
FedPAD takes advantage of rich fPAD information available at different data owners while preserving data privacy.
A server learns a global fPAD model by aggregating only the domain-invariant parts of the fPAD models from data centers (see the sketch after this list).
arXiv Detail & Related papers (2021-04-14T02:44:53Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce sampling attack, a novel membership inference technique that unlike other standard membership adversaries is able to work under severe restriction of no access to scores of the victim model.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- Federated Face Presentation Attack Detection [93.25058425356694]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
We propose Federated Face Presentation Attack Detection (FedPAD) framework.
FedPAD simultaneously takes advantage of rich fPAD information available at different data owners while preserving data privacy.
arXiv Detail & Related papers (2020-05-29T15:56:01Z)
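The sketch below illustrates the kind of partial aggregation referenced in the Federated Generalized Face Presentation Attack Detection entry above: the server averages only a designated "domain-invariant" subset of parameters and leaves the remaining, domain-specific parameters untouched. The name-based split (the `shared_encoder.` prefix) and the helper `aggregate_domain_invariant` are assumptions for illustration, not the paper's actual model partitioning.

```python
# Hypothetical sketch of aggregating only the "domain-invariant" parts of
# client fPAD models on the server. The prefix-based split is an assumption;
# the cited paper's actual partitioning may differ.

import copy
import torch


def is_shared(param_name: str) -> bool:
    """Illustrative rule: treat a shared encoder as the domain-invariant part."""
    return param_name.startswith("shared_encoder.")


def aggregate_domain_invariant(global_state, client_state_dicts):
    """Average only the shared (domain-invariant) parameters across data centers;
    all other parameters keep their values from the current global model."""
    new_state = copy.deepcopy(global_state)
    for key in new_state:
        if is_shared(key):
            stacked = torch.stack([sd[key].float() for sd in client_state_dicts], dim=0)
            new_state[key] = stacked.mean(dim=0)
    return new_state
```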
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.