FedSIS: Federated Split Learning with Intermediate Representation
Sampling for Privacy-preserving Generalized Face Presentation Attack
Detection
- URL: http://arxiv.org/abs/2308.10236v2
- Date: Tue, 22 Aug 2023 16:09:09 GMT
- Authors: Naif Alkhunaizi, Koushik Srivatsan, Faris Almalik, Ibrahim Almakky,
Karthik Nandakumar
- Abstract summary: Lack of generalization to unseen domains/attacks is the Achilles heel of most face presentation attack detection (FacePAD) algorithms.
In this work, a novel framework called Federated Split learning with Intermediate representation Sampling (FedSIS) is introduced for privacy-preserving domain generalization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lack of generalization to unseen domains/attacks is the Achilles heel of most
face presentation attack detection (FacePAD) algorithms. Existing attempts to
enhance the generalizability of FacePAD solutions assume that data from
multiple source domains are available with a single entity to enable
centralized training. In practice, data from different source domains may be
collected by diverse entities, who are often unable to share their data due to
legal and privacy constraints. While collaborative learning paradigms such as
federated learning (FL) can overcome this problem, standard FL methods are
ill-suited for domain generalization because they struggle to surmount the twin
challenges of handling non-iid client data distributions during training and
generalizing to unseen domains during inference. In this work, a novel
framework called Federated Split learning with Intermediate representation
Sampling (FedSIS) is introduced for privacy-preserving domain generalization.
In FedSIS, a hybrid Vision Transformer (ViT) architecture is learned using a
combination of FL and split learning to achieve robustness against statistical
heterogeneity in the client data distributions without any sharing of raw data
(thereby preserving privacy). To further improve generalization to unseen
domains, a novel feature augmentation strategy called intermediate
representation sampling is employed, and discriminative information from
intermediate blocks of a ViT is distilled using a shared adapter network. The
FedSIS approach has been evaluated on two well-known benchmarks for
cross-domain FacePAD to demonstrate that it is possible to achieve
state-of-the-art generalization performance without data sharing. Code:
https://github.com/Naiftt/FedSIS
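The abstract describes sampling features from intermediate ViT blocks and distilling them through a shared adapter network. The following is a minimal, hypothetical sketch of that idea (not the authors' implementation): a stack of stand-in "blocks" caches intermediate features, one intermediate block is sampled uniformly at random, and its features pass through a shared linear adapter. All function names, dimensions, and the sampling range are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def vit_block(x, w):
    # Stand-in for one transformer block: a nonlinear map plus residual.
    return x + np.tanh(x @ w)

def forward_with_sampling(x, block_weights, adapter_w, sample_range=(3, 9)):
    """Run a stack of blocks, cache intermediate features, sample one
    intermediate block uniformly from sample_range, and distill its
    features through a shared adapter (illustrative only)."""
    feats = []
    h = x
    for w in block_weights:
        h = vit_block(h, w)
        feats.append(h)
    lo, hi = sample_range
    idx = rng.integers(lo, hi)      # intermediate representation sampling
    sampled = feats[idx]
    return sampled @ adapter_w      # shared adapter produces logits

dim = 16
blocks = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(12)]
adapter = rng.standard_normal((dim, 2)) * 0.1  # e.g. bonafide/attack logits
x = rng.standard_normal((1, dim))
logits = forward_with_sampling(x, blocks, adapter)
print(logits.shape)  # (1, 2)
```

Because a different intermediate block is sampled at each forward pass, the shared adapter is exposed to varying feature statistics, which is the augmentation effect the abstract attributes to intermediate representation sampling.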
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are nearly indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on centralized training over directly pooled data.
The paper proposes a novel federated face forgery detection learning framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a Parameter-Efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Generalizable Heterogeneous Federated Cross-Correlation and Instance
Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
To address heterogeneity, we leverage irrelevant unlabeled public data for communication.
To mitigate catastrophic forgetting in the local-updating stage, FCCL+ introduces Federated Non-Target Distillation.
arXiv Detail & Related papers (2023-09-28T09:32:27Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially
Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this framework enhances the GAN release-and-training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several Federated Learning algorithms, such as FedAvg, FedProx and Federated Curvature (FedCurv) have already been proposed.
As a side product of this work, we release non-IID versions of the datasets we used, so as to facilitate further comparisons by the FL community.
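The benchmarking entry above compares FedAvg-style algorithms. FedAvg itself aggregates client models by a sample-count-weighted average; a minimal sketch (parameter vectors and client sizes are illustrative):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameters (FedAvg).
    client_weights: list of parameter arrays, one per client.
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

w1 = np.array([1.0, 2.0])
w2 = np.array([3.0, 4.0])
global_w = fedavg([w1, w2], [10, 30])
print(global_w)  # [2.5 3.5]
```

Under non-IID client data, the averaged model can drift from each client's local optimum, which is the statistical-heterogeneity problem that variants such as FedProx and FedCurv try to mitigate.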
arXiv Detail & Related papers (2023-03-31T10:13:01Z) - FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for
Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating the locally-computed parameter updates with the training data from spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to fields such as medical healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z) - Differentially Private Federated Learning on Heterogeneous Data [10.431137628048356]
Federated Learning (FL) is a paradigm for large-scale distributed learning.
It faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.
We propose a novel FL approach to tackle these two challenges together by incorporating Differential Privacy (DP) constraints.
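The entry above combines FL with differential privacy constraints. One standard mechanism for this (a generic Gaussian-mechanism sketch, not necessarily the paper's exact method) is to clip each client's update to a fixed norm, average, and add calibrated Gaussian noise; the clipping bound and noise multiplier below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_aggregate(updates, clip_norm=1.0, noise_mult=0.5):
    """Clip each client update to clip_norm, average the clipped
    updates, then add Gaussian noise scaled to the clipping bound."""
    clipped = []
    for u in updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / norm))
    mean = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

updates = [rng.standard_normal(4) for _ in range(8)]
noisy = dp_aggregate(updates)
print(noisy.shape)  # (4,)
```

Clipping bounds each client's influence on the aggregate, so the added noise yields a quantifiable privacy guarantee at the cost of a noisier global update.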
arXiv Detail & Related papers (2021-11-17T18:23:49Z) - Collaborative Semantic Aggregation and Calibration for Federated Domain
Generalization [28.573872986524794]
Domain generalization (DG) aims to learn, from multiple known source domains, a model that generalizes well to unknown target domains.
In this paper, we tackle the problem of federated domain generalization where the source datasets can only be accessed locally.
We conduct data-free semantic aggregation by fusing the models trained on separated domains layer-by-layer.
arXiv Detail & Related papers (2021-10-13T14:08:29Z) - Generalizable Person Re-identification with Relevance-aware Mixture of
Experts [45.13716166680772]
We propose a novel method called the relevance-aware mixture of experts (RaMoE)
RaMoE uses an effective voting-based mixture mechanism to dynamically leverage source domains' diverse characteristics to improve the model's generalization.
Considering the target domains' invisibility during training, we propose a novel learning-to-learn algorithm combined with our relation alignment loss to update the voting network.
arXiv Detail & Related papers (2021-05-19T14:19:34Z) - Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation
Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)