Privacy-preserving Federated Adversarial Domain Adaption over Feature
Groups for Interpretability
- URL: http://arxiv.org/abs/2111.10934v1
- Date: Mon, 22 Nov 2021 01:13:43 GMT
- Title: Privacy-preserving Federated Adversarial Domain Adaption over Feature
Groups for Interpretability
- Authors: Yan Kang, Yang Liu, Yuezhou Wu, Guoqiang Ma, Qiang Yang
- Abstract summary: PrADA is a privacy-preserving adversarial domain adaptation approach.
We exploit domain expertise to split the feature space into multiple groups, each of which holds relevant features.
We learn a semantically meaningful high-order feature from each feature group.
- Score: 12.107058397549771
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel privacy-preserving federated adversarial domain adaptation
approach ($\textbf{PrADA}$) to address an under-studied but practical
cross-silo federated domain adaptation problem, in which the party of the
target domain is insufficient in both samples and features. We address the
lack-of-feature issue by extending the feature space through vertical federated
learning with a feature-rich party and tackle the sample-scarce issue by
performing adversarial domain adaptation from the sample-rich source party to
the target party. In this work, we focus on financial applications where
interpretability is critical. However, existing adversarial domain adaptation
methods typically apply a single feature extractor to learn feature
representations that have low interpretability with respect to the target task. To
improve interpretability, we exploit domain expertise to split the feature
space into multiple groups, each of which holds relevant features, and we learn a
semantically meaningful high-order feature from each feature group. In
addition, we apply a feature extractor (along with a domain discriminator) for
each feature group to enable fine-grained domain adaptation. We design a
secure protocol that enables PrADA to be carried out in a secure and efficient
manner. We evaluate our approach on two tabular datasets. Experiments
demonstrate both the effectiveness and practicality of our approach.
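A minimal NumPy-only sketch of the per-feature-group architecture the abstract describes, with one feature extractor and one domain discriminator per group. The group names, dimensions, and randomly initialized weights are illustrative assumptions, not the authors' implementation; only the structure (expert-defined groups, one high-order feature per group, one discriminator per group for fine-grained adaptation) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature groups defined by domain expertise
# (column indices into the raw feature matrix).
feature_groups = {
    "repayment_history": [0, 1, 2],
    "account_activity": [3, 4],
    "demographics": [5, 6, 7],
}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GroupAdapter:
    """One feature extractor plus one domain discriminator per group."""
    def __init__(self, in_dim, hidden=4):
        self.W_ext = rng.normal(size=(in_dim, hidden)) * 0.1
        self.w_high = rng.normal(size=hidden) * 0.1   # -> 1 high-order feature
        self.w_disc = rng.normal(size=hidden) * 0.1   # domain discriminator

    def extract(self, x_group):
        h = np.tanh(x_group @ self.W_ext)             # group-wise hidden features
        return h, sigmoid(h @ self.w_high)            # one high-order feature

    def discriminate(self, h):
        return sigmoid(h @ self.w_disc)               # P(sample is from source)

adapters = {g: GroupAdapter(len(cols)) for g, cols in feature_groups.items()}

X = rng.normal(size=(5, 8))                           # 5 samples, 8 raw features
high_order, domain_probs = [], []
for g, cols in feature_groups.items():
    h, f = adapters[g].extract(X[:, cols])
    high_order.append(f)
    domain_probs.append(adapters[g].discriminate(h))

# Interpretable representation: one high-order feature per group,
# fed to the downstream (e.g. credit-scoring) label predictor.
Z = np.stack(high_order, axis=1)
print(Z.shape)                                        # (5, 3)
```

In training, each group's discriminator would be adversarially confused (e.g. via a gradient reversal layer) so that the group's high-order feature becomes domain-invariant, which is what makes the adaptation fine-grained rather than monolithic.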
Related papers
- AIR-DA: Adversarial Image Reconstruction for Unsupervised Domain
Adaptive Object Detection [28.22783703278792]
The paper proposes Adversarial Image Reconstruction (AIR) as a regularizer to facilitate the adversarial training of the feature extractor.
Our evaluations across several datasets of challenging domain shifts demonstrate that the proposed method outperforms all previous methods.
arXiv Detail & Related papers (2023-03-27T16:51:51Z)
- Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA) has attracted considerable attention, which transfers knowledge from a label-rich source domain to a related but unlabeled target domain.
We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z)
- Joint Distribution Alignment via Adversarial Learning for Domain Adaptive Object Detection [11.262560426527818]
Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data.
Recently, mainstream approaches perform this task through adversarial learning, yet still suffer from two limitations.
We propose a joint adaptive detection framework (JADF) to address the above challenges.
arXiv Detail & Related papers (2021-09-19T00:27:08Z)
- ToAlign: Task-oriented Alignment for Unsupervised Domain Adaptation [84.90801699807426]
We study what features should be aligned across domains and propose to make the domain alignment proactively serve classification.
We explicitly decompose a feature in the source domain into a task-related/discriminative feature that should be aligned and a task-irrelevant feature that should be avoided/ignored.
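A loose, NumPy-only illustration of this decomposition idea (not ToAlign's actual training objective): for a linear classifier, the gradient of the true-class logit with respect to the feature is just the class weight vector, so per-channel contributions can weight the feature into a task-related part and a residual task-irrelevant part. All names and dimensions here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

f = rng.normal(size=6)       # a source-domain feature vector
w_y = rng.normal(size=6)     # classifier weights for the ground-truth class

# Task-oriented weighting: channels contributing more to the class score
# get higher weight (d(logit)/d(f) = w_y for a linear classifier).
a = np.abs(w_y * f)
a = a / a.sum()              # soft attention over feature channels

f_task = a * f               # task-related part: align this across domains
f_irrelevant = f - f_task    # task-irrelevant part: exclude from alignment
```

The decomposition is exact by construction (`f_task + f_irrelevant == f`); the point is that only the discriminative part participates in domain alignment.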
arXiv Detail & Related papers (2021-06-21T02:17:48Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Disentanglement-based Cross-Domain Feature Augmentation for Effective Unsupervised Domain Adaptive Person Re-identification [87.72851934197936]
Unsupervised domain adaptive (UDA) person re-identification (ReID) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain for person matching.
One challenge is how to generate target domain samples with reliable labels for training.
We propose a Disentanglement-based Cross-Domain Feature Augmentation strategy.
arXiv Detail & Related papers (2021-03-25T15:28:41Z)
- Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from source domain to target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers the source-specifics.
We create counterfactual features that distinguish the domain-specific components from the domain-sharable ones.
arXiv Detail & Related papers (2020-11-07T09:53:13Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.