FairVFL: A Fair Vertical Federated Learning Framework with Contrastive
Adversarial Learning
- URL: http://arxiv.org/abs/2206.03200v2
- Date: Mon, 31 Oct 2022 09:18:34 GMT
- Title: FairVFL: A Fair Vertical Federated Learning Framework with Contrastive
Adversarial Learning
- Authors: Tao Qi, Fangzhao Wu, Chuhan Wu, Lingjuan Lyu, Tong Xu, Zhongliang
Yang, Yongfeng Huang, Xing Xie
- Abstract summary: We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
To protect user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
- Score: 102.92349569788028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vertical federated learning (VFL) is a machine learning paradigm
that learns models from features distributed across different platforms in a
privacy-preserving way. Since in real-world applications the data may
contain bias on fairness-sensitive features (e.g., gender), VFL models may
inherit bias from training data and become unfair for some user groups.
However, existing fair machine learning methods usually rely on centralized
storage of fairness-sensitive features to achieve model fairness, making them
inapplicable in federated scenarios. In this paper, we propose a fair
vertical federated learning framework (FairVFL), which can improve the fairness
of VFL models. The core idea of FairVFL is to learn unified and fair
representations of samples based on the decentralized feature fields in a
privacy-preserving way. Specifically, each platform with fairness-insensitive
features first learns local data representations from local features. Then,
these local representations are uploaded to a server and aggregated into a
unified representation for the target task. In order to learn a fair unified
representation, we send it to each platform storing fairness-sensitive features
and apply adversarial learning to remove bias from the unified representation
inherited from the biased data. Moreover, for protecting user privacy, we
further propose a contrastive adversarial learning method to remove private
information from the unified representation in server before sending it to the
platforms keeping fairness-sensitive features. Experiments on three real-world
datasets validate that our method can effectively improve model fairness with
user privacy well-protected.
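The pipeline described in the abstract can be sketched numerically. Everything below is a toy illustration with invented dimensions and a plain logistic adversary, not the paper's implementation; the single gradient-reversal step stands in for the full adversarial training loop.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Hypothetical feature fields held by two platforms (dimensions are made up).
X1 = rng.normal(size=(n, 4))       # platform 1: fairness-insensitive features
X2 = rng.normal(size=(n, 3))       # platform 2: fairness-insensitive features
s = rng.integers(0, 2, size=n)     # fairness-sensitive attribute (e.g., gender)

W1 = 0.1 * rng.normal(size=(4, 5))  # local encoder weights, platform 1
W2 = 0.1 * rng.normal(size=(3, 5))  # local encoder weights, platform 2

def local_rep(X, W):
    # Each platform maps its own features to a local representation.
    return np.tanh(X @ W)

# Server: aggregate local representations into a unified representation.
z = np.concatenate([local_rep(X1, W1), local_rep(X2, W2)], axis=1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Adversary (conceptually on the platform holding s): logistic regression
# trained to predict the sensitive attribute from the unified representation.
wa = np.zeros(z.shape[1])
for _ in range(200):
    grad_wa = z.T @ (sigmoid(z @ wa) - s) / n
    wa -= 0.5 * grad_wa

# Gradient reversal: move the unified representation in the direction that
# INCREASES the adversary's loss, scrubbing information about s from it.
grad_z = np.outer(sigmoid(z @ wa) - s, wa) / n   # d(adversary loss)/dz
z_fair = z + 10.0 * grad_z                       # reversed-sign update
```

In the actual framework this debiasing happens inside a training loop, and a further contrastive adversarial step removes private information from `z` before it ever leaves the server; the sketch only shows the direction of the representation update.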
Related papers
- Decoupled Vertical Federated Learning for Practical Training on
Vertically Partitioned Data [9.84489449520821]
We propose a blockwise learning approach to Vertical Federated Learning (VFL).
In VFL, a host client owns data labels for each entity and learns a final representation based on intermediate local representations from all guest clients.
We implement DVFL to train split neural networks and show that model performance is comparable to VFL on a variety of classification datasets.
arXiv Detail & Related papers (2024-03-06T17:23:28Z)
- Fair Differentially Private Federated Learning Framework [0.0]
Federated learning (FL) is a distributed machine learning strategy that enables participants to collaborate and train a shared model without sharing their individual datasets.
Privacy and fairness are crucial considerations in FL.
This paper presents a framework that addresses the challenges of generating a fair global model without validation data and creating a globally private differential model.
arXiv Detail & Related papers (2023-05-23T09:58:48Z)
- Can Public Large Language Models Help Private Cross-device Federated Learning? [58.05449579773249]
We study (differentially) private federated learning (FL) of language models.
Public data has been used to improve privacy-utility trade-offs for both large and small language models.
We propose a novel distribution matching algorithm with theoretical grounding to sample public data close to private data distribution.
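A minimal stand-in for distribution matching: score public samples under a diagonal Gaussian fit to (privatized, in practice) private-data statistics and keep the best matches. The function name and parameterization are hypothetical, not the paper's algorithm.

```python
import numpy as np

def match_public_to_private(public_X, priv_mean, priv_var, k):
    # Pick the k public samples with the highest likelihood under a diagonal
    # Gaussian fit to private-data statistics. A simplified illustration of
    # the idea of sampling public data close to the private distribution.
    ll = -0.5 * (((public_X - priv_mean) ** 2) / priv_var).sum(axis=1)
    return public_X[np.argsort(-ll)[:k]]

rng = np.random.default_rng(1)
private = rng.normal(loc=2.0, size=(50, 3))   # stays on-device in practice
public = rng.normal(loc=0.0, size=(500, 3))   # broad public corpus
sel = match_public_to_private(public, private.mean(0),
                              private.var(0) + 1e-6, k=50)
```

In a differentially private setting the mean and variance passed in would themselves be noised before leaving the private side.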
arXiv Detail & Related papers (2023-05-20T07:55:58Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z)
- FairFed: Enabling Group Fairness in Federated Learning [22.913999279079878]
Federated learning has been viewed as a promising solution for learning machine learning models among multiple parties.
We propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method.
Our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions.
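One way to picture fairness-aware aggregation: down-weight clients whose local fairness gap deviates most from the global gap. This is a FairFed-flavored heuristic with an invented exponential weighting, not the paper's exact rule.

```python
import numpy as np

def fairness_aware_aggregate(updates, local_gaps, global_gap, beta=1.0):
    # local_gaps: per-client fairness metric (e.g., demographic-parity gap).
    # Clients whose gap deviates more from the global gap get less weight.
    dev = np.abs(np.asarray(local_gaps, dtype=float) - global_gap)
    w = np.exp(-beta * dev)
    w = w / w.sum()
    agg = sum(wi * ui for wi, ui in zip(w, updates))
    return agg, w

updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
agg, w = fairness_aware_aggregate(updates, local_gaps=[0.05, 0.30, 0.05],
                                  global_gap=0.05)
```

With equal local gaps this reduces to plain federated averaging, which is the intended degenerate case.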
arXiv Detail & Related papers (2021-10-02T17:55:20Z)
- Enforcing fairness in private federated learning via the modified method of differential multipliers [1.3381749415517021]
Federated learning with differential privacy, or private federated learning, provides a strategy to train machine learning models while respecting users' privacy.
This paper introduces an algorithm to enforce group fairness in private federated learning, where users' data does not leave their devices.
arXiv Detail & Related papers (2021-09-17T15:28:47Z)
- Fairness-aware Agnostic Federated Learning [47.26747955026486]
We develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.
We use kernel reweighing functions to assign a reweighing value to each training sample in both the loss function and the fairness constraint.
The learned model can be directly applied to local sites, as it guarantees fairness on local data distributions.
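Kernel reweighing can be sketched as per-sample weights built from a linear combination of Gaussian kernels; the same weights then scale the training loss. The choice of centers and coefficients below is illustrative (in the paper the coefficients are optimized adversarially), not the exact parameterization.

```python
import numpy as np

def kernel_reweights(X, centers, alpha, bandwidth=1.0):
    # Per-sample reweighing value: a linear combination of Gaussian kernels
    # evaluated at each sample (illustrative form).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return K @ alpha

def reweighted_loss(per_sample_losses, weights):
    # The same weights would also enter the fairness constraint.
    return float((weights * per_sample_losses).sum() / weights.sum())

rng = np.random.default_rng(2)
X = rng.normal(size=(8, 2))
centers = X[:3]            # kernel centers (an arbitrary illustrative choice)
alpha = np.ones(3)         # coefficients; learned adversarially in the paper
w = kernel_reweights(X, centers, alpha)
losses = rng.random(8)
loss = reweighted_loss(losses, w)
```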
arXiv Detail & Related papers (2020-10-10T17:58:20Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.