Fair Classification via Domain Adaptation: A Dual Adversarial Learning Approach
- URL: http://arxiv.org/abs/2206.03656v2
- Date: Tue, 30 May 2023 20:07:38 GMT
- Title: Fair Classification via Domain Adaptation: A Dual Adversarial Learning Approach
- Authors: Yueqing Liang, Canyu Chen, Tian Tian, Kai Shu
- Abstract summary: We study a novel problem of exploring domain adaptation for fair classification.
We propose a new framework that can learn to adapt the sensitive attributes from a source domain for fair classification in the target domain.
- Score: 14.344142985726853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern machine learning (ML) models are becoming increasingly popular and are
widely used in decision-making systems. However, studies have revealed critical
issues of ML discrimination and unfairness, which hinder their adoption in
high-stakes applications. Recent research on fair classifiers has devoted
significant attention to developing effective algorithms that achieve both
fairness and good classification performance. Despite the great success of these
fairness-aware machine learning models, most existing models require sensitive
attributes to pre-process the data, regularize model learning, or post-process
the predictions to make them fair. However, sensitive attributes are often
incomplete or even unavailable due to privacy, legal, or regulatory
restrictions. Although we may lack the sensitive attributes needed to train a
fair model in the target domain, there might exist a similar domain that has
them. It is therefore important to exploit auxiliary information from such a
similar domain to improve fair classification in the target domain. In this
paper, we study the novel problem of exploring domain adaptation for fair
classification. We propose a new framework that learns to adapt the sensitive
attributes from a source domain for fair classification in the target domain.
Extensive experiments on real-world datasets illustrate the effectiveness of
the proposed model for fair classification, even when no sensitive attributes
are available in the target domain.
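The abstract does not detail the framework itself, but fair classification methods of this kind are typically evaluated with group fairness metrics computed against held-out sensitive attributes. As an illustration only (the function name and data are ours, not from the paper), here is a minimal sketch of the demographic parity gap, one of the most common such metrics:

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two
    groups defined by a binary sensitive attribute. Zero means the
    classifier satisfies demographic parity on this data."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()
    rate_b = y_pred[sensitive == 1].mean()
    return abs(rate_a - rate_b)

# A parity-fair classifier predicts positives at equal rates in both groups.
y_pred    = [1, 0, 1, 0, 1, 0, 1, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, sensitive))  # → 0.0
```

Other common criteria (equalized odds, equal opportunity) follow the same pattern but condition the rates on the true label as well.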
Related papers
- FEED: Fairness-Enhanced Meta-Learning for Domain Generalization [13.757379847454372]
Generalizing to out-of-distribution data while remaining aware of model fairness is a significant and challenging problem in meta-learning.
This paper introduces an approach to fairness-aware meta-learning that significantly enhances domain generalization capabilities.
arXiv Detail & Related papers (2024-11-02T17:34:33Z) - Towards Counterfactual Fairness-aware Domain Generalization in Changing Environments [30.37748667235682]
We introduce an innovative framework called Counterfactual Fairness-Aware Domain Generalization with Sequential Autoencoder (CDSAE).
This approach effectively separates environmental information and sensitive attributes from the embedded representation of classification features.
By incorporating fairness regularization, we exclusively employ semantic information for classification purposes.
arXiv Detail & Related papers (2023-09-22T17:08:20Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without a given causal model by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for the face recognition task, in which the source and target domains do not share any classes.
Our method effectively learns discriminative target features by aligning the feature domains globally while, at the same time, distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z) - Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on an adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information about the sensitive attribute.
We also use a bias-free model to learn debiased fair representations, applying adversarial learning to remove bias information from them.
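The adversarial bias-removal idea recurring in these papers can be illustrated with a toy gradient-reversal sketch: an adversary learns to predict the sensitive attribute from a learned representation, while the encoder is updated with the *reversed* gradient so the representation is scrubbed of that information. This is a minimal NumPy sketch of the general technique, not any of these papers' actual architectures; the linear encoder, learning rates, and toy data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature 1 leaks a binary sensitive attribute `a`.
n = 2000
a = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 2))
x[:, 1] += 2.0 * a

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w = rng.normal(size=2)   # linear "encoder": representation z = x @ w
u = 0.1                  # adversary: predicts a via sigmoid(u * z)
lr = 0.05

for _ in range(3000):
    z = x @ w
    p = sigmoid(u * z)
    g = p - a                       # gradient of BCE w.r.t. the logit
    u -= lr * np.mean(g * z)        # adversary descends its loss ...
    w += lr * (x.T @ g) * u / n     # ... encoder ASCENDS it (reversed
                                    # gradient), erasing `a` from z

# After training, the adversary should do barely better than chance.
z = x @ w
acc = np.mean((sigmoid(u * z) > 0.5) == a)
```

Without the sign flip on the encoder update, the same loop would instead *cooperate* with the adversary and keep the sensitive information in `z`; the reversal is the entire trick.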
arXiv Detail & Related papers (2022-04-01T15:57:47Z) - Learning Fair Models without Sensitive Attributes: A Generative Approach [33.196044483534784]
We study a novel problem of learning fair models without sensitive attributes by exploring relevant features.
We propose a probabilistic generative framework to effectively estimate the sensitive attribute from the training data.
Experimental results on real-world datasets show the effectiveness of our framework.
arXiv Detail & Related papers (2022-03-30T15:54:30Z) - You Can Still Achieve Fairness Without Sensitive Attributes: Exploring
Biases in Non-Sensitive Features [29.94644351343916]
We propose a novel framework that simultaneously uses these related non-sensitive features for accurate prediction and regularizes the model to be fair.
Experimental results on real-world datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2021-04-29T17:52:11Z) - On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation from the standpoint of practical deployment: only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z) - Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation [61.317911756566126]
We propose a fair knowledge transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by over 20% on two benchmarks.
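The cross-domain mixup generation mentioned above can be sketched in a hedged way: the core idea of mixup-style augmentation is to convexly combine a minority-class source sample with a target-domain sample. The function below is our illustrative assumption of that idea, not the authors' implementation; names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_domain_mixup(x_src_minority, x_tgt, alpha=0.2):
    """Augment an under-represented source set by convexly combining
    each of its samples with a randomly drawn target-domain sample,
    with Beta(alpha, alpha) mixing weights as in standard mixup."""
    idx = rng.integers(0, len(x_tgt), size=len(x_src_minority))
    lam = rng.beta(alpha, alpha, size=(len(x_src_minority), 1))
    return lam * x_src_minority + (1.0 - lam) * x_tgt[idx]

x_src = rng.normal(size=(8, 4))     # under-represented source samples
x_tgt = rng.normal(size=(100, 4))   # unlabeled target-domain samples
x_aug = cross_domain_mixup(x_src, x_tgt)
print(x_aug.shape)  # → (8, 4)
```

The augmented points interpolate between the two domains, which is what lets the minority source set absorb target information.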
arXiv Detail & Related papers (2020-10-23T06:29:09Z) - Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z) - Fairness-Aware Learning with Prejudice Free Representations [2.398608007786179]
We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps to collect discrimination-free features that improve model performance.
arXiv Detail & Related papers (2020-02-26T10:06:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.