Learning fair representation with a parametric integral probability
metric
- URL: http://arxiv.org/abs/2202.02943v1
- Date: Mon, 7 Feb 2022 05:02:23 GMT
- Title: Learning fair representation with a parametric integral probability
metric
- Authors: Dongha Kim, Kunwoong Kim, Insung Kong, Ilsang Ohn, and Yongdai Kim
- Abstract summary: We propose a new adversarial training scheme for learning fair representation (LFR).
In this paper, we derive theoretical relations between the fairness of a representation and the fairness of the prediction model built on top of it.
Our proposed LFR algorithm is computationally lighter and more stable, and the final prediction model is competitive with or superior to other LFR algorithms.
- Score: 2.544539499281093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As they have a vital effect on social decision-making, AI algorithms should
be not only accurate but also fair. Among various algorithms for fair AI,
learning fair representation (LFR), whose goal is to find a representation that is
fair with respect to sensitive variables such as gender and race, has received much
attention. For LFR, the adversarial training scheme is popularly employed, as in
generative adversarial network (GAN)-type algorithms. The choice of the
discriminator, however, is typically made heuristically and without justification. In this
paper, we propose a new adversarial training scheme for LFR in which the integral
probability metric (IPM) with a specific parametric family of discriminators is
used. The most notable result of the proposed LFR algorithm is its theoretical
guarantee on the fairness of the final prediction model, which has not been
established previously. That is, we derive theoretical relations between the fairness
of the representation and the fairness of the prediction model built on top of
the representation (i.e., using the representation as the input). Moreover,
numerical experiments show that our proposed LFR algorithm is
computationally lighter and more stable, and that the final prediction model is
competitive with or superior to other LFR algorithms that use more complex
discriminators.
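To make the adversarial scheme concrete, below is a minimal PyTorch sketch of LFR training with an IPM penalty whose discriminators are restricted to a simple parametric family (here, sigmoid-of-linear functions of the representation). The family choice, network sizes, trade-off weight, and variable names are illustrative assumptions, not taken from the paper's code.

```python
import torch
import torch.nn as nn

# Sketch: adversarial LFR where the adversary is restricted to the
# parametric family f(z) = sigmoid(w^T z + b). All shapes and the
# lambda trade-off below are assumptions for illustration only.

class Encoder(nn.Module):
    def __init__(self, d_in, d_rep):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(),
                                 nn.Linear(64, d_rep))
    def forward(self, x):
        return self.net(x)

def parametric_ipm(z0, z1, w, b):
    """IPM estimate between the two sensitive groups' representations,
    using a single discriminator f(z) = sigmoid(w^T z + b)."""
    f0 = torch.sigmoid(z0 @ w + b).mean()
    f1 = torch.sigmoid(z1 @ w + b).mean()
    return f1 - f0

d_in, d_rep, lam = 10, 8, 1.0
enc = Encoder(d_in, d_rep)
clf = nn.Linear(d_rep, 1)                   # prediction head on top of the representation
w = torch.zeros(d_rep, requires_grad=True)  # discriminator parameters
b = torch.zeros(1, requires_grad=True)

opt_main = torch.optim.Adam(list(enc.parameters()) + list(clf.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam([w, b], lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x, y, s):
    # 1) adversary step: maximize the IPM over the parametric family
    z = enc(x).detach()
    disc_loss = -parametric_ipm(z[s == 0], z[s == 1], w, b)
    opt_disc.zero_grad(); disc_loss.backward(); opt_disc.step()

    # 2) encoder + classifier step: minimize task loss + lambda * IPM
    z = enc(x)
    pred_loss = bce(clf(z).squeeze(-1), y)
    fair_loss = parametric_ipm(z[s == 0], z[s == 1], w.detach(), b.detach())
    loss = pred_loss + lam * fair_loss.abs()
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss.item()
```

The sketch only shows the alternating optimization pattern; the paper's contribution is the specific parametric discriminator family and the theoretical fairness guarantees it yields for prediction models trained on the resulting representation.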
Related papers
- Fair Representation Learning for Continuous Sensitive Attributes using Expectation of Integral Probability Metrics [4.010428370752397]
AI fairness, also known as algorithmic fairness, aims to ensure that algorithms operate without bias or discrimination towards any individual or group. Among various AI algorithms, the Fair Representation Learning (FRL) approach has gained significant interest in recent years. We propose a new FRL algorithm called Fair Representation using EIPM with MMD (FREM).
arXiv Detail & Related papers (2025-05-09T21:08:52Z) - Targeted Learning for Data Fairness [52.59573714151884]
We expand fairness inference by evaluating fairness in the data generating process itself.
We derive estimators for demographic parity, equal opportunity, and conditional mutual information (a simple empirical estimator of the demographic parity gap is sketched after this list).
To validate our approach, we perform several simulations and apply our estimators to real data.
arXiv Detail & Related papers (2025-02-06T18:51:28Z) - Loss Balancing for Fair Supervised Learning [20.13250413610897]
Supervised learning models have been used in various domains such as lending, college admission, face recognition, natural language processing, etc.
Various fairness notions have been proposed to address unfair predictors; this work focuses on the Equalized Loss (EL) notion during the learning process.
arXiv Detail & Related papers (2023-11-07T04:36:13Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with promising performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation
Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR)
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established research area in machine learning (ML).
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z) - Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated
Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and biasness of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z) - Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - The Sharpe predictor for fairness in machine learning [0.0]
In machine learning applications, unfair predictions may discriminate against a minority group.
Most existing approaches for fair machine learning (FML) treat fairness as a constraint or a penalization term in the optimization of a ML model.
We introduce a new paradigm for FML based on Stochastic Multi-Objective Optimization (SMOO), where accuracy and fairness metrics stand as conflicting objectives to be optimized simultaneously.
The Sharpe predictor for FML provides the highest prediction return (accuracy) per unit of prediction risk (unfairness).
arXiv Detail & Related papers (2021-08-13T22:22:34Z) - Fair Normalizing Flows [10.484851004093919]
We present Fair Normalizing Flows (FNF), a new approach offering more rigorous fairness guarantees for learned representations.
The main advantage of FNF is that its exact likelihood computation allows us to obtain guarantees on the maximum unfairness of any potentially adversarial downstream predictor.
We experimentally demonstrate the effectiveness of FNF in enforcing various group fairness notions, as well as other attractive properties such as interpretability and transfer learning.
arXiv Detail & Related papers (2021-06-10T17:35:59Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - Metrics and methods for a systematic comparison of fairness-aware
machine learning algorithms [0.0]
This study is the most comprehensive of its kind.
It considers fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
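For the Targeted Learning for Data Fairness entry above, the snippet below shows a simple plug-in estimate of the demographic parity gap, one of the fairness quantities being estimated. The paper itself derives targeted (and more sophisticated) estimators, so this is only a hedged illustration with made-up variable names.

```python
import numpy as np

def demographic_parity_gap(y_hat, s):
    """Plug-in estimate of the demographic parity gap
    |P(Y_hat = 1 | S = 1) - P(Y_hat = 1 | S = 0)|
    for binary predictions y_hat and a binary sensitive attribute s."""
    y_hat, s = np.asarray(y_hat), np.asarray(s)
    rate_s1 = y_hat[s == 1].mean()  # positive-prediction rate in group S = 1
    rate_s0 = y_hat[s == 0].mean()  # positive-prediction rate in group S = 0
    return abs(rate_s1 - rate_s0)

# Toy usage with synthetic predictions and group labels.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=1000)
y_hat = rng.binomial(1, np.where(s == 1, 0.6, 0.5))
print(demographic_parity_gap(y_hat, s))  # roughly 0.1 for these group rates
```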
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.