Fair Supervised Learning with A Simple Random Sampler of Sensitive
Attributes
- URL: http://arxiv.org/abs/2311.05866v2
- Date: Sat, 9 Mar 2024 06:31:46 GMT
- Title: Fair Supervised Learning with A Simple Random Sampler of Sensitive
Attributes
- Authors: Jinwon Sohn, Qifan Song, Guang Lin
- Abstract summary: This work proposes fairness penalties learned by neural networks with a simple random sampler of sensitive attributes for non-discriminatory supervised learning.
We build a computationally efficient group-level in-processing fairness-aware training framework.
Empirical evidence shows that our framework enjoys better utility and fairness measures on popular benchmark data sets than competing methods.
- Score: 13.988497790151651
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As data-driven decision processes become dominant in industrial
applications, fairness-aware machine learning has attracted great attention in
various areas. This work proposes fairness penalties learned by neural networks
with a simple random sampler of sensitive attributes for non-discriminatory
supervised learning. In contrast to many existing works that critically rely on
the discreteness of sensitive attributes and response variables, the proposed
penalty can handle sensitive attributes in versatile formats, so it
is more broadly applicable in practice than many existing algorithms. This
penalty enables us to build a computationally efficient group-level
in-processing fairness-aware training framework. Empirical evidence shows that
our framework enjoys better utility and fairness measures on popular benchmark
data sets than competing methods. We also theoretically characterize estimation
errors and loss of utility of the proposed neural-penalized risk minimization
problem.
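The abstract leaves the exact form of the learned penalty unspecified. Below is
a minimal PyTorch sketch of one natural discriminator-style reading: a penalty
network is trained to distinguish predictions paired with their true sensitive
attributes from predictions paired with attributes drawn by a simple random
sampler (here, a batch permutation), and the predictor is penalized when the
two pairings are distinguishable. The architectures, the trade-off weight lam,
and the adversarial formulation are illustrative assumptions, not the authors'
algorithm.

```python
# Minimal sketch (illustrative, not the authors' exact algorithm) of an
# in-processing fairness penalty driven by a simple random sampler of the
# sensitive attribute A.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 512, 10
X = torch.randn(n, d)
A = torch.randn(n, 1)                    # sensitive attribute; may be continuous
Y = ((X[:, :1] + 0.5 * A) > 0).float()   # synthetic outcome correlated with A

predictor = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
penalty_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_f = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(penalty_net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                # fairness-utility weight (hypothetical)

for step in range(200):
    yhat = predictor(X)
    a_tilde = A[torch.randperm(n)]       # random sampler: break the (yhat, A) pairing

    # 1) Train the penalty network to separate true pairs (yhat, A)
    #    from independently re-paired (yhat, a_tilde).
    real = penalty_net(torch.cat([yhat.detach(), A], dim=1))
    fake = penalty_net(torch.cat([yhat.detach(), a_tilde], dim=1))
    loss_g = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # 2) Train the predictor on utility loss plus the learned penalty;
    #    the penalty is small when yhat carries no information about A.
    real = penalty_net(torch.cat([yhat, A], dim=1))
    fake = penalty_net(torch.cat([yhat, a_tilde], dim=1))
    fair_pen = bce(real, torch.zeros_like(real)) + bce(fake, torch.ones_like(fake))
    loss_f = bce(yhat, Y) + lam * fair_pen
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
```

Because the sampler only re-pairs observed attribute values, nothing in this
construction requires A to be discrete, which mirrors the abstract's claim
about versatile attribute formats.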
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback, where only positive examples are observed during training, is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
- Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning
This paper studies learning fair encoders in a self-supervised learning setting.
All data are unlabeled and only a small portion of them are annotated with sensitive attributes.
arXiv Detail & Related papers (2024-06-09T08:11:12Z)
- A Sequentially Fair Mechanism for Multiple Sensitive Attributes [0.46040036610482665]
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score.
We propose a sequential framework that progressively achieves fairness across a set of sensitive features.
Our approach extends seamlessly to approximate fairness, providing a framework that accommodates the trade-off between risk and unfairness.
arXiv Detail & Related papers (2023-09-12T22:31:57Z)
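As a toy illustration of the sequential idea in the summary above, assuming
the crude simplification that fairness is measured by group-conditional mean
scores (the paper's mechanism acts on whole score distributions), the snippet
below neutralizes two synthetic sensitive attributes one after another. The
data and the neutralize helper are purely hypothetical.

```python
# Toy illustration (not the paper's mechanism): progressively neutralize each
# sensitive attribute by removing its group-conditional mean from the score.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)
race = rng.integers(0, 3, n)
score = rng.normal(0, 1, n) + 0.8 * gender - 0.5 * race  # biased scores

def neutralize(score, attr):
    """Equalize E[score | attr] across groups, keeping the overall level."""
    out = score.copy()
    for g in np.unique(attr):
        out[attr == g] -= out[attr == g].mean()
    return out + score.mean()

for attr in (gender, race):              # the sequence of sensitive features
    score = neutralize(score, attr)

for name, attr in [("gender", gender), ("race", race)]:
    print(name, np.round([score[attr == g].mean() for g in np.unique(attr)], 3))
# Group means are now (approximately) equal; with strongly correlated
# attributes a single pass may not fully equalize earlier ones.
```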
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample; accordingly, we dynamically reweight the objective loss terms so that the network focuses more on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
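A schematic sketch of the reweighting idea above, under stated assumptions:
the network outputs non-negative evidence, alpha = evidence + 1 parameterizes
a Dirichlet, and the diagonal of the Dirichlet Fisher information matrix
reweights a Sensoy-style evidential MSE loss. The paper's exact objective
differs; the loss form and weighting here are illustrative.

```python
# Schematic sketch (the paper's exact objective differs): reweight an
# evidential loss by the diagonal of the Dirichlet Fisher information matrix.
import torch
import torch.nn as nn

torch.manual_seed(0)
num_classes = 5
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                    nn.Linear(64, num_classes), nn.Softplus())  # evidence >= 0

x = torch.randn(32, 20)
y = torch.randint(0, num_classes, (32,))
y_onehot = nn.functional.one_hot(y, num_classes).float()

evidence = net(x)
alpha = evidence + 1.0                   # Dirichlet concentration
alpha0 = alpha.sum(dim=1, keepdim=True)
p = alpha / alpha0                       # Dirichlet mean

# Diagonal of the Dirichlet FIM, trigamma(alpha_j) - trigamma(alpha_0);
# it is larger for low-evidence (more uncertain) classes.
fim_diag = torch.polygamma(1, alpha) - torch.polygamma(1, alpha0)
weights = fim_diag.detach()              # weight only, no gradient through it

# Sensoy-style evidential MSE, reweighted per class by the FIM diagonal so
# the network attends more to uncertain classes.
err = (y_onehot - p) ** 2
var = p * (1 - p) / (alpha0 + 1)
loss = (weights * (err + var)).sum(dim=1).mean()
loss.backward()
```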
- Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting [41.96570350954332]
We propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points.
We find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric.
arXiv Detail & Related papers (2022-12-13T18:36:19Z)
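A minimal numpy sketch of the infinitesimal-jackknife idea behind the approach
above: approximate, without refitting, how removing each training point would
move a fairness metric (here a demographic-parity gap for a logistic model),
then flag the points whose removal is predicted to shrink the gap most. The
model, metric, and selection rule are stand-ins, not the paper's procedure.

```python
# Minimal sketch (illustrative stand-ins throughout) of influence-based data
# dropping: estimate, without refitting, how removing each point would move a
# demographic-parity gap for a logistic model.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, n)                 # binary sensitive attribute
y = (X[:, 0] + 0.8 * a + rng.normal(0, 1, n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)                           # fit by Newton's method
for _ in range(25):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n
    H = (X.T * (p * (1 - p))) @ X / n + 1e-6 * np.eye(d)
    w -= np.linalg.solve(H, grad)
p = sigmoid(X @ w)

def gap_grad(w):                          # gradient of the mean-score gap
    q = sigmoid(X @ w)
    s = q * (1 - q)
    g1 = (X[a == 1] * s[a == 1, None]).mean(axis=0)
    g0 = (X[a == 0] * s[a == 0, None]).mean(axis=0)
    return g1 - g0

sign = np.sign(p[a == 1].mean() - p[a == 0].mean())
grad_F = sign * gap_grad(w)               # gradient of |gap|

# Infinitesimal jackknife: removing point i shifts w by ~ H^{-1} grad_i / n,
# so influence_i below is proportional to the predicted change in |gap|.
per_point_grads = X * (p - y)[:, None]
influence = per_point_grads @ np.linalg.solve(H, grad_F)

drop = np.argsort(influence)[:20]         # removals predicted to shrink the gap most
print("candidate points to drop:", drop)
```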
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
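FairCOCCO's normalized cross-covariance construction is more involved; as a
hedged stand-in, the sketch below computes the closely related Hilbert-Schmidt
independence criterion (HSIC) between predictions and a multivariate,
continuous sensitive attribute, which is zero in population exactly under
independence. The kernels, bandwidth, and data are illustrative.

```python
# Hedged stand-in for FairCOCCO: the related HSIC dependence measure between
# predictions and a (possibly multivariate, continuous) sensitive attribute.
import numpy as np

def gaussian_gram(Z, sigma=1.0):
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def hsic(yhat, A, sigma=1.0):
    n = len(yhat)
    K, L = gaussian_gram(yhat, sigma), gaussian_gram(A, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
A = rng.normal(size=(300, 2))             # two continuous sensitive attributes
fair = rng.normal(size=(300, 1))          # predictions independent of A
unfair = A[:, :1] + 0.1 * rng.normal(size=(300, 1))

print(hsic(fair, A))    # near zero: little dependence
print(hsic(unfair, A))  # larger: predictions track the sensitive attributes
```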
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
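The paper above proposes its own training algorithm; for background, here is a
sketch of classical randomized response, a standard mechanism that already
yields epsilon label differential privacy for k classes: keep the true label
with probability e^eps / (e^eps + k - 1), otherwise output one of the other
labels uniformly at random.

```python
# Classical randomized response for k-class labels: a standard epsilon
# label-DP mechanism (background; not the paper's algorithm).
import numpy as np

def randomized_response(y, k, eps, rng):
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    flip = rng.random(y.shape[0]) >= p_keep
    offset = rng.integers(1, k, y.shape[0])  # uniform over the other k-1 labels
    y_priv = y.copy()
    y_priv[flip] = (y[flip] + offset[flip]) % k
    return y_priv

rng = np.random.default_rng(0)
y = rng.integers(0, 10, 1000)
y_train = randomized_response(y, k=10, eps=2.0, rng=rng)
print((y_train == y).mean())  # about e^2 / (e^2 + 9) ~ 0.45 of labels survive
```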
- Fair Meta-Learning For Few-Shot Classification [7.672769260569742]
A machine learning algorithm trained on biased data tends to make unfair predictions.
We propose a novel fair fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-training.
We empirically demonstrate that our proposed approach efficiently mitigates biases in model output and generalizes both accuracy and fairness to unseen tasks.
arXiv Detail & Related papers (2020-09-23T22:33:47Z)