A Ranking Approach to Fair Classification
- URL: http://arxiv.org/abs/2102.04565v1
- Date: Mon, 8 Feb 2021 22:51:12 GMT
- Title: A Ranking Approach to Fair Classification
- Authors: Jakob Schoeffer, Niklas Kuehl, Isabel Valera
- Abstract summary: Algorithmic decision systems are increasingly used in areas such as hiring, school admission, or loan approval.
In many scenarios, ground-truth labels are unavailable, and instead we only have access to imperfect labels resulting from human-made decisions.
We propose a new fair ranking-based decision system as an alternative to traditional classification algorithms.
- Score: 11.35838396538348
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Algorithmic decision systems are increasingly used in areas such as hiring,
school admission, or loan approval. Typically, these systems rely on labeled
data for training a classification model. However, in many scenarios,
ground-truth labels are unavailable, and instead we only have access to
imperfect labels resulting from (potentially biased) human-made decisions.
Despite being imperfect, historical decisions often contain some useful
information on the unobserved true labels. In this paper, we focus on scenarios
where only imperfect labels are available and propose a new fair ranking-based
decision system, as an alternative to traditional classification algorithms.
Our approach is both intuitive and easy to implement, and thus particularly
suitable for adoption in real-world settings. More specifically, we introduce a
distance-based decision criterion, which incorporates useful information from
historical decisions and accounts for unwanted correlation between protected
and legitimate features. Through extensive experiments on synthetic and
real-world data, we show that our method is fair, as it a) assigns the
desirable outcome to the most qualified individuals, and b) removes the effect
of stereotypes in decision-making, thereby outperforming traditional
classification algorithms. Additionally, we show theoretically that our method
is consistent with a prominent notion of individual fairness, which
states that "similar individuals should be treated similarly."
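To make the distance-based criterion concrete, here is a minimal sketch in Python (using NumPy and scikit-learn) of what a ranking-based decision rule in this spirit could look like; it is an illustration, not the authors' exact formulation. Two choices are assumptions made here for concreteness: the unwanted correlation is removed by linearly residualizing the legitimate features on the protected attribute, and candidates are ranked by Euclidean distance to the centroid of historically approved cases. All names and data are hypothetical.
```python
# Minimal sketch of a distance-based, ranking-style decision rule.
# Assumptions (not from the paper): linear residualization removes the
# protected attribute's correlation with the legitimate features, and
# candidates are ranked by distance to the centroid of approved cases.
import numpy as np
from sklearn.linear_model import LinearRegression

def rank_candidates(X_hist, y_hist, s_hist, X_new, s_new, k=3):
    # Decorrelate: regress the legitimate features on the protected
    # attribute using historical data, then keep only the residuals.
    reg = LinearRegression().fit(s_hist.reshape(-1, 1), X_hist)
    X_hist_r = X_hist - reg.predict(s_hist.reshape(-1, 1))
    X_new_r = X_new - reg.predict(s_new.reshape(-1, 1))

    # Historical decisions carry signal: use the centroid of previously
    # approved cases as a (noisy) proxy for the qualification target.
    centroid = X_hist_r[y_hist == 1].mean(axis=0)

    # Rank new candidates by distance to that centroid; closer = higher rank.
    dists = np.linalg.norm(X_new_r - centroid, axis=1)
    return np.argsort(dists)[:k]  # indices of the k accepted candidates

# Toy usage on synthetic data (all values hypothetical).
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 3))              # legitimate features
s_hist = rng.integers(0, 2, 200).astype(float)  # protected attribute
y_hist = (X_hist[:, 0] > 0).astype(int)         # imperfect historical labels
X_new = rng.normal(size=(10, 3))
s_new = rng.integers(0, 2, 10).astype(float)
print(rank_candidates(X_hist, y_hist, s_hist, X_new, s_new))
```
Note the design difference from a classifier: acceptance follows from a candidate's rank relative to others rather than from thresholding a score learned on (potentially biased) labels, and candidates with similar decorrelated features necessarily receive similar ranks.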
Related papers
- Learning with Complementary Labels Revisited: The Selected-Completely-at-Random Setting Is More Practical [66.57396042747706]
Complementary-label learning is a weakly supervised learning problem.
We propose a consistent approach that does not rely on the uniform distribution assumption.
We find that complementary-label learning can be expressed as a set of negative-unlabeled binary classification problems.
arXiv Detail & Related papers (2023-11-27T02:59:17Z)
- Identifying Reasons for Bias: An Argumentation-Based Approach [2.9465623430708905]
We propose a novel model-agnostic argumentation-based method to determine why an individual is classified differently in comparison to similar individuals.
We evaluate our method on two datasets commonly used in the fairness literature and illustrate its effectiveness in the identification of bias.
arXiv Detail & Related papers (2023-10-25T09:47:15Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Partial-Label Regression [54.74984751371617]
Partial-label learning is a weakly supervised learning setting that allows each training example to be annotated with a set of candidate labels.
Previous studies on partial-label learning only focused on the classification setting where candidate labels are all discrete.
In this paper, we provide the first attempt to investigate partial-label regression, where each training example is annotated with a set of real-valued candidate labels.
arXiv Detail & Related papers (2023-06-15T09:02:24Z)
- Don't Throw it Away! The Utility of Unlabeled Data in Fair Decision Making [14.905698014932488]
We propose a novel method based on a variational autoencoder for practical fair decision-making.
Our method learns an unbiased data representation leveraging both labeled and unlabeled data.
Our method converges to the optimal (fair) policy according to the ground truth with low variance.
arXiv Detail & Related papers (2022-05-10T10:33:11Z)
- Learning with Proper Partial Labels [87.65718705642819]
Partial-label learning is a kind of weakly-supervised learning with inexact labels.
We show that this proper partial-label learning framework includes many previous partial-label learning settings.
We then derive a unified unbiased estimator of the classification risk.
arXiv Detail & Related papers (2021-12-23T01:37:03Z)
- Estimation of Fair Ranking Metrics with Incomplete Judgments [70.37717864975387]
We propose a sampling strategy and estimation technique for four fair ranking metrics.
We formulate a robust and unbiased estimator that can operate even with a very limited number of labeled items.
arXiv Detail & Related papers (2021-08-11T10:57:00Z)
- Beyond traditional assumptions in fair machine learning [5.029280887073969]
This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fairness in consequential decision making.
We show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited.
We overcome the assumption that sensitive data is readily available in practice.
arXiv Detail & Related papers (2021-01-29T09:02:15Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study, the most comprehensive of its kind, considers the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing fairness and utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.