Interpretable Distribution-Invariant Fairness Measures for Continuous
Scores
- URL: http://arxiv.org/abs/2308.11375v1
- Date: Tue, 22 Aug 2023 12:01:49 GMT
- Title: Interpretable Distribution-Invariant Fairness Measures for Continuous
Scores
- Authors: Ann-Kristin Becker, Oana Dumitrasc, Klaus Broelemann
- Abstract summary: We propose a distributionally invariant version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance.
Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities.
We show that the proposed distributionally invariant fairness measures outperform ROC-based measures: they are more explicit and can quantify significant biases that ROC-based measures miss.
- Score: 4.711430413139392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Measures of algorithmic fairness are usually discussed in the context of
binary decisions. We extend the approach to continuous scores. So far,
ROC-based measures have mainly been suggested for this purpose. Other existing
methods depend heavily on the distribution of scores, are unsuitable for
ranking tasks, or their effect sizes are not interpretable. Here, we propose a
distributionally invariant version of fairness measures for continuous scores
with a reasonable interpretation based on the Wasserstein distance. Our
measures are easily computable and well suited for quantifying and interpreting
the strength of group disparities as well as for comparing biases across
different models, datasets, or time points. We derive a link between the
different families of existing fairness measures for scores and show that the
proposed distributionally invariant fairness measures outperform ROC-based
fairness measures because they are more explicit and can quantify significant
biases that ROC-based fairness measures miss. Finally, we demonstrate their
effectiveness through experiments on the most commonly used fairness benchmark
datasets.
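The abstract does not spell out the construction, so the following is a minimal sketch of one plausible reading, not necessarily the paper's exact method: scores are first rank-transformed with the pooled empirical CDF (which removes dependence on the score distribution), and group disparity is then the 1-Wasserstein distance between the transformed group distributions. Function and variable names are illustrative, not the paper's API.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def distribution_invariant_disparity(scores, group):
    """Sketch of a distribution-invariant group disparity measure.

    Scores are rank-transformed with the pooled empirical CDF, which
    removes dependence on the score distribution; disparity is then the
    1-Wasserstein distance between the transformed group distributions,
    interpretable as the average shift needed to align the two groups.
    """
    scores = np.asarray(scores, dtype=float)
    group = np.asarray(group)
    # Pooled empirical CDF: maps each score to its quantile in (0, 1].
    ecdf = np.searchsorted(np.sort(scores), scores, side="right") / len(scores)
    return wasserstein_distance(ecdf[group == 0], ecdf[group == 1])

# Example: group 1 scores are systematically shifted upwards.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.0, 1, 1000), rng.normal(0.5, 1, 1000)])
group = np.repeat([0, 1], 1000)
print(distribution_invariant_disparity(scores, group))  # larger = stronger disparity
```

Because the transformed scores live on [0, 1], the resulting value is bounded by 1 and comparable across models, datasets, or time points, matching the comparability claim in the abstract.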
Related papers
- Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly [2.002741592555996]
Existing techniques for assessing the discrimination level of machine learning models include commonly used group and individual fairness measures.
We propose a "harmonic fairness measure via manifold (HFM)" based on distances between sets.
Empirical results indicate that the proposed fairness measure HFM is valid and that the proposed ApproxDist is effective and efficient.
arXiv Detail & Related papers (2024-05-15T11:07:40Z) - Dr. FERMI: A Stochastic Distributionally Robust Fair Empirical Risk
Minimization Framework [12.734559823650887]
In the presence of distribution shifts, fair machine learning models may behave unfairly on test data.
Existing algorithms require full access to the data and cannot be used with small batches.
This paper proposes the first distributionally robust fairness framework with convergence guarantees that do not require knowledge of the causal graph.
arXiv Detail & Related papers (2023-09-20T23:25:28Z) - Counterpart Fairness -- Addressing Systematic between-group Differences
in Fairness Evaluation [18.372355677006965]
We develop a propensity-score-based method for identifying counterparts, which prevents fairness evaluation from comparing "oranges" with "apples".
We propose a counterpart-based statistical fairness index, termed Counterpart-Fairness (CFair), to assess fairness of machine learning models.
arXiv Detail & Related papers (2023-05-29T15:41:12Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Learning Informative Representation for Fairness-aware Multivariate
Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z) - Disentanglement of Correlated Factors via Hausdorff Factorized Support [53.23740352226391]
We propose a relaxed disentanglement criterion - the Hausdorff Factorized Support (HFS) criterion - that encourages a factorized support, rather than a factorial distribution.
We show that the use of HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks.
arXiv Detail & Related papers (2022-10-13T20:46:42Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Algorithmic Fairness Verification with Graphical Models [24.8005399877574]
- Algorithmic Fairness Verification with Graphical Models [24.8005399877574]
We propose an efficient fairness verifier, called FVGM, that encodes correlations among features as a Bayesian network.
We show that FVGM leads to an accurate and scalable assessment for more diverse families of fairness-enhancing algorithms.
arXiv Detail & Related papers (2021-09-20T12:05:14Z) - Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z) - Fast Fair Regression via Efficient Approximations of Mutual Information [0.0]
- Fast Fair Regression via Efficient Approximations of Mutual Information [0.0]
This paper introduces fast approximations of the independence, separation and sufficiency group fairness criteria for regression models.
It uses such approximations as regularisers to enforce fairness within a regularised risk minimisation framework.
Experiments on real-world datasets indicate that, in spite of its superior computational efficiency, our algorithm still displays state-of-the-art accuracy/fairness tradeoffs.
arXiv Detail & Related papers (2020-02-14T08:50:51Z)