Gradual (In)Compatibility of Fairness Criteria
- URL: http://arxiv.org/abs/2109.04399v1
- Date: Thu, 9 Sep 2021 16:37:30 GMT
- Title: Gradual (In)Compatibility of Fairness Criteria
- Authors: Corinna Hertweck and Tim Räz
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Impossibility results show that important fairness measures (independence,
separation, sufficiency) cannot be satisfied at the same time under reasonable
assumptions. This paper explores whether we can satisfy and/or improve these
fairness measures simultaneously to a certain degree. We introduce
information-theoretic formulations of the fairness measures and define degrees
of fairness based on these formulations. The information-theoretic formulations
suggest unexplored theoretical relations between the three fairness measures.
In the experimental part, we use the information-theoretic expressions as
regularizers to obtain fairness-regularized predictors for three standard
datasets. Our experiments show that a) fairness regularization directly
increases fairness measures, in line with existing work, and b) some fairness
regularizations indirectly increase other fairness measures, as suggested by
our theoretical findings. This establishes that it is possible to increase the
degree to which some fairness measures are satisfied at the same time -- some
fairness measures are gradually compatible.
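The abstract leaves the formulations implicit, but the standard information-theoretic readings of the three criteria are independence (R ⊥ A), separation (R ⊥ A | Y), and sufficiency (Y ⊥ A | R), where R is the prediction, A the protected attribute, and Y the target. The following Python sketch, which is not the authors' code, shows how such degrees of fairness could be estimated for discrete data; the names y_pred, y_true, and group are illustrative.

```python
# Minimal sketch (not the authors' implementation) of degrees of
# fairness as mutual-information quantities over discrete samples.
# R = prediction, A = protected attribute, Y = target.
import numpy as np
from sklearn.metrics import mutual_info_score

def conditional_mi(x, y, z):
    """Estimate I(X; Y | Z) = sum_z p(z) * I(X; Y | Z=z) for discrete data."""
    x, y, z = map(np.asarray, (x, y, z))
    return sum(
        (z == v).mean() * mutual_info_score(x[z == v], y[z == v])
        for v in np.unique(z)
    )

def fairness_degrees(y_pred, y_true, group):
    """Each value is zero iff the criterion holds exactly;
    smaller values mean a higher degree of fairness."""
    return {
        "independence": mutual_info_score(y_pred, group),     # I(R; A)
        "separation": conditional_mi(y_pred, group, y_true),  # I(R; A | Y)
        "sufficiency": conditional_mi(y_true, group, y_pred), # I(Y; A | R)
    }
```

Quantities of this kind, with a suitable differentiable estimator, can be added to a training loss as regularizers, which is how the paper obtains its fairness-regularized predictors.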
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Goodhart's Law Applies to NLP's Explanation Benchmarks [57.26445915212884]
We critically examine two sets of metrics: the ERASER metrics (comprehensiveness and sufficiency) and the EVAL-X metrics.
We show that we can inflate a model's comprehensiveness and sufficiency scores dramatically without altering its predictions or explanations on in-distribution test inputs.
Our results raise doubts about the ability of current metrics to guide explainability research, underscoring the need for a broader reassessment of what precisely these metrics are intended to capture.
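For reference, the two ERASER metrics under discussion are commonly defined as prediction-probability drops when rationale tokens are removed (comprehensiveness) or when only they are kept (sufficiency); note that this sufficiency is an explainability metric, unrelated to the fairness criterion above. A minimal sketch, with illustrative names (predict_proba, tokens, rationale_idx), not the benchmark's API:

```python
# Hedged sketch of ERASER-style faithfulness metrics; predict_proba is
# assumed to map a token list to a probability vector over labels.
def comprehensiveness(predict_proba, tokens, rationale_idx, label):
    """Probability drop when rationale tokens are removed: high values
    suggest the rationale was needed for the prediction."""
    reduced = [t for i, t in enumerate(tokens) if i not in rationale_idx]
    return predict_proba(tokens)[label] - predict_proba(reduced)[label]

def sufficiency(predict_proba, tokens, rationale_idx, label):
    """Probability drop when only rationale tokens are kept: low values
    suggest the rationale alone supports the prediction."""
    kept = [t for i, t in enumerate(tokens) if i in rationale_idx]
    return predict_proba(tokens)[label] - predict_proba(kept)[label]
```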
arXiv Detail & Related papers (2023-08-28T03:03:03Z)
- Standardized Interpretable Fairness Measures for Continuous Risk Scores [4.192037827105842]
We propose a standardized version of fairness measures for continuous scores with a reasonable interpretation based on the Wasserstein distance.
Our measures are easily computable and well suited for quantifying and interpreting the strength of group disparities as well as for comparing biases across different models, datasets, or time points.
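A minimal sketch of the underlying idea, assuming exactly two groups and using SciPy's empirical Wasserstein-1 distance; the paper's standardized measure may differ in normalization:

```python
# Hedged sketch: group disparity of continuous risk scores as the
# Wasserstein-1 distance between the groups' score distributions.
import numpy as np
from scipy.stats import wasserstein_distance

def score_disparity(scores, group):
    """Distance between the empirical score distributions of two groups;
    0 means the distributions coincide."""
    scores, group = np.asarray(scores), np.asarray(group)
    g0, g1 = np.unique(group)  # assumes exactly two group labels
    return wasserstein_distance(scores[group == g0], scores[group == g1])
```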
arXiv Detail & Related papers (2023-08-22T12:01:49Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive, but complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- Maximal Fairness [13.542616958246725]
"Impossibility Theorem" states that satisfying a certain combination of fairness measures is impossible.
This work identifies maximal sets of commonly used fairness measures that can be simultaneously satisfied.
In total, 12 maximal sets of these fairness measures are possible: seven combinations of two measures and five combinations of three measures.
arXiv Detail & Related papers (2023-04-12T12:28:44Z)
- Increasing Fairness via Combination with Learning Guarantees [8.314000998551865]
We propose a fairness quality measure named discriminative risk to reflect both individual and group fairness aspects.
We also propose first- and second-order oracle bounds to show that fairness can be boosted via ensemble combination with theoretical learning guarantees.
arXiv Detail & Related papers (2023-01-25T20:31:06Z)
- Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes [52.92110730286403]
It is commonly believed that the marginal likelihood should be reminiscent of cross-validation metrics and that both should deteriorate with larger input dimensions.
We prove that, by tuning hyperparameters, the performance, as measured by the marginal likelihood, improves monotonically with the input dimension.
We also prove that cross-validation metrics exhibit qualitatively different behavior that is characteristic of double descent.
arXiv Detail & Related papers (2022-10-14T08:09:33Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
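As a rough illustration of the idea, not the paper's exact metric, one can average the model's output change under small per-feature perturbations; predict_proba and X are illustrative names:

```python
# Hedged sketch of a prediction-sensitivity probe via finite
# differences; the ACCUMULATED PREDICTION SENSITIVITY metric itself is
# defined differently in detail.
import numpy as np

def mean_prediction_sensitivity(predict_proba, X, eps=1e-3):
    """Average absolute change in predicted probability per unit of
    perturbation, averaged over features and examples."""
    X = np.asarray(X, dtype=float)
    base = predict_proba(X)
    total = 0.0
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += eps  # perturb one feature at a time
        total += np.abs(predict_proba(X_pert) - base).mean() / eps
    return total / X.shape[1]
```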
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Fair Mixup: Fairness via Interpolation [28.508444261249423]
We propose fair mixup, a new data augmentation strategy for imposing the fairness constraint.
We show that fairness can be achieved by regularizing the models on paths of interpolated samples between the groups.
We empirically show that it yields better generalization for both accuracy and fairness measures on benchmarks.
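A hedged PyTorch sketch of the path-regularization idea, not the paper's exact penalty; model, x_group0, and x_group1 are illustrative names:

```python
import torch

def fair_mixup_penalty(model, x_group0, x_group1, n_points=5):
    """Penalize variation of the mean prediction along the straight-line
    interpolation path between same-sized batches from the two groups."""
    n = min(len(x_group0), len(x_group1))
    x0, x1 = x_group0[:n], x_group1[:n]
    means = []
    for t in torch.linspace(0.0, 1.0, n_points):
        x_t = (1.0 - t) * x0 + t * x1  # interpolated batch
        means.append(model(x_t).mean())
    means = torch.stack(means)
    # Squared differences between consecutive points approximate the
    # derivative of the mean prediction along the path.
    return ((means[1:] - means[:-1]) ** 2).sum()
```

Added to the task loss with a weight, a penalty of this kind pushes the mean prediction to vary smoothly, ideally not at all, between the groups, a demographic-parity style constraint.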
arXiv Detail & Related papers (2021-03-11T06:57:26Z)
- Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research [2.6397379133308214]
We argue that such assumptions, which are often left implicit and unexamined, lead to inconsistent conclusions.
While the intended goal of this work may be to improve the fairness of machine learning models, these unexamined, implicit assumptions can in fact result in emergent unfairness.
arXiv Detail & Related papers (2021-02-01T22:02:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.