Group Fairness in Prediction-Based Decision Making: From Moral
Assessment to Implementation
- URL: http://arxiv.org/abs/2210.10456v1
- Date: Wed, 19 Oct 2022 10:44:21 GMT
- Title: Group Fairness in Prediction-Based Decision Making: From Moral
Assessment to Implementation
- Authors: Joachim Baumann, Christoph Heitz
- Abstract summary: We introduce a framework for the moral assessment of what fairness means in a given context.
We map the assessment's results to established statistical group fairness criteria.
We extend the FEC principle to cover all types of group fairness criteria.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring the fairness of prediction-based decision making typically relies on
statistical group fairness criteria. Which of these criteria is the morally most
appropriate one depends on the context, and its choice requires an ethical
analysis. In this paper, we present a step-by-step procedure integrating three
elements: (a) a framework for the moral assessment of what fairness means in a
given context, based on the recently proposed general principle of "Fair
equality of chances" (FEC) (b) a mapping of the assessment's results to
established statistical group fairness criteria, and (c) a method for
integrating the thus-defined fairness into optimal decision making. As a second
contribution, we present new applications of the FEC principle and show that, so
extended, the FEC framework covers all types of group fairness criteria:
independence, separation, and sufficiency. Third, we introduce an extended
version of the FEC principle, which additionally allows accounting for morally
irrelevant elements of the fairness assessment and links to well-known
relaxations of the fairness criteria. This paper presents a framework to
develop fair decision systems in a conceptually sound way, combining the moral
and the computational elements of fair prediction-based decision-making in an
integrated approach. Data and code to reproduce our results are available at
https://github.com/joebaumann/fair-prediction-based-decision-making.
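To make element (b) concrete, the following is a minimal sketch, assuming binary decisions D, outcomes Y, and a group attribute A, of the per-group rates whose equality across groups defines the three families of criteria named in the abstract: independence (acceptance rates), separation (true/false positive rates), and sufficiency (PPV and FOR). The variable names and toy data are illustrative assumptions and are not taken from the paper's repository.

```python
import numpy as np

def rate(event, given):
    """Empirical P(event | given); NaN if the conditioning set is empty."""
    return event[given].mean() if given.sum() > 0 else np.nan

def group_fairness_report(D, Y, A):
    """Per-group rates whose equality across groups defines each criterion."""
    report = {}
    for a in np.unique(A):
        g = A == a
        report[int(a)] = {
            # Independence: acceptance rates P(D=1 | A=a) match across groups.
            "acceptance_rate": rate(D == 1, g),
            # Separation: error rates P(D=1 | Y=y, A=a) match across groups.
            "tpr": rate(D == 1, g & (Y == 1)),
            "fpr": rate(D == 1, g & (Y == 0)),
            # Sufficiency: calibration P(Y=1 | D=d, A=a) matches across groups.
            "ppv": rate(Y == 1, g & (D == 1)),
            "for": rate(Y == 1, g & (D == 0)),
        }
    return report

rng = np.random.default_rng(0)
A = rng.integers(0, 2, 1000)           # group membership
Y = rng.binomial(1, 0.4 + 0.1 * A)     # outcomes with a group base-rate gap
D = rng.binomial(1, 0.3 + 0.2 * Y)     # decisions correlated with outcomes
for group, stats in group_fairness_report(D, Y, A).items():
    print(group, {k: round(float(v), 3) for k, v in stats.items()})
```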
Related papers
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions.
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- SCALES: From Fairness Principles to Constrained Decision-Making [16.906822244101445]
We show that well-known fairness principles can be encoded as a utility component, a non-causal component, or a causal component.
We show that our framework produces fair policies that embody alternative fairness principles in single-step and sequential decision-making scenarios.
arXiv Detail & Related papers (2022-09-22T08:44:36Z)
- A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs [0.0]
In prediction-based decision-making systems, different perspectives can be at odds.
The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly.
We propose a framework to make these value-laden choices clearly visible.
arXiv Detail & Related papers (2022-06-06T20:31:55Z)
- Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency [0.0]
This paper focuses on the fairness concepts of positive predictive value (PPV) parity, false omission rate (FOR) parity, and sufficiency.
We show that group-specific threshold rules are optimal for PPV parity and FOR parity; a toy sketch of such threshold rules appears after this list.
We also provide a solution for the optimal decision rules satisfying the fairness constraint of sufficiency.
arXiv Detail & Related papers (2022-06-05T18:47:34Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
- Principal Fairness for Human and Algorithmic Decision-Making [1.2691047660244335]
We introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making.
Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be impacted by the decision.
arXiv Detail & Related papers (2020-05-21T00:24:54Z)
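The group-specific threshold rules referenced in the Enforcing Group Fairness entry above also illustrate element (c) of the main paper: integrating a chosen fairness criterion into optimal decision making. Below is a minimal sketch, assuming calibrated scores, a binary group attribute, and a simple linear utility; it grid-searches one threshold per group under an approximate PPV parity constraint. The utility weights, tolerance, and synthetic data are illustrative assumptions, not the paper's analytical solution.

```python
import itertools
import numpy as np

def ppv(Y, D):
    """P(Y=1 | D=1): positive predictive value; NaN if nobody is accepted."""
    return Y[D == 1].mean() if D.sum() > 0 else np.nan

def utility(Y, D, benefit=1.0, cost=0.5):
    """Reward accepted positives, penalize accepted negatives (toy weights)."""
    return benefit * np.sum((D == 1) & (Y == 1)) - cost * np.sum((D == 1) & (Y == 0))

def best_thresholds(scores, Y, A, eps=0.05):
    """Grid-search per-group thresholds maximizing utility under |PPV gap| <= eps."""
    grid = np.linspace(0.05, 0.95, 19)
    best, best_u = None, -np.inf
    for t0, t1 in itertools.product(grid, repeat=2):
        D = np.where(A == 0, scores >= t0, scores >= t1).astype(int)
        p0 = ppv(Y[A == 0], D[A == 0])
        p1 = ppv(Y[A == 1], D[A == 1])
        if np.isnan(p0) or np.isnan(p1) or abs(p0 - p1) > eps:
            continue  # violates the approximate PPV parity constraint
        u = utility(Y, D)
        if u > best_u:
            best, best_u = (t0, t1), u
    return best, best_u

rng = np.random.default_rng(1)
A = rng.integers(0, 2, 2000)                            # group membership
scores = np.clip(rng.beta(2, 3, 2000) + 0.1 * A, 0, 1)  # group-shifted scores
Y = rng.binomial(1, scores)                             # calibrated by construction
print(best_thresholds(scores, Y, A))
```

Because the synthetic scores of the two groups have different distributions, a single shared threshold would produce unequal PPVs; the search therefore settles on group-specific thresholds, mirroring the optimality result summarized above.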
This list is automatically generated from the titles and abstracts of the papers on this site.