Fairness in Contextual Resource Allocation Systems: Metrics and
Incompatibility Results
- URL: http://arxiv.org/abs/2212.01725v1
- Date: Sun, 4 Dec 2022 02:30:58 GMT
- Title: Fairness in Contextual Resource Allocation Systems: Metrics and
Incompatibility Results
- Authors: Nathanael Jo, Bill Tang, Kathryn Dullerud, Sina Aghaei, Eric Rice,
Phebe Vayanos
- Abstract summary: We study systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing.
These systems often support communities disproportionately affected by systemic racial, gender, or other injustices.
We propose a framework for evaluating fairness in contextual resource allocation systems inspired by fairness metrics in machine learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study critical systems that allocate scarce resources to satisfy basic
needs, such as homeless services that provide housing. These systems often
support communities disproportionately affected by systemic racial, gender, or
other injustices, so it is crucial to design these systems with fairness
considerations in mind. To address this problem, we propose a framework for
evaluating fairness in contextual resource allocation systems that is inspired
by fairness metrics in machine learning. This framework can be applied to
evaluate the fairness properties of a historical policy, as well as to impose
constraints in the design of new (counterfactual) allocation policies. Our work
culminates in a set of incompatibility results that investigate the interplay
between the different fairness metrics we propose. Notably, we demonstrate
that: 1) fairness in allocation and fairness in outcomes are usually
incompatible; 2) policies that prioritize based on a vulnerability score will
usually result in unequal outcomes across groups, even if the score is
perfectly calibrated; 3) policies using contextual information beyond what is
needed to characterize baseline risk and treatment effects can be fairer in
their outcomes than those using just baseline risk and treatment effects; and
4) policies using group status in addition to baseline risk and treatment
effects are as fair as possible given all available information. Our framework
can help guide the discussion among stakeholders in deciding which fairness
metrics to impose when allocating scarce resources.
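To make incompatibility result 2) concrete, here is a minimal simulation sketch, not taken from the paper: the group risk distributions and the assumption that treatment fully averts the bad outcome are invented for illustration.

```python
# Toy illustration of result 2): a perfectly calibrated vulnerability
# score can still produce unequal outcomes across groups.
# All parameters below are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
risk = np.where(group == 0,
                rng.beta(4, 2, n),       # group A: higher baseline risk
                rng.beta(2, 4, n))       # group B: lower baseline risk

# The score equals the true probability of a bad outcome absent the
# resource, so it is perfectly calibrated by construction.
score = risk

# Allocate the scarce resource to the top 10% by score; assume (for
# illustration) that the resource fully averts the bad outcome.
k = n // 10
treated = np.zeros(n, dtype=bool)
treated[np.argsort(-score)[:k]] = True
bad_outcome = (rng.random(n) < risk) & ~treated

for g, name in ((0, "A"), (1, "B")):
    m = group == g
    print(f"group {name}: allocation rate {treated[m].mean():.2f}, "
          f"bad-outcome rate {bad_outcome[m].mean():.2f}")
```

On this synthetic data, group A receives nearly all of the resource yet still ends up with the clearly higher bad-outcome rate, because its untreated members remain at higher baseline risk: calibration alone cannot equalize outcomes.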
Related papers
- Auditing Fairness under Unobserved Confounding [56.61738581796362]
We show that meaningful bounds on treatment rates for high-risk individuals can still be given even when the assumption that all relevant risk factors are observed is relaxed or dropped entirely.
This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner.
arXiv Detail & Related papers (2024-03-18T21:09:06Z)
- Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems [5.076419064097733]
This paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness.
Overall, the proposed methods and metrics can be used to assess the fairness of automated decision systems as part of a broader accountability assessment.
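As a rough illustration of the idea, and not necessarily the statistic defined in the paper, one can total the deviation of each group's normalized confusion matrix from the population's; the helper `confusion_parity_error` below is hypothetical.

```python
# Hedged sketch of a confusion-parity-style error: total absolute
# deviation of each group's normalized confusion matrix from the
# overall one. The cited paper's exact definition may differ.
import numpy as np
from sklearn.metrics import confusion_matrix

def confusion_parity_error(y_true, y_pred, groups, labels=(0, 1)):
    # All arguments are expected to be 1-D numpy arrays.
    overall = confusion_matrix(y_true, y_pred, labels=labels,
                               normalize="all")
    return sum(
        np.abs(confusion_matrix(y_true[groups == g], y_pred[groups == g],
                                labels=labels, normalize="all") - overall).sum()
        for g in np.unique(groups)
    )
```

A value of zero means every group shares the population's confusion matrix, which is the condition an equal-confusion test checks.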
arXiv Detail & Related papers (2023-07-02T04:44:19Z)
- The Role of Relevance in Fair Ranking [1.5469452301122177]
We argue that relevance scores should satisfy a set of desired criteria in order to guide fairness interventions.
We then empirically show that not all of these criteria are met in a case study of relevance inferred from biased user click data.
Our analyses and results surface the pressing need for new approaches to relevance collection and generation.
arXiv Detail & Related papers (2023-05-09T16:58:23Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models via the sensitivity of a model's predictions to perturbations of its input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
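A rough finite-difference sketch of that idea follows; the paper's exact formulation (in particular how perturbations are weighted) may differ, and `predict_proba` is assumed here to map an (n, d) feature matrix to n scores.

```python
# Illustrative prediction-sensitivity measure: accumulated absolute
# change in model output under small per-feature perturbations.
import numpy as np

def accumulated_sensitivity(predict_proba, X, eps=1e-3):
    base = predict_proba(X)
    sens = np.zeros(len(X))
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += eps              # nudge one feature at a time
        sens += np.abs(predict_proba(X_pert) - base) / eps
    return sens
```

High sensitivity concentrated on features that proxy for protected attributes would then signal a potential fairness problem.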
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making [35.21763166288736]
We propose a general framework to create data-driven fairness-aware scoring systems.
We show that the proposed framework gives practitioners and policymakers great flexibility in selecting their desired fairness requirements.
arXiv Detail & Related papers (2021-09-21T09:46:35Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
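One standard quantification technique that fits this setting is adjusted classify-and-count, which corrects the prevalence estimated by a noisy attribute classifier for that classifier's error rates; the generic sketch below is not necessarily the paper's method.

```python
def adjusted_prevalence(observed_rate, tpr, fpr):
    """Adjusted classify-and-count: correct the raw predicted-positive
    rate using the classifier's true/false positive rates.
    Assumes tpr > fpr (a better-than-random classifier)."""
    return (observed_rate - fpr) / (tpr - fpr)

# Example: a classifier that flags 30% of cases, with tpr = 0.9 and
# fpr = 0.1, implies a true prevalence of (0.30 - 0.1) / 0.8 = 0.25.
```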
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision problem that restrict them to classification settings.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
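For reference, the two classification-bound definitions named above can be computed as follows; these are the standard formulations, not the paper's generalization, and binary 0/1 labels and predictions are assumed.

```python
# Standard group fairness metrics for binary classification:
# demographic parity compares positive-prediction rates across groups;
# equal opportunity compares true positive rates across groups.
import numpy as np

def demographic_parity_gap(y_pred, groups):
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, groups):
    tprs = [y_pred[(groups == g) & (y_true == 1)].mean()
            for g in np.unique(groups)]
    return max(tprs) - min(tprs)
```

Both are defined only over binary predictions and labels, which is exactly the restriction to classification problems that the paper's counterfactual-utility definitions aim to lift.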
arXiv Detail & Related papers (2021-08-11T16:51:27Z)
- The Limits of Computation in Solving Equity Trade-Offs in Machine Learning and Justice System Risk Assessment [0.0]
This paper explores how different ideas of racial equity in machine learning, in justice settings in particular, can present trade-offs that are difficult to solve computationally.
arXiv Detail & Related papers (2021-02-08T16:46:29Z)