The Limits of Computation in Solving Equity Trade-Offs in Machine
Learning and Justice System Risk Assessment
- URL: http://arxiv.org/abs/2102.04342v1
- Date: Mon, 8 Feb 2021 16:46:29 GMT
- Title: The Limits of Computation in Solving Equity Trade-Offs in Machine
Learning and Justice System Risk Assessment
- Authors: Jesse Russell
- Abstract summary: This paper explores how different ideas of racial equity in machine learning, in justice settings in particular, can present trade-offs that are difficult to solve computationally.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper explores how different ideas of racial equity in machine learning,
in justice settings in particular, can present trade-offs that are difficult to
solve computationally. Machine learning is often used in justice settings to
create risk assessments, which are used to determine interventions, resources,
and punitive actions. Aggregate characteristics and performance of these
machine learning-based tools, such as the distribution of scores, outcome
rates by risk level, and the frequencies of false positives and true
positives, can be problematic when examined by racial group. Models that
produce different distributions of
scores or produce a different relationship between level and outcome are
problematic when those scores and levels are directly linked to the restriction
of individual liberty and to the broader context of racial inequity. While
computation can help highlight these aspects, data and computation are unlikely
to solve them. This paper explores where values and mission might have to fill
the spaces computation leaves.
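To make the abstract's quantities concrete, here is a minimal sketch, on simulated data, of the diagnostics it names: score distributions, outcome rates by risk level, and false/true positive rates broken out by group. The groups, scores, outcomes, and threshold below are hypothetical assumptions for illustration, not the paper's data or method.

```python
# A minimal sketch, on simulated data, of the per-group quantities the
# abstract names: score distributions, outcome rates by risk level, and
# false/true positive rates. Groups, scores, and outcomes are all
# hypothetical; they illustrate the diagnostics, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.choice(["A", "B"], size=n)      # hypothetical racial groups
score = rng.uniform(0.0, 1.0, size=n)       # risk scores in [0, 1]

# Simulate outcomes whose base rates differ by group -- the setting in
# which the fairness criteria discussed above begin to conflict.
p_outcome = np.where(group == "A", 0.2, 0.4) + 0.3 * score
outcome = rng.uniform(0.0, 1.0, size=n) < p_outcome

threshold = 0.5                              # cut-off for "high risk"
high_risk = score >= threshold

for g in ("A", "B"):
    m = group == g
    tp = np.sum(high_risk & outcome & m)
    fp = np.sum(high_risk & ~outcome & m)
    fn = np.sum(~high_risk & outcome & m)
    tn = np.sum(~high_risk & ~outcome & m)
    fpr = fp / (fp + tn)    # false positive rate within the group
    tpr = tp / (tp + fn)    # true positive rate within the group
    # Outcome rate by risk level: below vs. at-or-above the cut-off.
    low_rate = outcome[m & ~high_risk].mean()
    high_rate = outcome[m & high_risk].mean()
    print(f"group {g}: mean score={score[m].mean():.2f}  "
          f"FPR={fpr:.2f}  TPR={tpr:.2f}  "
          f"outcome rate low/high={low_rate:.2f}/{high_rate:.2f}")
```

Computation of this kind can surface the disparities the abstract describes; as the paper argues, it cannot by itself decide which of the conflicting quantities ought to be equalized.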
Related papers
- The Role of Relevance in Fair Ranking [1.5469452301122177]
We argue that relevance scores should satisfy a set of desired criteria in order to guide fairness interventions.
We then empirically show that not all of these criteria are met in a case study of relevance inferred from biased user click data.
Our analyses and results surface the pressing need for new approaches to relevance collection and generation.
arXiv Detail & Related papers (2023-05-09T16:58:23Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Fairness in Contextual Resource Allocation Systems: Metrics and Incompatibility Results [7.705334602362225]
We study systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing.
These systems often support communities disproportionately affected by systemic racial, gender, or other injustices.
We propose a framework for evaluating fairness in contextual resource allocation systems inspired by fairness metrics in machine learning.
arXiv Detail & Related papers (2022-12-04T02:30:58Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making [35.21763166288736]
We propose a general framework to create data-driven fairness-aware scoring systems.
We show that the proposed framework gives practitioners and policymakers great flexibility in selecting their desired fairness requirements.
arXiv Detail & Related papers (2021-09-21T09:46:35Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Machine Learning Fairness in Justice Systems: Base Rates, False Positives, and False Negatives [0.0]
There is little guidance on how fairness might be achieved in practice.
This paper considers the consequences of having higher rates of false positives for one racial group and higher rates of false negatives for another racial group; a worked numerical illustration of this tension appears after this list.
arXiv Detail & Related papers (2020-08-05T16:31:40Z) - Fast Fair Regression via Efficient Approximations of Mutual Information [0.0]
This paper introduces fast approximations of the independence, separation and sufficiency group fairness criteria for regression models.
It uses such approximations as regularisers to enforce fairness within a regularised risk minimisation framework.
Experiments on real-world datasets indicate that, despite its superior computational efficiency, our algorithm still displays state-of-the-art accuracy/fairness trade-offs.
arXiv Detail & Related papers (2020-02-14T08:50:51Z)
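The base-rate tension raised in the "Machine Learning Fairness in Justice Systems" entry above can be illustrated with a standard identity (Chouldechova, 2017) linking a group's false positive rate to its base rate p, precision (PPV), and true positive rate (TPR): FPR = p/(1-p) · (1-PPV)/PPV · TPR. The numbers below are hypothetical; the sketch only demonstrates that when base rates differ, holding PPV and TPR equal across groups forces the false positive rates apart.

```python
# Hypothetical numbers illustrating the trade-off: with different base
# rates p, equal precision (PPV) and equal true positive rate (TPR)
# imply unequal false positive rates, via the identity
#   FPR = p / (1 - p) * (1 - PPV) / PPV * TPR.

def implied_fpr(p: float, ppv: float, tpr: float) -> float:
    """False positive rate implied by base rate p, precision, and TPR."""
    return p / (1.0 - p) * (1.0 - ppv) / ppv * tpr

ppv, tpr = 0.7, 0.6                      # held equal for both groups
for name, p in [("group A", 0.3), ("group B", 0.5)]:
    print(f"{name}: base rate {p:.1f} -> FPR {implied_fpr(p, ppv, tpr):.3f}")

# Output:
# group A: base rate 0.3 -> FPR 0.110
# group B: base rate 0.5 -> FPR 0.257
```

No choice of threshold closes this gap while base rates differ, so deciding which error rate to equalize is a question of values rather than computation, which is the main paper's point.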
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.