Machine Learning Fairness in Justice Systems: Base Rates, False
Positives, and False Negatives
- URL: http://arxiv.org/abs/2008.02214v1
- Date: Wed, 5 Aug 2020 16:31:40 GMT
- Title: Machine Learning Fairness in Justice Systems: Base Rates, False
Positives, and False Negatives
- Authors: Jesse Russell
- Abstract summary: There is little guidance on how fairness might be achieved in practice.
This paper considers the consequences of having higher rates of false positives for one racial group and higher rates of false negatives for another racial group.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning best practice statements have proliferated, but there is a
lack of consensus on what the standards should be. For fairness standards in
particular, there is little guidance on how fairness might be achieved in
practice. Specifically, fairness in errors (both false negatives and false
positives) can pose a problem of how to set weights, how to make unavoidable
tradeoffs, and how to judge models that present different kinds of errors
across racial groups. This paper considers the consequences of having higher
rates of false positives for one racial group and higher rates of false
negatives for another racial group. The paper examines how different errors in
justice settings can present problems for machine learning applications, the
limits of computation for resolving tradeoffs, and how solutions might have to
be crafted through courageous conversations with leadership, line workers,
stakeholders, and impacted communities.
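The tradeoff the abstract describes can be made concrete with a small sketch (hypothetical data, not from the paper): computing false positive and false negative rates separately for two groups shows how one group can bear mostly false positives while the other bears mostly false negatives.

```python
# Illustrative sketch with made-up data: per-group false positive rate
# (FPR) and false negative rate (FNR) for a binary risk prediction.

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical outcomes for two groups with different base rates:
# group A's errors are mostly false positives, group B's mostly
# false negatives -- the asymmetry the paper examines.
fpr_a, fnr_a = error_rates([0, 0, 0, 1, 1], [1, 0, 1, 1, 1])  # FPR 2/3, FNR 0
fpr_b, fnr_b = error_rates([1, 1, 1, 0, 0], [0, 1, 0, 0, 0])  # FPR 0, FNR 2/3
```

Equalizing one rate across groups generally shifts the imbalance onto the other rate when base rates differ, which is why the paper argues the tradeoff cannot be resolved by computation alone.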
Related papers
- DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Improving Robust Fairness via Balance Adversarial Training [51.67643171193376]
Adversarial training (AT) methods are effective against adversarial attacks, yet they introduce severe disparity of accuracy and robustness between different classes.
We propose Balance Adversarial Training (BAT) to address the robust fairness problem.
arXiv Detail & Related papers (2022-09-15T14:44:48Z)
- Is calibration a fairness requirement? An argument from the point of view of moral philosophy and decision theory [0.0]
We argue that a violation of group calibration may be unfair in some cases, but not unfair in others.
This is in line with claims already advanced in the literature, that algorithmic fairness should be defined in a way that is sensitive to context.
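As a rough illustration of the group calibration criterion this paper debates (hypothetical data and helper, not from the paper): a score is group-calibrated when, within each score bin, the observed positive rate matches the score for every group.

```python
# Illustrative sketch: observed positive rate per (group, score bin).
# Group calibration holds when this rate equals the score in each bin
# for every group.
from collections import defaultdict

def calibration_by_group(scores, labels, groups):
    """Map (group, score) -> observed positive rate."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for s, y, g in zip(scores, labels, groups):
        totals[(g, s)] += 1
        positives[(g, s)] += y
    return {k: positives[k] / totals[k] for k in totals}

# Made-up data: both groups show 4/5 positives at score 0.8, so the
# score is calibrated for each group in this bin.
scores = [0.8] * 10
labels = [1, 1, 1, 1, 0, 1, 1, 1, 1, 0]
groups = ["a"] * 5 + ["b"] * 5
rates = calibration_by_group(scores, labels, groups)
```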
arXiv Detail & Related papers (2022-05-11T14:03:33Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance on two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that can map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Blackbox Post-Processing for Multiclass Fairness [1.5305403478254664]
We consider modifying the predictions of a blackbox machine learning classifier in order to achieve fairness in a multiclass setting.
We explore when our approach produces both fair and accurate predictions through systematic synthetic experiments.
We find that overall, our approach produces minor drops in accuracy and enforces fairness when the number of individuals in the dataset is high.
arXiv Detail & Related papers (2022-01-12T13:21:20Z)
- Parity-based Cumulative Fairness-aware Boosting [7.824964622317634]
Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race.
We propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution at each round.
Our experiments show that our approach can achieve fairness in terms of statistical parity, equal opportunity, and disparate mistreatment.
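For reference, two of the parity criteria named above can be computed directly (an illustrative sketch with hypothetical data, not the AdaFair implementation): statistical parity compares positive prediction rates across groups, while equal opportunity compares true positive rates.

```python
# Illustrative sketch: statistical parity gap and equal opportunity
# gap between two groups "a" and "b".

def true_positive_rate(y_true, y_pred):
    pos = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(pos) / len(pos) if pos else 0.0

def parity_gaps(y_true, y_pred, groups):
    """Return (statistical parity gap, equal opportunity gap)."""
    a = [i for i, g in enumerate(groups) if g == "a"]
    b = [i for i, g in enumerate(groups) if g == "b"]
    pr = lambda idx: sum(y_pred[i] for i in idx) / len(idx)
    tpr = lambda idx: true_positive_rate(
        [y_true[i] for i in idx], [y_pred[i] for i in idx]
    )
    return abs(pr(a) - pr(b)), abs(tpr(a) - tpr(b))

# Made-up example: equal positive prediction rates (parity gap 0)
# can coexist with unequal true positive rates (opportunity gap 0.5).
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
sp_gap, eo_gap = parity_gaps(y_true, y_pred, groups)
```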
arXiv Detail & Related papers (2022-01-04T14:16:36Z)
- Fairness-aware Class Imbalanced Learning [57.45784950421179]
We evaluate long-tail learning methods for tweet sentiment and occupation classification.
We extend a margin-loss based approach with methods to enforce fairness.
arXiv Detail & Related papers (2021-09-21T22:16:30Z)
- The Limits of Computation in Solving Equity Trade-Offs in Machine Learning and Justice System Risk Assessment [0.0]
This paper explores how different ideas of racial equity in machine learning, in justice settings in particular, can present trade-offs that are difficult to solve computationally.
arXiv Detail & Related papers (2021-02-08T16:46:29Z)
- Recovering from Biased Data: Can Fairness Constraints Improve Accuracy? [11.435833538081557]
Empirical Risk Minimization (ERM) may produce a classifier that not only is biased but also has suboptimal accuracy on the true data distribution.
We examine the ability of fairness-constrained ERM to correct this problem.
We also consider other recovery methods including reweighting the training data, Equalized Odds, and Demographic Parity.
arXiv Detail & Related papers (2019-12-02T22:00:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.