Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic
Multi-Objective Approach
- URL: http://arxiv.org/abs/2008.01132v3
- Date: Fri, 18 Mar 2022 04:34:04 GMT
- Title: Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic
Multi-Objective Approach
- Authors: Suyun Liu and Luis Nunes Vicente
- Abstract summary: In the application of machine learning to real-life decision-making systems, the prediction outcomes might discriminate against people with sensitive attributes, leading to unfairness.
The commonly used strategy in fair machine learning is to include fairness as a constraint or a penalization term in the minimization of the prediction loss.
In this paper, we introduce a new approach to handle fairness by formulating a multi-objective optimization problem.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the application of machine learning to real-life decision-making systems,
e.g., credit scoring and criminal justice, the prediction outcomes might
discriminate against people with sensitive attributes, leading to unfairness.
The commonly used strategy in fair machine learning is to include fairness as a
constraint or a penalization term in the minimization of the prediction loss,
which ultimately limits the information given to decision-makers. In this
paper, we introduce a new approach to handle fairness by formulating a
stochastic multi-objective optimization problem for which the corresponding
Pareto fronts uniquely and comprehensively define the accuracy-fairness
trade-offs. We have then applied a stochastic approximation-type method to
efficiently obtain well-spread and accurate Pareto fronts, and by doing so we
can handle training data arriving in a streaming way.
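The abstract describes tracing accuracy-fairness Pareto fronts via stochastic optimization. The sketch below is only an illustration of that idea using a weighted-sum scalarization swept over weights, not the authors' stochastic multi-gradient method; the synthetic data, the logistic model, and the smoothed demographic-parity gap used as the unfairness objective are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one feature x correlated with a binary sensitive attribute s,
# and a label y driven by x (so accuracy and fairness genuinely conflict).
n = 2000
s = rng.integers(0, 2, n)                        # sensitive group membership
x = rng.normal(loc=s.astype(float), scale=1.0, size=n)
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(float)
X = np.column_stack([x, np.ones(n)])             # feature + bias column


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def grads(theta, idx):
    """Stochastic gradients of the log-loss and of a smooth unfairness
    objective: the squared gap in mean predicted score between groups."""
    Xb, yb, sb = X[idx], y[idx], s[idx]
    p = sigmoid(Xb @ theta)
    g_loss = Xb.T @ (p - yb) / len(idx)
    gap = p[sb == 1].mean() - p[sb == 0].mean()
    d = Xb.T * (p * (1.0 - p))                   # per-point d(p_i)/d(theta)
    g_fair = 2.0 * gap * (d[:, sb == 1].mean(axis=1) - d[:, sb == 0].mean(axis=1))
    return g_loss, g_fair


# Sweep scalarization weights: minimize w * loss + (1 - w) * unfairness by SGD.
front = []
for w in np.linspace(0.0, 1.0, 11):
    theta = np.zeros(2)
    for _ in range(300):
        idx = rng.choice(n, size=128, replace=False)
        g_loss, g_fair = grads(theta, idx)
        theta -= 0.5 * (w * g_loss + (1.0 - w) * g_fair)
    p = sigmoid(X @ theta)
    acc = ((p > 0.5).astype(float) == y).mean()
    unf = abs(p[s == 1].mean() - p[s == 0].mean())
    front.append((w, acc, unf))

for w, acc, unf in front:
    print(f"w={w:.1f}  accuracy={acc:.3f}  dp_gap={unf:.3f}")
```

The resulting `(accuracy, dp_gap)` pairs approximate points along the trade-off curve: larger `w` favors accuracy, smaller `w` shrinks the group gap. The paper's stochastic multi-gradient approach avoids fixing such weights in advance and instead produces well-spread Pareto fronts directly.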
Related papers
- Achievable Fairness on Your Data With Utility Guarantees [16.78730663293352]
In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy.
We present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets.
We introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness.
arXiv Detail & Related papers (2024-02-27T00:59:32Z)
- Beyond Expectations: Learning with Stochastic Dominance Made Practical [88.06211893690964]
Stochastic dominance models risk-averse preferences for decision making with uncertain outcomes.
Despite being theoretically appealing, the application of stochastic dominance in machine learning has been scarce.
We first generalize the dominance concept to enable feasible comparisons between any arbitrary pair of random variables.
We then develop a simple and efficient approach for finding the optimal solution in terms of dominance.
arXiv Detail & Related papers (2024-02-05T03:21:23Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Stochastic Methods for AUC Optimization subject to AUC-based Fairness Constraints [51.12047280149546]
A direct approach for obtaining a fair predictive model is to train the model through optimizing its prediction performance subject to fairness constraints.
We formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
We demonstrate the effectiveness of our approach on real-world data under different fairness metrics.
arXiv Detail & Related papers (2022-12-23T22:29:08Z)
- Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting [41.96570350954332]
We propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points.
We find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric.
arXiv Detail & Related papers (2022-12-13T18:36:19Z)
- The Sharpe predictor for fairness in machine learning [0.0]
In machine learning applications, unfair predictions may discriminate against a minority group.
Most existing approaches for fair machine learning (FML) treat fairness as a constraint or a penalization term in the optimization of a ML model.
We introduce a new paradigm for FML based on Stochastic Multi-Objective Optimization (SMOO), where accuracy and fairness metrics stand as conflicting objectives to be optimized simultaneously.
The Sharpe predictor for FML provides the highest prediction return (accuracy) per unit of prediction risk (unfairness).
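By analogy with the financial Sharpe ratio, the summary above suggests selecting the Pareto point with the highest accuracy gain per unit of unfairness. The toy selection rule below uses made-up front points and a hypothetical baseline; the paper's exact definition may differ.

```python
# Hypothetical Pareto-front points: (accuracy, unfairness) pairs.
pareto = [(0.70, 0.02), (0.78, 0.05), (0.84, 0.10), (0.87, 0.20)]
baseline = 0.5  # accuracy of a trivial constant predictor (assumed "risk-free return")


def sharpe(point):
    acc, unfair = point
    return (acc - baseline) / unfair  # prediction "return" per unit of "risk"


best = max(pareto, key=sharpe)
print(best)
```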
arXiv Detail & Related papers (2021-08-13T22:22:34Z)
- All of the Fairness for Edge Prediction with Optimal Transport [11.51786288978429]
We study the problem of fairness for the task of edge prediction in graphs.
We propose an embedding-agnostic repairing procedure for the adjacency matrix of an arbitrary graph with a trade-off between the group and individual fairness.
arXiv Detail & Related papers (2020-10-30T15:33:13Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Fair Meta-Learning For Few-Shot Classification [7.672769260569742]
A machine learning algorithm trained on biased data tends to make unfair predictions.
We propose a novel fair fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-train.
We empirically demonstrate that our proposed approach efficiently mitigates biases on model output and generalizes both accuracy and fairness to unseen tasks.
arXiv Detail & Related papers (2020-09-23T22:33:47Z)
- Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantees about the quality of the information provided and accepts no responsibility for any consequences of its use.