Towards Better Fairness-Utility Trade-off: A Comprehensive
Measurement-Based Reinforcement Learning Framework
- URL: http://arxiv.org/abs/2307.11379v1
- Date: Fri, 21 Jul 2023 06:34:41 GMT
- Title: Towards Better Fairness-Utility Trade-off: A Comprehensive
Measurement-Based Reinforcement Learning Framework
- Authors: Simiao Zhang, Jitao Bai, Menghong Guan, Yihao Huang, Yueling Zhang,
Jun Sun and Geguang Pu
- Abstract summary: Ensuring machine learning's fairness while maintaining its utility is a challenging but crucial problem.
We propose CFU (Comprehensive Fairness-Utility), a reinforcement learning-based framework, to efficiently improve the fairness-utility trade-off.
CFU outperforms state-of-the-art techniques, achieving an average improvement of 37.5%.
- Score: 7.8940121707748245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning is widely used to make decisions with societal impact, such
as bank loan approval, criminal sentencing, and resume filtering. Ensuring
its fairness while maintaining utility is a challenging but crucial
issue. Fairness is a complex and context-dependent concept with over 70
different measurement metrics. Since existing regulations are often vague in
terms of which metric to use and different organizations may prefer different
fairness metrics, it is important to have means of improving fairness
comprehensively. Existing mitigation techniques often target one specific
fairness metric and struggle to improve multiple notions of fairness
simultaneously. In this work, we propose CFU (Comprehensive Fairness-Utility),
a reinforcement learning-based framework, to efficiently improve the
fairness-utility trade-off in machine learning classifiers. We establish a
comprehensive measurement that simultaneously considers multiple fairness
notions as well as utility, and we propose new metrics based on an in-depth
analysis of the relationships between different fairness metrics. The reward
function of CFU is constructed from this comprehensive measurement and the new
metrics.
We conduct extensive experiments to evaluate CFU on 6 tasks, 3 machine learning
models, and 15 fairness-utility measurements. The results demonstrate that CFU
can improve the classifier on multiple fairness metrics without sacrificing its
utility. It outperforms state-of-the-art techniques, achieving an average
improvement of 37.5%.
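The abstract describes a reward built from a comprehensive measurement of multiple fairness notions plus utility, but does not give the formula here. The Python sketch below is a minimal, hypothetical illustration of that idea: it folds statistical parity, equalized odds, and accuracy into one scalar reward. The function names, metric choices, and weights `w_util`/`w_fair` are assumptions for illustration, not CFU's actual construction.

```python
import numpy as np

def statistical_parity_diff(y_pred, group):
    """|P(Y_hat = 1 | A = 0) - P(Y_hat = 1 | A = 1)|; lower is fairer.
    Assumes both groups are non-empty."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap in TPR/FPR between the two groups; lower is fairer."""
    gaps = []
    for y in (0, 1):  # y == 0 compares FPRs, y == 1 compares TPRs
        m = y_true == y
        gaps.append(abs(y_pred[m & (group == 0)].mean()
                        - y_pred[m & (group == 1)].mean()))
    return max(gaps)

def reward(y_true, y_pred, group, w_util=1.0, w_fair=1.0):
    """Hypothetical scalar reward: utility minus averaged unfairness."""
    utility = (y_true == y_pred).mean()  # accuracy as the utility term
    unfairness = 0.5 * (statistical_parity_diff(y_pred, group)
                        + equalized_odds_diff(y_true, y_pred, group))
    return w_util * utility - w_fair * unfairness

# Toy usage with a deliberately biased random predictor:
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = (rng.random(1000) < 0.4 + 0.2 * group).astype(int)
print(reward(y_true, y_pred, group))
```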
Related papers
- Federated Fairness Analytics: Quantifying Fairness in Federated Learning [2.9674793945631097]
Federated Learning (FL) is a privacy-enhancing technology for distributed ML.
FL inherits fairness challenges from classical ML and introduces new ones.
We propose Federated Fairness Analytics - a methodology for measuring fairness.
arXiv Detail & Related papers (2024-08-15T15:23:32Z)
- Intrinsic Fairness-Accuracy Tradeoffs under Equalized Odds [8.471466670802817]
We study the tradeoff between fairness and accuracy under the statistical notion of equalized odds.
We present a new upper bound on the accuracy as a function of the fairness budget.
Our results show that achieving high accuracy subject to a low bias can be fundamentally limited by the statistical disparity across groups.
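The bound in this paper is analytical; as a purely illustrative companion, the sketch below traces an empirical accuracy-versus-fairness-budget frontier on synthetic data by sweeping per-group decision thresholds. The data-generating process and all names are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)
# Synthetic scores whose distributions differ across groups, building in
# a statistical disparity between group 0 and group 1.
score = rng.normal(loc=0.2 + 0.4 * group, scale=1.0, size=n)
y_true = (score + rng.normal(scale=0.8, size=n) > 0.7).astype(int)

def eo_gap(y_pred):
    """Equalized-odds gap: max difference in TPR/FPR across groups."""
    gaps = []
    for y in (0, 1):
        m = y_true == y
        gaps.append(abs(y_pred[m & (group == 0)].mean()
                        - y_pred[m & (group == 1)].mean()))
    return max(gaps)

# Sweep per-group decision thresholds and record the best accuracy
# attainable under each equalized-odds budget.
pairs = []
for t0 in np.linspace(-1.0, 1.5, 26):
    for t1 in np.linspace(-1.0, 1.5, 26):
        y_pred = np.where(group == 0, score > t0, score > t1).astype(int)
        pairs.append((eo_gap(y_pred), (y_pred == y_true).mean()))
for budget in (0.02, 0.05, 0.10, 0.20):
    feasible = [acc for gap, acc in pairs if gap <= budget]
    if feasible:
        print(f"EO gap <= {budget:.2f}: best accuracy {max(feasible):.3f}")
```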
arXiv Detail & Related papers (2024-05-12T23:15:21Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
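The framework in this paper estimates F-divergences variationally through a min-max game; the simplified sketch below instead uses a histogram-based symmetrized KL penalty between group-conditional score distributions, just to illustrate what a divergence-regularized fair objective looks like. The binning scheme and all names are assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """Plug-in KL(p || q) between two discrete distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def fair_regularized_loss(y_true, p_hat, group, lam=1.0, bins=10):
    """Binary cross-entropy plus a symmetrized-KL penalty between the
    group-conditional histograms of predicted scores (groups non-empty)."""
    eps = 1e-8
    bce = -np.mean(y_true * np.log(p_hat + eps)
                   + (1 - y_true) * np.log(1 - p_hat + eps))
    edges = np.linspace(0.0, 1.0, bins + 1)
    h0, _ = np.histogram(p_hat[group == 0], bins=edges)
    h1, _ = np.histogram(p_hat[group == 1], bins=edges)
    p0, p1 = h0 / h0.sum(), h1 / h1.sum()
    # Symmetrize so neither group serves as the reference distribution.
    penalty = 0.5 * (kl_divergence(p0, p1) + kl_divergence(p1, p0))
    return bce + lam * penalty
```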
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
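FairCOCCO is built on normalized cross-covariance operators; as a closely related but simpler kernel dependence measure, the sketch below computes a biased HSIC estimate between predicted scores and a sensitive attribute (near zero under independence). The RBF bandwidth and all names are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """RBF-kernel Gram matrix of a 1-D sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased HSIC estimate, trace(K H L H) / (n - 1)^2;
    close to zero when x and y are independent."""
    n = len(x)
    h = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    k, l = rbf_gram(x, sigma), rbf_gram(y, sigma)
    return float(np.trace(k @ h @ l @ h)) / (n - 1) ** 2

# Usage: dependence between predicted scores and a sensitive attribute.
rng = np.random.default_rng(1)
a = rng.integers(0, 2, 200).astype(float)
biased = 0.3 * a + rng.normal(scale=0.5, size=200)  # scores depend on a
fair = rng.normal(scale=0.5, size=200)              # scores independent of a
print(hsic(biased, a), hsic(fair, a))  # first value is noticeably larger
```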
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Mitigating Unfairness via Evolutionary Multi-objective Ensemble Learning [0.8563354084119061]
Optimising one or several fairness measures may sacrifice or degrade other measures.
A multi-objective evolutionary learning framework is used to simultaneously optimise several metrics.
Our proposed algorithm can provide decision-makers with better trade-offs between accuracy and multiple fairness metrics.
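The framework itself is evolutionary; the fragment below only illustrates the Pareto-dominance test such methods use to keep non-dominated (error, unfairness) trade-offs for decision-makers. The objective tuples and names are hypothetical.

```python
def dominates(a, b):
    """a, b: tuples of objectives to minimise (e.g. error, unfairness gap).
    True if a is no worse on every objective and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only the non-dominated candidates."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# Usage: each tuple is (1 - accuracy, statistical parity gap) for one model.
models = [(0.10, 0.20), (0.12, 0.05), (0.15, 0.04), (0.11, 0.25), (0.12, 0.06)]
print(pareto_front(models))  # -> [(0.1, 0.2), (0.12, 0.05), (0.15, 0.04)]
```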
arXiv Detail & Related papers (2022-10-30T06:34:10Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
Accumulated prediction sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
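The paper defines accumulated prediction sensitivity via model gradients weighted toward protected features; the finite-difference stand-in below merely shows the underlying idea of measuring how strongly a score reacts to perturbing one input feature. It is a simplified, assumption-laden sketch, not the paper's metric.

```python
import numpy as np

def prediction_sensitivity(predict_proba, x, feature_idx, eps=1e-3):
    """Finite-difference sensitivity of the model's positive-class score
    to a small perturbation of one feature, averaged over the samples."""
    x_plus, x_minus = x.copy(), x.copy()
    x_plus[:, feature_idx] += eps
    x_minus[:, feature_idx] -= eps
    grad = (predict_proba(x_plus) - predict_proba(x_minus)) / (2.0 * eps)
    return float(np.mean(np.abs(grad)))

# Usage with an illustrative linear-sigmoid scorer:
w = np.array([0.5, -1.2, 2.0])
predict = lambda x: 1.0 / (1.0 + np.exp(-x @ w))
x = np.random.default_rng(2).normal(size=(100, 3))
# Sensitivity to feature 2 dominates, mirroring its larger weight:
print([round(prediction_sensitivity(predict, x, j), 3) for j in range(3)])
```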
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Cascaded Debiasing: Studying the Cumulative Effect of Multiple Fairness-Enhancing Interventions [48.98659895355356]
This paper investigates the cumulative effect of multiple fairness enhancing interventions at different stages of the machine learning (ML) pipeline.
On aggregate, applying multiple interventions results in better fairness but lower utility than individual interventions.
On the downside, fairness-enhancing interventions can negatively impact different population groups, especially the privileged group.
arXiv Detail & Related papers (2022-02-08T09:20:58Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
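One known recipe for learning an individual-fairness metric from data is to estimate a sensitive subspace and measure distance after projecting it out. The sketch below does this with a single least-squares direction; it is an illustrative variant, not necessarily either of the paper's two methods, and all names are assumptions.

```python
import numpy as np

def sensitive_direction(x, a):
    """Least-squares direction in feature space that best predicts the
    sensitive attribute a: a crude one-dimensional sensitive subspace."""
    w, *_ = np.linalg.lstsq(x, a - a.mean(), rcond=None)
    return w / np.linalg.norm(w)

def fair_distance(x1, x2, v):
    """Euclidean distance after projecting out the sensitive direction v,
    so individuals who differ mainly along v count as similar."""
    proj = np.eye(len(v)) - np.outer(v, v)
    return float(np.linalg.norm(proj @ (x1 - x2)))

rng = np.random.default_rng(3)
x = rng.normal(size=(200, 4))
a = (x[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)  # attribute leaks via feature 0
v = sensitive_direction(x, a)
# Two points differing only along the sensitive direction are ~0 apart:
print(fair_distance(x[0], x[0] + 2.0 * v, v))
```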
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.