On the Maximal Local Disparity of Fairness-Aware Classifiers
- URL: http://arxiv.org/abs/2406.03255v1
- Date: Wed, 5 Jun 2024 13:35:48 GMT
- Title: On the Maximal Local Disparity of Fairness-Aware Classifiers
- Authors: Jinqiu Jin, Haoxuan Li, Fuli Feng
- Abstract summary: We propose a novel fairness metric called Maximal Cumulative ratio Disparity along varying Predictions' neighborhood (MCDP).
To calculate the MCDP accurately and efficiently, we develop a provably exact algorithm and an approximate algorithm that greatly reduces the computational complexity with low estimation error.
- Score: 35.98015221840018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness has become a crucial aspect in the development of trustworthy machine learning algorithms. Current fairness metrics to measure the violation of demographic parity have the following drawbacks: (i) the average difference of model predictions on two groups cannot reflect their distribution disparity, and (ii) the overall calculation along all possible predictions conceals the extreme local disparity at or around certain predictions. In this work, we propose a novel fairness metric called Maximal Cumulative ratio Disparity along varying Predictions' neighborhood (MCDP), for measuring the maximal local disparity of the fairness-aware classifiers. To accurately and efficiently calculate the MCDP, we develop a provably exact and an approximate calculation algorithm that greatly reduces the computational complexity with low estimation error. We further propose a bi-level optimization algorithm using a differentiable approximation of the MCDP for improving the algorithmic fairness. Extensive experiments on both tabular and image datasets validate that our fair training algorithm can achieve superior fairness-accuracy trade-offs.
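The abstract's core idea, measuring the worst-case local gap between two groups' prediction distributions rather than an average difference, can be illustrated with a simple KS-style statistic over empirical CDFs of model scores. This is only a conceptual sketch, not the paper's exact MCDP (which is defined via cumulative ratios over a varying prediction neighborhood); the function name and parameters below are hypothetical.

```python
import numpy as np

def max_local_disparity(scores, groups, grid_size=101):
    """Sketch: worst-case gap between group-wise CDFs of model predictions.

    Illustrates the idea of a maximal (rather than average) local disparity;
    the paper's MCDP uses cumulative ratios over a prediction neighborhood,
    so this is an approximation of the concept only.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    s0 = np.sort(scores[groups == 0])
    s1 = np.sort(scores[groups == 1])
    thresholds = np.linspace(0.0, 1.0, grid_size)
    # Empirical CDF of each group's predictions at every threshold.
    cdf0 = np.searchsorted(s0, thresholds, side="right") / len(s0)
    cdf1 = np.searchsorted(s1, thresholds, side="right") / len(s1)
    gaps = np.abs(cdf0 - cdf1)
    k = int(np.argmax(gaps))
    # Return the worst-case gap and the prediction value where it occurs.
    return gaps[k], thresholds[k]
```

An average-based demographic parity metric would report only the mean score difference between groups; the maximum over thresholds instead exposes the single prediction region where the two groups diverge most, which is the "extreme local disparity" the abstract argues average metrics conceal.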
Related papers
- Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly [2.002741592555996]
Existing techniques for assessing the discrimination level of machine learning models include commonly used group and individual fairness measures.
We propose a "harmonic fairness measure via manifold (HFM)" based on distances between sets.
Empirical results indicate that the proposed fairness measure HFM is valid and that the proposed ApproxDist is effective and efficient.
arXiv Detail & Related papers (2024-05-15T11:07:40Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
Fair algorithms that retain strong predictive performance while generalizing better are needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Metrizing Fairness [5.323439381187456]
We study supervised learning problems that have significant effects on individuals from two demographic groups.
We seek predictors that are fair with respect to a group fairness criterion such as statistical parity (SP)
In this paper, we identify conditions under which hard SP constraints are guaranteed to improve predictive accuracy.
arXiv Detail & Related papers (2022-05-30T12:28:10Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Learning Optimal Fair Classification Trees: Trade-offs Between Interpretability, Fairness, and Accuracy [7.215903549622416]
We propose a mixed integer optimization framework for learning optimal classification trees.
We benchmark our method against state-of-the-art approaches for fair classification on popular datasets.
Our method consistently finds decisions with almost full parity, while other methods rarely do.
arXiv Detail & Related papers (2022-01-24T19:47:10Z)
- A Unified Framework for Multi-distribution Density Ratio Estimation [101.67420298343512]
Binary density ratio estimation (DRE) provides the foundation for many state-of-the-art machine learning algorithms.
We develop a general framework from the perspective of Bregman divergence minimization.
We show that our framework leads to methods that strictly generalize their counterparts in binary DRE.
arXiv Detail & Related papers (2021-12-07T01:23:20Z)
- The Sharpe predictor for fairness in machine learning [0.0]
In machine learning applications, unfair predictions may discriminate against a minority group.
Most existing approaches for fair machine learning (FML) treat fairness as a constraint or a penalization term in the optimization of a ML model.
We introduce a new paradigm for FML based on Stochastic Multi-Objective Optimization (SMOO), where accuracy and fairness metrics stand as conflicting objectives to be optimized simultaneously.
The Sharpe predictor for FML provides the highest prediction return (accuracy) per unit of prediction risk (unfairness).
arXiv Detail & Related papers (2021-08-13T22:22:34Z)
- Understanding and Mitigating Accuracy Disparity in Regression [34.63275666745179]
We study the accuracy disparity problem in regression.
We propose an error decomposition theorem, which decomposes the accuracy disparity into the distance between marginal label distributions.
We then propose an algorithm to reduce this disparity and analyze the game-theoretic optima of the proposed objective functions.
arXiv Detail & Related papers (2021-02-24T01:24:50Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.