Increasing Fairness via Combination with Learning Guarantees
- URL: http://arxiv.org/abs/2301.10813v3
- Date: Wed, 25 Oct 2023 19:44:27 GMT
- Title: Increasing Fairness via Combination with Learning Guarantees
- Authors: Yijun Bian, Kun Zhang, Anqi Qiu, Nanguang Chen
- Abstract summary: We propose a fairness quality measure named discriminative risk to reflect both individual and group fairness aspects.
We also propose first- and second-order oracle bounds to show that fairness can be boosted via ensemble combination with theoretical learning guarantees.
- Score: 8.314000998551865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Concern about discrimination hidden in machine learning (ML) models
is growing, as ML systems are deployed in ever more real-world scenarios where
any such discrimination directly affects human lives. Many techniques have been
developed to enhance fairness, including commonly used group fairness measures
and several fairness-aware methods that combine ensemble learning. However,
existing fairness measures typically capture only one aspect, either group or
individual fairness, and the two notions are hard to satisfy simultaneously, so
biases may remain even when one of them is met. Moreover, existing mechanisms
for boosting fairness are usually validated only empirically; few discuss
whether fairness can be improved with theoretical guarantees. To address
these issues, we propose a fairness quality measure named discriminative risk
to reflect both individual and group fairness aspects. Furthermore, we
investigate the properties of the proposed measure and propose first- and
second-order oracle bounds to show that fairness can be boosted via ensemble
combination with theoretical learning guarantees. The analysis is suitable for
both binary and multi-class classification. A pruning method is also proposed
to utilise our proposed measure, and comprehensive experiments are conducted to
evaluate the effectiveness of the proposed methods.
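
The abstract describes the approach only at a high level. As a rough illustration, the sketch below estimates a discriminative-risk-style quantity as the fraction of instances whose prediction flips when only the sensitive attribute is altered, and compares a plain majority-vote ensemble's flip rate with the average flip rate of its members, in the spirit of a first-order oracle bound. The toy dataset, the choice of column 0 as the sensitive attribute, and the helper names (flip_sensitive, flip_rate, majority_vote) are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch, assuming discriminative risk is estimated as the rate at which
# predictions flip when only the sensitive attribute is perturbed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier


def flip_sensitive(X, sens_idx):
    """Return a copy of X with the binary sensitive attribute flipped."""
    X_tilde = X.copy()
    X_tilde[:, sens_idx] = 1.0 - X_tilde[:, sens_idx]
    return X_tilde


def flip_rate(predict, X, sens_idx):
    """Fraction of instances whose prediction changes when only the sensitive
    attribute is altered (an individual-level, label-free quantity that can
    also be averaged over a whole group)."""
    return float(np.mean(predict(X) != predict(flip_sensitive(X, sens_idx))))


def majority_vote(members, X):
    """Unweighted majority vote of binary classifiers."""
    votes = np.stack([m.predict(X) for m in members])  # (n_members, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)


# Toy data: column 0 is binarised and treated as the sensitive attribute.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X[:, 0] = (X[:, 0] > 0).astype(float)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Bagging-style ensemble of shallow trees (a stand-in for any ensemble combiner).
rng = np.random.default_rng(0)
members = []
for _ in range(11):
    idx = rng.integers(0, len(X_tr), size=len(X_tr))  # bootstrap resample
    members.append(DecisionTreeClassifier(max_depth=4).fit(X_tr[idx], y_tr[idx]))

avg_member_dr = np.mean([flip_rate(m.predict, X_te, 0) for m in members])
ensemble_dr = flip_rate(lambda Z: majority_vote(members, Z), X_te, 0)

print(f"average member flip rate : {avg_member_dr:.3f}")
print(f"ensemble flip rate       : {ensemble_dr:.3f}")
```

A flip-rate estimate of this kind could in principle also serve as a criterion for pruning an ensemble, for example by greedily keeping members whose removal lowers the ensemble's flip rate without hurting accuracy; the paper's actual pruning method is not detailed in the abstract.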
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Intersectional Two-sided Fairness in Recommendation [41.96733939002468]
We propose a novel approach called Intersectional Two-sided Fairness Recommendation (ITFR).
Our method utilizes a sharpness-aware loss to perceive disadvantaged groups, and then uses collaborative loss balance to develop consistent distinguishing abilities for different intersectional groups.
Our proposed approach effectively alleviates the intersectional two-sided unfairness and consistently outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2024-02-05T08:56:24Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation [17.495053606192375]
When using machine learning to aid decision-making, it is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
In cases where systematic differences between groups play a significant role in outcomes, these methods may overlook the influence of non-protected variables.
arXiv Detail & Related papers (2023-05-29T15:41:12Z)
- Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established area of research in machine learning (ML).
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The prevalence of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning (a generic sketch of computing equalized-odds gaps appears after this list).
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Fair Federated Learning via Bounded Group Loss [37.72259706322158]
We propose a general framework for provably fair federated learning.
We extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness.
We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution.
arXiv Detail & Related papers (2022-03-18T23:11:54Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
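
As a generic illustration of the equalized odds notion mentioned in the Conditional Supervised Contrastive Learning entry above (not tied to that paper's method), the sketch below computes equalized-odds gaps, i.e. the absolute differences in true-positive and false-positive rates between two groups. The function name, variable names, and toy data are assumptions for the example.

```python
# Minimal sketch: equalized odds asks that TPR and FPR be equal across groups
# defined by a sensitive attribute; the gaps below measure the violation.
import numpy as np


def equalized_odds_gaps(y_true, y_pred, group):
    """Return (TPR gap, FPR gap) between two groups encoded as 0/1 in `group`."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = []
    for g in (0, 1):
        mask = group == g
        pos, neg = (y_true == 1) & mask, (y_true == 0) & mask
        tpr = y_pred[pos].mean() if pos.any() else 0.0  # true-positive rate
        fpr = y_pred[neg].mean() if neg.any() else 0.0  # false-positive rate
        rates.append((tpr, fpr))
    (tpr0, fpr0), (tpr1, fpr1) = rates
    return abs(tpr0 - tpr1), abs(fpr0 - fpr1)


# Toy usage: a classifier satisfying equalized odds has both gaps near zero.
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(equalized_odds_gaps(y_true, y_pred, group))
```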