Increasing Fairness via Combination with Learning Guarantees
- URL: http://arxiv.org/abs/2301.10813v3
- Date: Wed, 25 Oct 2023 19:44:27 GMT
- Title: Increasing Fairness via Combination with Learning Guarantees
- Authors: Yijun Bian, Kun Zhang, Anqi Qiu, Nanguang Chen
- Abstract summary: We propose a fairness quality measure named discriminative risk to reflect both individual and group fairness aspects.
We also propose first- and second-order oracle bounds to show that fairness can be boosted via ensemble combination with theoretical learning guarantees.
- Score: 8.314000998551865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The concern about underlying discrimination hidden in machine learning (ML) models is increasing, as ML systems are applied in ever more real-world scenarios and any discrimination hidden in them directly affects human life. Many techniques have been developed to enhance fairness, including commonly used group fairness measures and several fairness-aware methods based on ensemble learning. However, existing fairness measures focus on only one aspect, either group or individual fairness, and the hard compatibility between the two means that biases may remain even when one of them is satisfied. Moreover, existing mechanisms to boost fairness usually present empirical results to show validity, yet few of them discuss whether fairness can be boosted with theoretical guarantees. To address these issues, we propose a fairness quality measure named discriminative risk that reflects both individual and group fairness aspects. Furthermore, we investigate the properties of the proposed measure and establish first- and second-order oracle bounds showing that fairness can be boosted via ensemble combination with theoretical learning guarantees. The analysis applies to both binary and multi-class classification. A pruning method that utilises the proposed measure is also introduced, and comprehensive experiments are conducted to evaluate the effectiveness of the proposed methods.
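Since the abstract hinges on a single quantity, a compact sketch may help make it concrete. The snippet below is a minimal illustration, not the authors' implementation: it estimates discriminative risk as the empirical probability that a model's predicted label changes when only a (binary, for simplicity) sensitive attribute is perturbed, and compares a plurality-vote ensemble against its members. The helper names, the column index `s_idx`, and the toy classifiers are assumptions introduced for illustration.

```python
# Minimal sketch (not the authors' released code) of estimating the paper's
# discriminative risk (DR): the empirical probability that a model's
# prediction changes when only the sensitive attribute is perturbed.
# The helper names, `s_idx`, and the toy classifiers are illustrative
# assumptions, not details taken from the paper.
import numpy as np

def flip_sensitive(X: np.ndarray, s_idx: int) -> np.ndarray:
    """Copy X and flip a binary (0/1) sensitive attribute in column s_idx."""
    X_tilde = X.copy()
    X_tilde[:, s_idx] = 1 - X_tilde[:, s_idx]
    return X_tilde

def discriminative_risk(predict, X: np.ndarray, s_idx: int) -> float:
    """Empirical DR: fraction of instances whose predicted label changes
    under the sensitive-attribute perturbation (binary or multi-class)."""
    return float(np.mean(predict(X) != predict(flip_sensitive(X, s_idx))))

def majority_vote(members, X: np.ndarray) -> np.ndarray:
    """Plurality vote over the members' integer label predictions."""
    votes = np.stack([m(X) for m in members])  # shape (n_members, n_samples)
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy demo: two members ignore the sensitive column (index 1); one relies on
# it entirely, so its DR is 1 while the combined vote's DR stays well below 1.
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
X[:, 1] = (X[:, 1] > 0.5).astype(float)  # binarise the sensitive column
members = [
    lambda X: (X[:, 0] > 0.3).astype(int),
    lambda X: (X[:, 0] > 0.7).astype(int),
    lambda X: X[:, 1].astype(int),  # fully sensitive-dependent member
]
mean_dr = float(np.mean([discriminative_risk(m, X, 1) for m in members]))
vote_dr = discriminative_risk(lambda Z: majority_vote(members, Z), X, 1)
print(f"mean member DR = {mean_dr:.2f}, majority-vote DR = {vote_dr:.2f}")
```

On this toy data the vote's risk stays far below that of the most discriminatory member, which is the qualitative effect the paper's first- and second-order oracle bounds make precise.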
Related papers
- Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly [2.002741592555996]
Existing techniques for assessing the discrimination level of machine learning models include commonly used group and individual fairness measures.
We propose a "harmonic fairness measure via manifold (HFM)" based on distances between sets.
Empirical results indicate that the proposed fairness measure HFM is valid and that the proposed ApproxDist is effective and efficient.
arXiv Detail & Related papers (2024-05-15T11:07:40Z)
- Intersectional Two-sided Fairness in Recommendation [41.96733939002468]
We propose a novel approach called Intersectional Two-sided Fairness Recommendation (ITFR).
Our method utilizes a sharpness-aware loss to perceive disadvantaged groups, and then uses collaborative loss balance to develop consistent distinguishing abilities for different intersectional groups.
Our proposed approach effectively alleviates the intersectional two-sided unfairness and consistently outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2024-02-05T08:56:24Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods for ensuring different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established area of research in machine learning (ML).
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- FaiREE: Fair Classification with Finite-Sample and Distribution-Free Guarantee [40.10641140860374]
FaiREE is a fair classification algorithm that can satisfy group fairness constraints with finite-sample and distribution-free theoretical guarantees.
FaiREE is shown to have favorable performance over state-of-the-art algorithms.
arXiv Detail & Related papers (2022-11-28T05:16:20Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Fair Federated Learning via Bounded Group Loss [37.72259706322158]
We propose a general framework for provably fair federated learning.
We extend the notion of Bounded Group Loss as a theoretically-grounded approach for group fairness.
We provide convergence guarantees for the method as well as fairness guarantees for the resulting solution.
arXiv Detail & Related papers (2022-03-18T23:11:54Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)