Black Loans Matter: Distributionally Robust Fairness for Fighting
Subgroup Discrimination
- URL: http://arxiv.org/abs/2012.01193v1
- Date: Fri, 27 Nov 2020 21:04:07 GMT
- Title: Black Loans Matter: Distributionally Robust Fairness for Fighting
Subgroup Discrimination
- Authors: Mark Weber, Mikhail Yurochkin, Sherif Botros, Vanio Markov
- Abstract summary: Algorithmic fairness in lending relies on group fairness metrics for monitoring statistical parity across protected groups.
This approach is vulnerable to subgroup discrimination by proxy, carrying significant risks of legal and reputational damage for lenders.
We motivate this problem against the backdrop of historical and residual racism in the United States polluting all available training data.
- Score: 23.820606347327686
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic fairness in lending today relies on group fairness metrics for
monitoring statistical parity across protected groups. This approach is
vulnerable to subgroup discrimination by proxy, carrying significant risks of
legal and reputational damage for lenders and blatantly unfair outcomes for
borrowers. Practical challenges arise from the many possible combinations and
subsets of protected groups. We motivate this problem against the backdrop of
historical and residual racism in the United States polluting all available
training data and raising public sensitivity to algorithmic bias. We review
the current regulatory compliance protocols for fairness in lending and discuss
their limitations relative to the contributions state-of-the-art fairness
methods may afford. We propose a solution for addressing subgroup
discrimination, while adhering to existing group fairness requirements, from
recent developments in individual fairness methods and corresponding fair
metric learning algorithms.
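The abstract points to individual fairness with a learned fair metric as the route around subgroup (proxy) discrimination. As a rough illustration of that idea only, and not the paper's actual algorithm, the sketch below audits a trained credit scorer for individual-fairness violations: it perturbs each applicant along directions in feature space that correlate with protected attributes and flags cases where the score moves materially. Every name and parameter here (sensitive_directions, audit_individual_fairness, eps, tol) is an illustrative assumption, not something taken from the paper.

```python
import numpy as np

# Hedged sketch, not the paper's method: an individual-fairness audit for a
# trained scorer. We assume a "fair metric" in which movement along directions
# correlated with protected attributes (a sensitive subspace) should be
# irrelevant, so predictions should barely change along those directions.

def sensitive_directions(X, protected):
    """Estimate feature-space directions correlated with protected attributes
    by regressing each protected column on the features (illustrative only)."""
    dirs = []
    for j in range(protected.shape[1]):
        w, *_ = np.linalg.lstsq(X, protected[:, j], rcond=None)
        dirs.append(w / (np.linalg.norm(w) + 1e-12))
    return np.stack(dirs)  # shape: (n_protected, n_features)

def audit_individual_fairness(predict_proba, X, directions, eps=0.5, tol=0.05):
    """Flag individuals whose score moves more than `tol` when the input is
    perturbed by `eps` along any sensitive direction."""
    base = predict_proba(X)
    violations = np.zeros(len(X), dtype=bool)
    for d in directions:
        for sign in (+1.0, -1.0):
            shifted = predict_proba(X + sign * eps * d)
            violations |= np.abs(shifted - base) > tol
    return violations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                                   # applicant features
    protected = (X[:, :1] + rng.normal(size=(200, 1)) > 0).astype(float)  # proxy-correlated attribute
    coef = rng.normal(size=5)
    predict = lambda Z: 1.0 / (1.0 + np.exp(-Z @ coef))             # toy scorer
    dirs = sensitive_directions(X, protected)
    flags = audit_individual_fairness(predict, X, dirs)
    print(f"{flags.mean():.1%} of applicants violate the individual-fairness check")
```

Training the scorer to be robust against such perturbations, rather than only auditing after the fact, is the distributionally robust flavour the title alludes to; the specific perturbation set and thresholds above are assumptions for illustration.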
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
  We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
  We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
  arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Federated Fairness without Access to Sensitive Groups [12.888927461513472]
  Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
  We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
  arXiv Detail & Related papers (2024-02-22T19:24:59Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
  This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
  We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
  arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation [17.495053606192375]
  It is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
  Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
  The confounding factors, which are non-protected variables but manifest systematic differences, can significantly affect fairness evaluation.
  arXiv Detail & Related papers (2023-05-29T15:41:12Z)
- Robust Fair Clustering: A Novel Fairness Attack and Defense Framework [33.87395800206783]
  We propose a novel black-box fairness attack against fair clustering algorithms.
  We find that state-of-the-art models are highly susceptible to our attack as it can reduce their fairness performance significantly.
  We also propose Consensus Fair Clustering (CFC), the first robust fair clustering approach.
  arXiv Detail & Related papers (2022-10-04T23:00:20Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
  We analyze the intersection of fairness in machine learning and healthcare disparities.
  We provide a critical review of the associated fairness metrics from a machine learning standpoint.
  We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
  arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
  We develop a fair representation learning algorithm which is able to map individuals belonging to different groups in a single group.
  We show experimentally that our methodology is competitive with other fair representation learning algorithms.
  arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
  We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
  We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
  arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
  We study multi-group fairness in machine learning (MultiFair).
  We propose a generic end-to-end algorithmic framework to solve it.
  Our proposed framework is generalizable to many different settings.
  arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Distributional Individual Fairness in Clustering [7.303841123034983]
  We introduce a framework for assigning individuals, embedded in a metric space, to probability distributions over a bounded number of cluster centers.
  We provide an algorithm for clustering with $p$-norm objective and individual fairness constraints with provable approximation guarantee.
  arXiv Detail & Related papers (2020-06-22T20:02:09Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
  We study the consequences of naively relying on noisy protected group labels.
  We introduce two new approaches using robust optimization.
  We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
  arXiv Detail & Related papers (2020-02-21T14:58:37Z)