Gender Animus Can Still Exist Under Favorable Disparate Impact: a
Cautionary Tale from Online P2P Lending
- URL: http://arxiv.org/abs/2210.07864v3
- Date: Sun, 14 May 2023 07:31:06 GMT
- Title: Gender Animus Can Still Exist Under Favorable Disparate Impact: a
Cautionary Tale from Online P2P Lending
- Authors: Xudong Shen, Tianhui Tan, Tuan Q. Phan, Jussi Keppo
- Abstract summary: This paper investigates gender discrimination and its underlying drivers on a prominent Chinese online peer-to-peer (P2P) lending platform.
We measure a broadened discrimination notion called disparate impact (DI), which encompasses any disparity in the loan's funding rate that is not commensurate with the actual return rate.
We also find that the overall female favoritism can be explained by one specific discrimination driver, rational statistical discrimination.
- Score: 1.4731169524644787
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper investigates gender discrimination and its underlying drivers on a
prominent Chinese online peer-to-peer (P2P) lending platform. While existing
studies on P2P lending focus on disparate treatment (DT), DT narrowly
recognizes direct discrimination and overlooks indirect and proxy
discrimination, providing an incomplete picture. In this work, we measure a
broadened discrimination notion called disparate impact (DI), which encompasses
any disparity in the loan's funding rate that is not commensurate with the
actual return rate. We develop a two-stage predictor substitution approach to
estimate DI from observational data. Our findings reveal (i) female borrowers,
given identical actual return rates, are 3.97% more likely to receive funding,
(ii) at least 37.1% of this DI favoring female borrowers is indirect or proxy
discrimination, and (iii) DT indeed underestimates the overall female
favoritism by 44.6%. However, we also find that the overall female favoritism
can be explained by one specific discrimination driver, rational statistical
discrimination, wherein investors accurately predict the expected return rate
from imperfect observations. Furthermore, female borrowers still require a 2%
higher expected return rate to secure funding, indicating that another driver,
taste-based discrimination, co-exists and works against female borrowers. These
results together tell a cautionary tale: on one hand, P2P lending provides a
valuable alternative credit market where affirmative action supporting female
borrowers naturally emerges from the rational crowd; on the other hand, while
the overall discrimination effect (in terms of both DI and DT) favors female
borrowers, a concerning taste-based discrimination can persist and can be
obscured by other co-existing discrimination drivers, such as statistical
discrimination.
Related papers
- Gender Bias and Property Taxes [50.18156030818883] (2024-12-17): We analyze records of more than 100,000 property tax appeal hearings and more than 2.7 years of associated audio recordings. Female appellants fare systematically worse than male appellants in their hearings. Our results are consistent with the idea that gender biases are driven, at least in part, by unvoiced beliefs and perceptions on the part of ARB panelists.
- Privacy-Preserving Orthogonal Aggregation for Guaranteeing Gender Fairness in Federated Recommendation [18.123459468576648] (2024-11-29): We study whether federated recommendation systems can achieve group fairness under stringent privacy constraints. We propose Privacy-Preserving Orthogonal Aggregation (PPOA), which employs a secure aggregation scheme and a quantization technique. Experimental results show PPOA enhances recommendation effectiveness for both females and males by up to 8.25% and 6.36%, respectively.
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205] (2024-11-06): We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias. Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning. We conclude with recommendations tailored to DPO and broader alignment practices.
- Evaluating Gender, Racial, and Age Biases in Large Language Models: A Comparative Analysis of Occupational and Crime Scenarios [0.0] (2024-09-22): This paper examines bias in Large Language Models (LLMs). Findings reveal that LLMs often depict female characters more frequently than male ones in various occupations. Efforts to reduce gender and racial bias often lead to outcomes that over-index one sub-class.
- GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321] (2024-08-22): Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases. GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144] (2024-06-30): This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models. Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs. Our findings reveal widespread gender biases in existing LVLMs.
- Multi-dimensional discrimination in Law and Machine Learning -- A comparative overview [14.650860450187793] (2023-02-12): The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. In reality, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic. Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain.
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914] (2022-05-19): We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models. Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains. These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
- Marrying Fairness and Explainability in Supervised Learning [0.0] (2022-04-06): We formalize direct discrimination as a direct causal effect of the protected attributes on the decisions. We find that state-of-the-art fair learning methods can induce discrimination via association or reverse discrimination. We propose to nullify the influence of the protected attribute on the output of the system while preserving the influence of the remaining features.
- Context-Aware Discrimination Detection in Job Vacancies using Computational Language Models [0.0] (2022-02-02): Discriminatory job vacancies are disapproved of worldwide but remain persistent. Discriminatory job vacancies can be explicit, directly referring to the demographic memberships of candidates. Implicit forms of discrimination are also present that may not always be illegal but still influence the diversity of applicants.
- Statistical discrimination in learning agents [64.78141757063142] (2021-10-21): Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture. We show that less discrimination emerges with agents that use recurrent neural networks and when their training environment has less bias.
- Discrimination of POVMs with rank-one effects [62.997667081978825] (2020-02-13): This work provides insight into the problem of discrimination of positive operator-valued measures with rank-one effects. We compare two possible discrimination schemes: the parallel and adaptive ones. We provide an explicit algorithm which allows us to find this adaptive scheme.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.