The Gendered Algorithm: Navigating Financial Inclusion & Equity in AI-facilitated Access to Credit
- URL: http://arxiv.org/abs/2504.07312v2
- Date: Fri, 11 Apr 2025 21:16:09 GMT
- Title: The Gendered Algorithm: Navigating Financial Inclusion & Equity in AI-facilitated Access to Credit
- Authors: Genevieve Smith
- Abstract summary: A growing trend in financial technology (fintech) is the use of mobile phone data and machine learning (ML) to provide credit scores. This paper draws on interview data with leaders, investors, and data scientists at companies developing ML-based alternative lending apps. It examines how the underlying logics, design choices, and management decisions by developers embed or challenge gender biases.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A growing trend in financial technology (fintech) is the use of mobile phone data and machine learning (ML) to provide credit scores, and subsequently opportunities to access loans, to groups left out of traditional banking. This paper draws on interview data with leaders, investors, and data scientists at fintech companies developing ML-based alternative lending apps in low- and middle-income countries to explore the implications for financial inclusion and gender. More specifically, it examines how the underlying logics, design choices, and management decisions of fintechs' ML-based alternative lending tools embed or challenge gender biases, and consequently influence gender equity in access to finance. Findings reveal that developers follow 'gender blind' approaches, grounded in beliefs that ML is objective and that data reflects the truth. This leads to a lack of grappling with the ways in which data, features for creditworthiness, and access to apps are gendered. Overall, the tools increase access to finance, but not gender equitably: interviewees report that fewer women access loans and that women receive lower amounts than men, despite being better repayers. Fintechs identify demand- and supply-side reasons for these gender differences, but frame them as outside their responsibility. However, the observation that women are better repayers reveals a market inefficiency and a potential discriminatory effect, further linked to profit optimization objectives. This research introduces the concept of encoded gender norms, whereby, without explicit attention to the gendered nature of data and algorithmic design, AI tools reproduce existing inequalities. In doing so, they reinforce gender norms as self-fulfilling prophecies. The idea that AI is inherently objective and, when left alone, 'fair' is seductive and misleading. In reality, algorithms reflect the perspectives, priorities, and values of the people and institutions that design them.
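The disparity pattern the abstract reports (fewer women receiving loans, and at lower amounts, despite better repayment) can be surfaced with a simple gender-disaggregated audit of lending outcomes. The sketch below is illustrative only and is not from the paper; the column names (gender, approved, loan_amount, repaid) and the flag logic are hypothetical assumptions about what a lender's logs might contain.

```python
import pandas as pd

def gender_audit(df: pd.DataFrame) -> pd.DataFrame:
    # Summarize lending outcomes by gender. Assumes hypothetical columns:
    # gender ('F'/'M'), approved (0/1), loan_amount (approved loans only,
    # NaN otherwise), repaid (0/1 for approved loans, NaN otherwise).
    return df.groupby("gender").agg(
        approval_rate=("approved", "mean"),
        mean_amount=("loan_amount", "mean"),   # NaNs are skipped by mean
        repayment_rate=("repaid", "mean"),
    )

def flags_market_inefficiency(summary: pd.DataFrame) -> bool:
    # The pattern the interviewees describe: a group that repays at a
    # higher rate yet is approved less often or for smaller amounts.
    f, m = summary.loc["F"], summary.loc["M"]
    better_repayers = f["repayment_rate"] > m["repayment_rate"]
    worse_access = (f["approval_rate"] < m["approval_rate"]
                    or f["mean_amount"] < m["mean_amount"])
    return bool(better_repayers and worse_access)
```

If the flag fires, the lender is leaving demonstrably good borrowers underserved. A 'gender blind' pipeline never runs this check, because gender is deliberately absent from the data it inspects, which is precisely how the encoded gender norms the paper describes go unnoticed.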
Related papers
- Deep Learning Approaches for Anti-Money Laundering on Mobile Transactions: Review, Framework, and Directions [51.43521977132062]
Money laundering is a financial crime that obscures the origin of illicit funds.
The proliferation of mobile payment platforms and smart IoT devices has significantly complicated anti-money laundering investigations.
This paper conducts a comprehensive review of deep learning solutions and the challenges associated with their use in AML.
arXiv Detail & Related papers (2025-03-13T05:19:44Z) - The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z) - Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs).
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To eliminate gender bias in these models, we find that fine-tuning-based debiasing methods achieve the best trade-off between debiasing and retaining performance.
arXiv Detail & Related papers (2024-10-25T05:59:44Z) - Gender Bias of LLM in Economics: An Existentialism Perspective [1.024113475677323]
This paper investigates gender bias in large language models (LLMs)
LLMs reinforce gender stereotypes even without explicit gender markers.
We argue that bias in LLMs is not an unintended flaw but a systematic result of their rational processing.
arXiv Detail & Related papers (2024-10-14T01:42:01Z) - Gender Biases in LLMs: Higher intelligence in LLM does not necessarily solve gender bias and stereotyping [0.0]
Large Language Models (LLMs) are finding applications in all aspects of life, but their susceptibility to biases, particularly gender stereotyping, raises ethical concerns.
This study introduces a novel methodology, a persona-based framework, and a unisex name methodology to investigate whether higher-intelligence LLMs reduce such biases.
arXiv Detail & Related papers (2024-09-30T05:22:54Z) - GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases.
GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z) - GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models (LVLMs).
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z) - Fair Models in Credit: Intersectional Discrimination and the Amplification of Inequity [5.333582981327497]
The authors demonstrate the impact of algorithmic bias in the microfinance context.
We find that in addition to legally protected characteristics, sensitive attributes such as single parent status and number of children can result in imbalanced harm.
arXiv Detail & Related papers (2023-08-01T10:34:26Z) - Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a growing consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - Gender Animus Can Still Exist Under Favorable Disparate Impact: a Cautionary Tale from Online P2P Lending [1.4731169524644787]
This paper investigates gender discrimination and its underlying drivers on a prominent Chinese online peer-to-peer (P2P) lending platform.
We measure a broadened notion of discrimination called disparate impact (DI), which encompasses any disparity in the loan's funding rate that is not commensurate with the actual return rate (one way to operationalize this is sketched below).
We also find that the overall female favoritism can be explained by one specific discrimination driver: rational statistical discrimination.
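The DI notion above invites a worked example. The following is a minimal sketch of one plausible operationalization, not the paper's actual estimator; the function and its inputs are hypothetical, assuming group-level funding and return rates are observable.

```python
def disparate_impact(funding_rate_f, funding_rate_m,
                     return_rate_f, return_rate_m):
    # One plausible reading of the paper's DI notion (not the authors'
    # estimator): the portion of the funding-rate gap that is not
    # commensurate with the actual return-rate gap.
    funding_gap = funding_rate_f - funding_rate_m  # raw disparity in funding
    return_gap = return_rate_f - return_rate_m     # performance-justified gap
    return funding_gap - return_gap                # unjustified residual

# Hypothetical numbers: women are funded 2 points more often, but their
# loans return 5 points more; the negative residual suggests the observed
# favoritism is smaller than repayment performance alone would merit.
print(disparate_impact(0.52, 0.50, 0.15, 0.10))  # ~ -0.03
```

Under this reading, a nominally 'favorable' funding gap can coexist with animus, which is the caution in the paper's title.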
arXiv Detail & Related papers (2022-10-14T14:38:39Z) - Gender Stereotype Reinforcement: Measuring the Gender Bias Conveyed by Ranking Algorithms [68.85295025020942]
We propose the Gender Stereotype Reinforcement (GSR) measure, which quantifies the tendency of a search engine to support gender stereotypes.
GSR is the first measure specifically tailored to Information Retrieval that is capable of quantifying representational harms.
arXiv Detail & Related papers (2020-09-02T20:45:04Z) - Crowd, Lending, Machine, and Bias [10.440847890315293]
We show that a reasonably sophisticated machine learning algorithm predicts listing default probability more accurately than crowd investors.
When machine prediction is used to select loans, it leads to a higher rate of return for investors and more funding opportunities for borrowers.
We propose a general and effective "debiasing" method that can be applied to any prediction-focused ML application.
arXiv Detail & Related papers (2020-07-20T01:26:00Z)