Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic
Fairness Research with U.S. Fair Lending Regulation
- URL: http://arxiv.org/abs/2210.02516v1
- Date: Wed, 5 Oct 2022 19:23:29 GMT
- Title: Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic
Fairness Research with U.S. Fair Lending Regulation
- Authors: I. Elizabeth Kumar, Keegan E. Hines, John P. Dickerson
- Abstract summary: Credit is an essential component of financial wellbeing in America.
Machine learning algorithms are increasingly being used to determine access to credit.
Research has shown that machine learning can encode many different versions of "unfairness."
- Score: 27.517669481719388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Credit is an essential component of financial wellbeing in America, and
unequal access to it is a large factor in the economic disparities between
demographic groups that exist today. Today, machine learning algorithms,
sometimes trained on alternative data, are increasingly being used to determine
access to credit, yet research has shown that machine learning can encode many
different versions of "unfairness," thus raising the concern that banks and
other financial institutions could -- potentially unwittingly -- engage in
illegal discrimination through the use of this technology. In the US, there are
laws in place to make sure discrimination does not happen in lending and
agencies charged with enforcing them. However, conversations around fair credit
models in computer science and in policy are often misaligned: fair machine
learning research often lacks legal and practical considerations specific to
existing fair lending policy, and regulators have yet to issue new guidance on
how, if at all, credit risk models should be utilizing practices and techniques
from the research community. This paper aims to better align these sides of the
conversation. We describe the current state of credit discrimination regulation
in the United States, contextualize results from fair ML research to identify
the specific fairness concerns raised by the use of machine learning in
lending, and discuss regulatory opportunities to address these concerns.
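As a minimal, hypothetical illustration of two of the "versions of unfairness" the abstract refers to, the sketch below (not from the paper; data and names are invented) computes a demographic-parity gap and an equal-opportunity gap for a binary credit-approval model:

```python
import numpy as np

def group_fairness_report(y_true, y_pred, group):
    """Compare a credit model's approvals across two demographic groups.

    y_true: 1 = applicant repaid, 0 = defaulted (ground truth)
    y_pred: 1 = model approves, 0 = model denies
    group:  0/1 demographic indicator (illustrative only)
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    a, b = group == 0, group == 1

    # Demographic parity: are approval rates similar across groups?
    dp_gap = abs(y_pred[a].mean() - y_pred[b].mean())

    # Equal opportunity: among applicants who would repay, are approval
    # rates similar across groups?
    eo_gap = abs(y_pred[a & (y_true == 1)].mean() -
                 y_pred[b & (y_true == 1)].mean())

    return {"demographic_parity_gap": dp_gap, "equal_opportunity_gap": eo_gap}

# Toy usage with synthetic data
rng = np.random.default_rng(0)
n = 1_000
group = rng.integers(0, 2, n)
y_true = rng.integers(0, 2, n)
y_pred = (rng.random(n) < 0.55 - 0.10 * group).astype(int)  # group 1 approved less often
print(group_fairness_report(y_true, y_pred, group))
```

A model can close one of these gaps while leaving the other open, which is one reason the abstract speaks of many different versions of "unfairness" rather than a single test.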
Related papers
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
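The auditing methodology from the entry above is specific to ad delivery on Meta's platform and is not reproduced here. Purely as a generic, hypothetical sketch of testing whether an ad was delivered to two racial groups at different rates, a two-proportion z-test could look like this:

```python
from math import erfc, sqrt

def delivery_disparity_test(shown_a, audience_a, shown_b, audience_b):
    """Two-proportion z-test on per-group ad delivery rates."""
    p_a, p_b = shown_a / audience_a, shown_b / audience_b
    pooled = (shown_a + shown_b) / (audience_a + audience_b)
    se = sqrt(pooled * (1 - pooled) * (1 / audience_a + 1 / audience_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value under the normal approximation
    return z, p_value

# Hypothetical counts: the ad reached 4.1% of group A vs. 2.6% of group B.
print(delivery_disparity_test(shown_a=820, audience_a=20_000,
                              shown_b=520, audience_b=20_000))
```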
- Regulation and NLP (RegNLP): Taming Large Language Models [51.41095330188972]
We argue how NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z)
- Fair Models in Credit: Intersectional Discrimination and the Amplification of Inequity [5.333582981327497]
The authors demonstrate the impact of such algorithmic bias in the microfinance context.
We find that, in addition to legally protected characteristics, sensitive attributes such as single-parent status and number of children can result in imbalanced harm.
arXiv Detail & Related papers (2023-08-01T10:34:26Z)
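To make the intersectionality point from the entry above concrete, the hypothetical pandas sketch below (toy data, not the paper's microfinance data) tabulates approval rates per intersectional cell; a large spread across cells, rather than along any single attribute, is the kind of imbalanced harm the authors describe:

```python
import pandas as pd

# Toy loan decisions with one protected attribute and one sensitive,
# legally unprotected attribute.
df = pd.DataFrame({
    "approved":      [1, 0, 1, 0, 1, 0, 0, 1],
    "gender":        ["f", "f", "m", "m", "f", "f", "m", "m"],
    "single_parent": [1, 1, 0, 0, 0, 1, 1, 0],
})

# Approval rate and group size for each intersectional cell.
print(df.groupby(["gender", "single_parent"])["approved"].agg(["mean", "size"]))
```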
- Beyond Incompatibility: Trade-offs between Mutually Exclusive Fairness Criteria in Machine Learning and Law [2.959308758321417]
We present a novel algorithm (FAir Interpolation Method: FAIM) for continuously interpolating between three fairness criteria.
We demonstrate the effectiveness of our algorithm when applied to synthetic data, the COMPAS data set, and a new, real-world data set from the e-commerce sector.
arXiv Detail & Related papers (2022-12-01T12:47:54Z)
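FAIM itself interpolates among three criteria and its details are not given in the summary above. As a loose, hypothetical illustration of the general idea with just two criteria, the sketch below uses a weight lam to trade off a demographic-parity gap against an equal-opportunity gap when choosing per-group decision thresholds; it is not the authors' algorithm.

```python
import numpy as np

def gaps(scores, y, group, thr_a, thr_b):
    """Selection-rate gap (demographic parity) and TPR gap (equal opportunity)
    when group 0 uses threshold thr_a and group 1 uses thr_b. Inputs are
    NumPy arrays."""
    a, b = group == 0, group == 1
    pred = np.where(a, scores >= thr_a, scores >= thr_b)
    dp = abs(pred[a].mean() - pred[b].mean())
    eo = abs(pred[a & (y == 1)].mean() - pred[b & (y == 1)].mean())
    return dp, eo

def interpolate_criteria(scores, y, group, lam, grid=np.linspace(0.05, 0.95, 19)):
    """Pick per-group thresholds minimizing lam*dp + (1 - lam)*eo, so lam acts
    as a knob that slides between the two incompatible criteria."""
    best = min(((lam * dp + (1 - lam) * eo, thr_a, thr_b)
                for thr_a in grid for thr_b in grid
                for dp, eo in [gaps(scores, y, group, thr_a, thr_b)]),
               key=lambda t: t[0])
    return best[1], best[2]
```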
- Developing a Philosophical Framework for Fair Machine Learning: Lessons From The Case of Algorithmic Collusion [0.0]
As machine learning algorithms are applied in new contexts, the harms and injustices that result are qualitatively different.
The existing research paradigm in machine learning which develops metrics and definitions of fairness cannot account for these qualitatively different types of injustice.
I propose an ethical framework for researchers and practitioners in machine learning seeking to develop and apply fairness metrics.
arXiv Detail & Related papers (2022-07-05T16:21:56Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
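Vertical equity asks whether burdens track ability to pay. As a minimal, hypothetical check in that spirit (toy schema, not the IRS data or the paper's methodology), one can compare audit rates across income deciles:

```python
import pandas as pd

def audit_rate_by_income_decile(df, n_bins=10):
    """Audit rate within each income decile.

    Expects a DataFrame with a numeric 'income' column and a binary
    'audited' column (hypothetical schema).
    """
    deciles = pd.qcut(df["income"], q=n_bins, labels=False, duplicates="drop")
    return df.groupby(deciles)["audited"].mean()
```

A profile that shifts sharply across deciles when swapping in a more flexible model is the kind of change in vertical equity the entry above examines.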
- A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions [4.594159253008448]
A large portion of fairness research has gone to producing tools that machine learning practitioners can use to audit for bias while designing their algorithms.
There is a lack of application of these fairness solutions in practice.
This review provides an in-depth summary of the algorithmic bias issues that have been defined and the fairness solution space that has been proposed.
arXiv Detail & Related papers (2021-12-10T17:51:20Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial amount of methods for providing interpretable explanations to machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Affirmative Algorithms: The Legal Grounds for Fairness as Awareness [0.0]
We discuss how such approaches will likely be deemed "algorithmic affirmative action"
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
arXiv Detail & Related papers (2020-12-18T22:53:20Z)
- Super-App Behavioral Patterns in Credit Risk Models: Financial, Statistical and Regulatory Implications [110.54266632357673]
We present the impact of alternative data originating from an app-based marketplace, in contrast to traditional bureau data, on credit scoring models.
Our results, validated across two countries, show that these new sources of data are particularly useful for predicting financial behavior in low-wealth and young individuals.
arXiv Detail & Related papers (2020-05-09T01:32:03Z)
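Purely as a hypothetical sketch of how a claim like the one above can be checked (invented array names, not the paper's data or models), one can compare a bureau-only scorecard against one augmented with alternative features, overall and for the young subgroup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def subgroup_auc_lift(X_bureau, X_alt, y, young):
    """AUC lift from adding alternative features, overall and for young applicants.

    In-sample for brevity; a real comparison would use a held-out split.
    """
    y = np.asarray(y)
    young = np.asarray(young, dtype=bool)

    def auc(X):
        scores = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
        return roc_auc_score(y, scores), roc_auc_score(y[young], scores[young])

    base_all, base_young = auc(X_bureau)
    aug_all, aug_young = auc(np.hstack([X_bureau, X_alt]))
    return {"overall_auc_lift": aug_all - base_all,
            "young_auc_lift": aug_young - base_young}
```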
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.