Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making
- URL: http://arxiv.org/abs/2401.08691v1
- Date: Sat, 13 Jan 2024 14:07:09 GMT
- Title: Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making
- Authors: Alessandro Castelnovo
- Abstract summary: "Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
- Score: 69.44075077934914
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In an era characterized by the pervasive integration of artificial
intelligence into decision-making processes across diverse industries, the
demand for trust has never been more pronounced. This thesis embarks on a
comprehensive exploration of bias and fairness, with a particular emphasis on
their ramifications within the banking sector, where AI-driven decisions bear
substantial societal consequences. In this context, the seamless integration of
fairness, explainability, and human oversight is of utmost importance,
culminating in the establishment of what is commonly referred to as
"Responsible AI". This emphasizes the critical nature of addressing biases
within the development of a corporate culture that aligns seamlessly with both
AI regulations and universal human rights standards, particularly in the realm
of automated decision-making systems. Nowadays, embedding ethical principles
into the development, training, and deployment of AI models is crucial for
compliance with forthcoming European regulations and for promoting societal
good. This thesis is structured around three fundamental pillars: understanding
bias, mitigating bias, and accounting for bias. These contributions are
validated through their practical application in real-world scenarios, in
collaboration with Intesa Sanpaolo. This collaborative effort not only
contributes to our understanding of fairness but also provides practical tools
for the responsible implementation of AI-based decision-making systems. In line
with open-source principles, we have released Bias On Demand and FairView as
accessible Python packages, further promoting progress in the field of AI
fairness.
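To make the thesis's first pillar, understanding bias, concrete, the sketch below computes two widely used group-fairness metrics (demographic parity difference and equal opportunity difference) on a hypothetical loan-approval dataset. The data, group labels, and the biased toy model are illustrative assumptions made for this summary; they are not taken from the thesis, from Intesa Sanpaolo, or from the Bias On Demand and FairView packages.

```python
import numpy as np

# Illustrative (assumed) loan-approval data for two demographic groups;
# none of this comes from the thesis or its released packages.
rng = np.random.default_rng(42)
n = 1_000
group = rng.integers(0, 2, size=n)                 # protected attribute: 0 = group A, 1 = group B
y_true = rng.integers(0, 2, size=n)                # ground truth: 1 = loan was repaid
approve_prob = np.where(group == 0, 0.70, 0.55)    # toy model biased against group B
y_pred = (rng.random(n) < approve_prob).astype(int)  # model decision: 1 = approve

def demographic_parity_difference(y_pred, group):
    """Absolute gap in approval rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates (approvals among actual repayers)."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print(f"Demographic parity difference: {demographic_parity_difference(y_pred, group):.3f}")
print(f"Equal opportunity difference:  {equal_opportunity_difference(y_true, y_pred, group):.3f}")
```

Values close to zero indicate approval rates and true-positive rates that are nearly equal across groups; a real analysis would compute such metrics on actual lending data and combine them with the mitigation and accountability steps the thesis describes, for instance via the released packages or established fairness toolkits.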
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems [2.444630714797783]
We review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias.
We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making.
arXiv Detail & Related papers (2024-08-28T06:04:25Z)
- A Review of the Ethics of Artificial Intelligence and its Applications in the United States [0.0]
The paper highlights the impact AI has in every sector of the US economy and the resultant effect on entities spanning businesses, government, academia, and civil society.
Our discussion explores eleven fundamental 'ethical principles' structured as overarching themes.
These themes collectively encompass Transparency, Justice, Fairness, Equity, Non-Maleficence, Responsibility, Accountability, Privacy, Beneficence, Freedom, Autonomy, Trust, Dignity, Sustainability, and Solidarity.
arXiv Detail & Related papers (2023-10-09T14:29:00Z)
- Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation [22.921683578188645]
We argue that attaining truly trustworthy AI concerns the trustworthiness of all processes and actors that are part of the system's life cycle.
A more holistic vision contemplates four essential axes, among them the global principles for ethical use and development of AI-based systems, a philosophical take on AI ethics, and a risk-based approach to AI regulation.
Our multidisciplinary vision of trustworthy AI culminates in a debate on the diverging views published lately about the future of AI.
arXiv Detail & Related papers (2023-05-02T09:49:53Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Toward a Theory of Justice for Artificial Intelligence [2.28438857884398]
The proposed theory holds that the basic structure of society should be understood as a composite of socio-technical systems.
As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts.
arXiv Detail & Related papers (2021-10-27T13:23:38Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- An Ecosystem Approach to Ethical AI and Data Use: Experimental Reflections [0.0]
This paper offers a methodology to identify the needs of AI practitioners when it comes to confronting and resolving ethical challenges.
We offer a grassroots approach to operational ethics based on dialog and mutualised responsibility.
arXiv Detail & Related papers (2020-12-27T07:41:26Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, at universities, of Ethics Committees or Commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)