Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities
- URL: http://arxiv.org/abs/2111.02244v1
- Date: Wed, 3 Nov 2021 14:11:37 GMT
- Title: Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities
- Authors: Ouren Kuiper, Martin van den Berg, Joost van den Burgt, Stefan Leijnen
- Abstract summary: The aim of this study was to investigate the perspectives of supervisory authorities and regulated entities regarding the application of xAI in the financial sector.
We found that for the investigated use cases a disparity exists between supervisory authorities and banks regarding the desired scope of explainability of AI systems.
We argue that the financial sector could benefit from clear differentiation between technical AI (model) explainability requirements and explainability requirements of the broader AI system in relation to applicable laws and regulations.
- Score: 0.3670422696827526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable artificial intelligence (xAI) is seen as a solution to making AI
systems less of a black box. It is essential to ensure transparency, fairness,
and accountability, which are especially paramount in the financial sector. The
aim of this study was a preliminary investigation of the perspectives of
supervisory authorities and regulated entities regarding the application of xAI
in the financial sector. Three use cases (consumer credit, credit risk, and
anti-money laundering) were examined using semi-structured interviews at three
banks and two supervisory authorities in the Netherlands. We found that for the
investigated use cases a disparity exists between supervisory authorities and
banks regarding the desired scope of explainability of AI systems. We argue
that the financial sector could benefit from clear differentiation between
technical AI (model) explainability requirements and explainability
requirements of the broader AI system in relation to applicable laws and
regulations.
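
As a minimal, hypothetical sketch of what technical AI (model) explainability can mean in the consumer-credit use case (synthetic data and feature names are assumptions for illustration; the paper prescribes no implementation), a linear credit model admits per-applicant feature attributions:

```python
# Hypothetical sketch: local feature contributions for a linear credit model.
# Data and feature names are synthetic; this is not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "payment_delays"]
X = rng.normal(size=(500, 3))
# Synthetic ground truth: debt and payment delays raise default risk, income lowers it.
y = (1.2 * X[:, 2] + 0.8 * X[:, 1] - 0.5 * X[:, 0]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Model-level (local) explanation: each feature's contribution to the
# log-odds of a single applicant's decision under the linear model.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>15}: {c:+.3f}")
```

Explainability of the broader AI system, by contrast, also covers the surrounding decision process, data governance, and the applicable legal basis, none of which a model-level attribution like the one above captures.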
Related papers
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers.
Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
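As a hedged illustration only (the paper calls for standardized reports and rules of engagement but the summary fixes no schema; every field name below is invented), a standardized AI flaw report might be represented as a simple data structure:

```python
# Hypothetical sketch of a standardized AI flaw report as a data structure.
# All field names are invented for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FlawReport:
    system: str                 # affected GPAI system and version
    summary: str                # one-line description of the flaw
    reproduction_steps: str     # how a third party can reproduce it
    severity: str               # e.g. "low" / "medium" / "high"
    reported_on: date = field(default_factory=date.today)

report = FlawReport(
    system="example-model-v1",
    summary="Prompt injection bypasses content filter",
    reproduction_steps="See attached transcript",
    severity="high",
)
print(report)
```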
arXiv Detail & Related papers (2025-03-21T05:09:46Z)
- Regulating AI in Financial Services: Legal Frameworks and Compliance Challenges [0.0]
This article examines the evolving landscape of artificial intelligence (AI) regulation in financial services.
It highlights how AI-driven processes, from fraud detection to algorithmic trading, offer efficiency gains yet introduce significant risks.
The study compares regulatory approaches across major jurisdictions such as the European Union, United States, and United Kingdom.
arXiv Detail & Related papers (2025-03-17T14:29:09Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
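A minimal sketch of the kind of evolutionary-game-theoretic modelling the summary refers to (the two strategies and the payoff matrix are invented for illustration; the paper's actual model, which also incorporates media and LLM-based analysis, differs):

```python
# Hypothetical replicator-dynamics sketch for a two-strategy population of
# AI developers ("comply" vs "cut corners") under a regulatory regime.
# The payoff matrix is invented for illustration.
import numpy as np

payoff = np.array([[3.0, 1.0],   # comply vs (comply, cut corners)
                   [4.0, 0.0]])  # cut corners vs (comply, cut corners)

x = np.array([0.9, 0.1])         # initial strategy shares
dt = 0.01
for _ in range(20_000):
    fitness = payoff @ x
    x += dt * x * (fitness - x @ fitness)  # replicator equation
print({"comply": round(x[0], 3), "cut corners": round(x[1], 3)})
```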
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Unlocking the Black Box: Analysing the EU Artificial Intelligence Act's Framework for Explainability in AI [0.0]
The need for eXplainable AI (XAI) is evident in fields such as healthcare, credit scoring, policing and the criminal justice system.
At the EU level, the notion of explainability is one of the fundamental principles that underpin the AI Act.
This paper explores various approaches and techniques that promise to advance XAI, as well as the challenges of implementing the principle of explainability in AI governance and policies.
arXiv Detail & Related papers (2025-01-24T16:30:19Z)
- Public vs Private Bodies: Who Should Run Advanced AI Evaluations and Audits? A Three-Step Logic Based on Case Studies of High-Risk Industries [0.5573180584719433]
This paper draws from nine such regimes to inform who should audit which parts of advanced AI.
The effective responsibility distribution between public and private auditors depends heavily on specific industry and audit conditions.
Public bodies' capacity should scale with the industry's risk level, size and market concentration.
arXiv Detail & Related papers (2024-07-30T14:25:08Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Auditing of AI: Legal, Ethical and Technical Approaches [0.0]
AI auditing is a rapidly growing field of research and practice.
Different approaches to AI auditing have different affordances and constraints.
The next step in the evolution of auditing as an AI governance mechanism should be the interlinking of these available approaches.
arXiv Detail & Related papers (2024-07-07T12:49:58Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
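As a generic, hedged illustration of the "understanding bias" pillar (this is not the API of the released Bias On Demand or FairView packages), one common fairness metric, the demographic parity difference between two groups, can be computed directly:

```python
# Illustrative bias measurement on synthetic decisions; not taken from
# Bias On Demand or FairView.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)                           # protected attribute
approved = rng.random(1000) < np.where(group == 1, 0.55, 0.45)  # synthetic decisions

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
print(f"approval rates: group 0 = {rate_0:.3f}, group 1 = {rate_1:.3f}")
print(f"demographic parity difference = {abs(rate_1 - rate_0):.3f}")
```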
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- The AI Revolution: Opportunities and Challenges for the Finance Sector [12.486180180030964]
The application of AI in the financial sector is transforming the industry.
However, along with these benefits, AI also presents several challenges.
These include issues related to transparency, interpretability, fairness, accountability, and trustworthiness.
The use of AI in the financial sector further raises critical questions about data privacy and security.
Despite the global recognition of this need, there remains a lack of clear guidelines or legislation for AI use in finance.
arXiv Detail & Related papers (2023-08-31T08:30:09Z)
- Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK [1.5039745292757671]
We perform the first thematic and gap analysis of policies and standards on explainability in the EU, US, and UK.
We find that policies are often informed by coarse notions and requirements for explanations.
We propose recommendations on how to address explainability in regulations for AI systems.
arXiv Detail & Related papers (2023-04-20T07:53:07Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Accountability in AI: From Principles to Industry-specific Accreditation [4.033641609534416]
Recent AI-related scandals have shed a spotlight on accountability in AI.
This paper draws on literature from public policy and governance to make two contributions.
arXiv Detail & Related papers (2021-10-08T16:37:11Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and accepts no responsibility for any consequences of its use.