Towards Responsible AI for Financial Transactions
- URL: http://arxiv.org/abs/2206.02419v1
- Date: Mon, 6 Jun 2022 08:29:47 GMT
- Title: Towards Responsible AI for Financial Transactions
- Authors: Charl Maree and Jan Erik Modal and Christian W. Omlin
- Abstract summary: We provide an explanation for a deep neural network that is trained on a mixture of numerical, categorical and textual inputs for financial transaction classification.
We then test the robustness of the model by exposing it to a targeted evasion attack.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The application of AI in finance is increasingly dependent on the
principles of responsible AI. These principles - explainability, fairness,
privacy, accountability, transparency, and soundness - form the basis for
trust in future AI systems. In this study, we address the first principle by
providing an explanation for a deep neural network that is trained on a
mixture of numerical, categorical, and textual inputs for financial
transaction classification. The explanation is achieved through (1) a feature
importance analysis using Shapley additive explanations (SHAP) and (2) a
hybrid approach of text clustering and decision tree classifiers. We then test
the robustness of the model by exposing it to a targeted evasion attack,
leveraging the knowledge we gained about the model through the extracted
explanation.
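To make step (1) concrete, below is a minimal sketch of a SHAP feature-importance analysis over a mixed-input transaction classifier. The toy data, the feature names (`amount`, `merchant_type`), and the `MLPClassifier` stand-in are illustrative assumptions, not the authors' actual model or dataset; the model-agnostic `KernelExplainer` is used so that attributions land on the raw feature columns rather than on one-hot encoded ones.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.compose import ColumnTransformer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy transaction data with one numerical and one categorical feature
# (stand-ins for the paper's transaction inputs).
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "amount": rng.lognormal(3.0, 1.0, 500),
    "merchant_type": rng.choice(["grocery", "travel", "online"], 500),
})
y = (X["amount"] > X["amount"].median()).astype(int)  # synthetic labels

# Small neural classifier over preprocessed mixed inputs.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["amount"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["merchant_type"]),
])
model = Pipeline([
    ("pre", preprocess),
    ("mlp", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                          random_state=0)),
])
model.fit(X, y)

# Wrap the pipeline so SHAP perturbs the raw columns and the preprocessing
# is re-applied on every evaluation.
def predict_positive(data):
    df = pd.DataFrame(data, columns=X.columns)
    df["amount"] = df["amount"].astype(float)
    return model.predict_proba(df)[:, 1]

background = X.sample(50, random_state=0)
explainer = shap.KernelExplainer(predict_positive, background)
shap_values = explainer.shap_values(X.iloc[:20])

# Rank features by mean absolute SHAP value (global importance).
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")
```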
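Step (2), the hybrid of text clustering and decision tree classifiers, can be sketched in the same spirit: cluster free-text transaction descriptions, then fit a shallow tree on the cluster assignments plus a structured feature so that its rules serve as a human-readable surrogate. All data and names below are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical free-text transaction descriptions plus one structured feature.
descriptions = [
    "card payment grocery store", "grocery market purchase",
    "airline ticket booking", "hotel and travel booking",
    "online subscription service", "online streaming payment",
] * 20
amounts = np.tile([25.0, 30.0, 480.0, 320.0, 12.0, 10.0], 20)
labels = np.tile([0, 0, 1, 1, 0, 0], 20)  # stand-in class labels

# Cluster the text into a handful of topics via TF-IDF + k-means.
tfidf = TfidfVectorizer().fit_transform(descriptions)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(tfidf)

# Fit a shallow, interpretable tree on [text cluster, amount] and print its
# decision rules as the human-readable part of the explanation.
features = np.column_stack([clusters, amounts])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(features, labels)
print(export_text(tree, feature_names=["text_cluster", "amount"]))
```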
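Finally, the targeted evasion attack of step (3) can be illustrated by perturbing whichever feature the SHAP analysis flagged as most influential until the predicted label flips. This sketch reuses `model` and `X` from the SHAP snippet above; the greedy multiplicative perturbation and the `evade` helper are hypothetical, not the paper's actual attack.

```python
def evade(model, row, feature="amount", step=0.9, max_iter=50):
    """Greedily shrink one influential feature until the prediction flips."""
    x = row.copy()
    original = int(model.predict(x.to_frame().T)[0])
    for _ in range(max_iter):
        x[feature] *= step  # small multiplicative perturbation (assumed)
        if int(model.predict(x.to_frame().T)[0]) != original:
            return x  # evasion succeeded: same transaction, new label
    return None  # attack failed within the perturbation budget

# Attack a transaction the model currently assigns the positive class.
positive = X[model.predict(X) == 1].iloc[0]
adversarial = evade(model, positive)
print("evasion succeeded" if adversarial is not None else "evasion failed")
```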
Related papers
- Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review [12.38351931894004]
We present the first systematic literature review of explainable methods for safe and trustworthy autonomous driving.
We identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation.
We propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
arXiv Detail & Related papers (2024-02-08T09:08:44Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough to encode the structural constraints necessary for counterfactual reasoning.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- Accountability in AI: From Principles to Industry-specific Accreditation [4.033641609534416]
Recent AI-related scandals have shone a spotlight on accountability in AI.
This paper draws on literature from public policy and governance to make two contributions.
arXiv Detail & Related papers (2021-10-08T16:37:11Z)
- Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning with Shapley Values [68.8204255655161]
This study proposes a novel approach to explain cooperative strategies in multiagent RL using Shapley values.
Results could have implications for non-discriminatory decision making, ethical and responsible AI-derived decisions, or policy making under fairness constraints.
arXiv Detail & Related papers (2021-10-04T10:28:57Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that must be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals [7.727206277914709]
We propose a principled causality-based approach for explaining black-box decision-making systems.
We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm.
We show how such counterfactuals can provide actionable recourse for individuals negatively affected by the algorithm's decision.
arXiv Detail & Related papers (2021-03-22T16:20:21Z)
- Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification [33.026285180536036]
This paper proposes a novel methodology for producing plausible counterfactual explanations.
It also explores the regularization benefits of adversarial training on language models in the domain of FinTech.
arXiv Detail & Related papers (2020-10-23T16:29:26Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.