Aiding Humans in Financial Fraud Decision Making: Toward an XAI-Visualization Framework
- URL: http://arxiv.org/abs/2408.14552v1
- Date: Mon, 26 Aug 2024 18:10:07 GMT
- Title: Aiding Humans in Financial Fraud Decision Making: Toward an XAI-Visualization Framework
- Authors: Angelos Chatzimparmpas, Evanthia Dimara
- Abstract summary: Financial fraud investigators face the challenge of manually synthesizing vast amounts of unstructured information.
Current Visual Analytics systems primarily support isolated aspects of this process.
We propose a framework where the VA system supports decision makers throughout all stages of financial fraud investigation.
- Score: 6.040452803295326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI prevails in financial fraud detection and decision making. Yet, due to concerns about biased automated decision making or profiling, regulations mandate that final decisions are made by humans. Financial fraud investigators face the challenge of manually synthesizing vast amounts of unstructured information, including AI alerts, transaction histories, social media insights, and governmental laws. Current Visual Analytics (VA) systems primarily support isolated aspects of this process, such as explaining binary AI alerts and visualizing transaction patterns, thus adding yet another layer of information to the overall complexity. In this work, we propose a framework where the VA system supports decision makers throughout all stages of financial fraud investigation, including data collection, information synthesis, and human criteria iteration. We illustrate how VA can claim a central role in AI-aided decision making, ensuring that human judgment remains in control while minimizing potential biases and labor-intensive tasks.
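The three stages named in the abstract (data collection, information synthesis, human criteria iteration) suggest a simple data model. The sketch below is a minimal, hypothetical Python rendering; all class and field names are our own assumptions, not artifacts of the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Evidence:
    """One piece of case evidence (AI alert, transaction record, note, ...)."""
    source: str        # e.g., "ai_alert", "transactions", "social_media", "regulation"
    content: str
    weight: float = 1.0

@dataclass
class FraudCase:
    case_id: str
    evidence: List[Evidence] = field(default_factory=list)

    # Stage 1: data collection -- gather evidence from heterogeneous sources.
    def collect(self, source: str, content: str, weight: float = 1.0) -> None:
        self.evidence.append(Evidence(source, content, weight))

    # Stage 2: information synthesis -- summarize evidence per source for VA views.
    def synthesize(self) -> Dict[str, float]:
        summary: Dict[str, float] = {}
        for ev in self.evidence:
            summary[ev.source] = summary.get(ev.source, 0.0) + ev.weight
        return summary

    # Stage 3: human criteria iteration -- the investigator supplies and revises
    # scoring criteria; the final judgment stays with the human.
    def score(self, criteria: Dict[str, Callable[[float], float]]) -> float:
        synthesis = self.synthesize()
        return sum(fn(synthesis.get(src, 0.0)) for src, fn in criteria.items())

case = FraudCase("case-001")
case.collect("ai_alert", "model flagged transaction burst", weight=0.9)
case.collect("transactions", "10 transfers to new accounts in 1 hour", weight=0.7)
criteria = {"ai_alert": lambda w: 0.5 * w, "transactions": lambda w: 1.0 * w}
print(f"suspicion score: {case.score(criteria):.2f}")  # investigator reviews, then iterates
```

The key design point is that score() never auto-closes a case: it only re-ranks evidence under criteria the investigator supplies and can revise, which keeps human judgment in control as the abstract requires.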
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Explainable Automated Machine Learning for Credit Decisions: Enhancing Human Artificial Intelligence Collaboration in Financial Engineering [0.0]
This paper explores the integration of Explainable Automated Machine Learning (AutoML) in the realm of financial engineering.
The focus is on how AutoML can streamline the development of robust machine learning models for credit scoring.
The findings underscore the potential of explainable AutoML in improving the transparency and accountability of AI-driven financial decisions.
arXiv Detail & Related papers (2024-02-06T08:47:16Z)
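As a concrete, entirely illustrative rendering of the transparency goal above (not the paper's own pipeline), the following sketch trains a credit-scoring model on synthetic data and reports model-agnostic feature importances with scikit-learn's permutation importance; the feature names and data-generating process are invented for the example.

```python
# Hypothetical illustration: explainable credit scoring on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(50, 15, n)           # invented applicant features
debt_ratio = rng.uniform(0, 1, n)
late_payments = rng.poisson(1.0, n)
X = np.column_stack([income, debt_ratio, late_payments])
# Synthetic "default" label driven mostly by debt ratio and late payments.
logit = -2.0 + 3.0 * debt_ratio + 0.8 * late_payments - 0.01 * income
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Model-agnostic explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["income", "debt_ratio", "late_payments"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Permutation importance is just one model-agnostic explanation tool that could slot into such a pipeline; the paper itself addresses explainable AutoML for credit scoring more broadly.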
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
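A toy version of the mediation setup above, under our own simplifying assumptions rather than the paper's algorithm: the intermediary forwards the human's decision by default and escalates to the (oracle) expert only when a probabilistic model is highly confident the human erred.

```python
# Toy decision mediator: arbitrate between an imperfect human and an oracle expert.
# All thresholds and models here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def expert(x: float) -> int:          # stand-in for (oracle) expert behavior
    return int(x > 0.5)

def human(x: float) -> int:           # imperfect human: 20% chance of error
    label = expert(x)
    return label if rng.uniform() > 0.2 else 1 - label

def mediator(x: float, human_choice: int, threshold: float = 0.9) -> int:
    """Defer to the human unless model confidence that they erred is high."""
    p_positive = 1 / (1 + np.exp(-12 * (x - 0.5)))   # toy probabilistic model
    confidence = p_positive if human_choice == 0 else 1 - p_positive
    return expert(x) if confidence > threshold else human_choice

xs = rng.uniform(size=10_000)
mediated = np.array([mediator(x, human(x)) for x in xs])
truth = np.array([expert(x) for x in xs])
print(f"mediated accuracy: {(mediated == truth).mean():.3f}")  # above the unaided ~0.8
```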
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
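The disparity on observed data that the framework above starts from can be illustrated with the simplest observational quantity, the total variation (demographic parity gap). The toy computation below is our own illustration, not the paper's Fairness Map.

```python
# Illustrative disparity measure: total variation (demographic parity gap),
# TV = P(Y=1 | A=1) - P(Y=1 | A=0), computed on synthetic observed data.
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
A = rng.integers(0, 2, n)                    # protected attribute (group 0 / group 1)
W = rng.normal(loc=0.3 * A, size=n)          # covariate partly shaped by group membership
Y = (0.4 * A + 0.6 * W + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

tv = Y[A == 1].mean() - Y[A == 0].mean()
print(f"observed disparity (TV): {tv:.3f}")
# The observed gap mixes a direct effect of A with an indirect path through W;
# causal fairness analysis attributes the gap to such underlying mechanisms,
# which the observational TV by itself cannot distinguish.
```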
- The Conflict Between Explainable and Accountable Decision-Making Algorithms [10.64167691614925]
Decision-making algorithms are being used in important decisions, such as who should be enrolled in health care programs and who should be hired.
The XAI initiative aims to make algorithms explainable to comply with legal requirements, promote trust, and maintain accountability.
This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems.
arXiv Detail & Related papers (2022-05-11T07:19:28Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
To understand the decision-making processes underlying a set of observed trajectories, we cast policy inference as the inverse of this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
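A stripped-down illustration of the inversion idea above, far simpler than the paper's algorithm: simulate an agent that updates its belief online with an unknown learning rate, then retrospectively recover that rate from the observed decision trajectory by grid-search maximum likelihood.

```python
# Toy inverse online learning: recover an agent's unknown learning rate
# from its observed decisions. Purely illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(3)

def simulate(eta: float, outcomes: np.ndarray) -> np.ndarray:
    """Agent tracks P(event) by exponential smoothing and acts on its belief."""
    belief, decisions = 0.5, []
    for o in outcomes:
        decisions.append(int(rng.uniform() < belief))  # stochastic action
        belief += eta * (o - belief)                   # online belief update
    return np.array(decisions)

def log_likelihood(eta: float, outcomes: np.ndarray, decisions: np.ndarray) -> float:
    """Replay the belief trajectory implied by eta and score the decisions."""
    belief, ll = 0.5, 0.0
    for o, d in zip(outcomes, decisions):
        p = belief if d == 1 else 1.0 - belief
        ll += np.log(max(p, 1e-12))
        belief += eta * (o - belief)
    return ll

outcomes = (rng.uniform(size=3000) < 0.7).astype(float)  # environment events
eta_true = 0.05
decisions = simulate(eta_true, outcomes)

# Retrospective estimation: maximum likelihood over a grid of candidate rates.
grid = np.linspace(0.005, 0.2, 40)
eta_hat = grid[int(np.argmax([log_likelihood(e, outcomes, decisions) for e in grid]))]
print(f"true eta = {eta_true}, estimated eta = {eta_hat:.3f}")
```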
- AI Assurance using Causal Inference: Application to Public Policy [0.0]
Most AI approaches can only be represented as "black boxes" and suffer from a lack of transparency.
It is crucial not only to develop effective and robust AI systems, but to make sure their internal processes are explainable and fair.
arXiv Detail & Related papers (2021-12-01T16:03:06Z)
- Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems [1.7403133838762448]
This paper introduces reviewability as a framework for improving the accountability of automated and algorithmic decision-making (ADM).
We draw on an understanding of ADM as a socio-technical process involving both human and technical elements, beginning before a decision is made and extending beyond the decision itself.
We argue that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward toward a more holistic and legally relevant form of accountability for ADM.
arXiv Detail & Related papers (2021-01-26T18:15:34Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
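In the spirit of the testbed above (a hypothetical mock-up, not the paper's data), one can consciously inject a gender penalty into synthetic hiring scores and check that a model trained on them reproduces the gap.

```python
# Hypothetical mock-up: inject gender bias into synthetic recruitment scores
# and show that a trained model inherits it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 10_000
gender = rng.integers(0, 2, n)              # 0/1 protected attribute
skill = rng.normal(size=n)                  # legitimate qualification signal
biased_score = skill - 0.5 * gender         # consciously injected penalty
hired = (biased_score + rng.normal(scale=0.3, size=n) > 0).astype(int)

# Even without gender as an input, correlated proxies could leak bias;
# here we include it to make the inherited gap explicit.
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)
gap = pred[gender == 0].mean() - pred[gender == 1].mean()
print(f"hiring-rate gap reproduced by the model: {gap:.3f}")
```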
- Bias in Data-driven AI Systems -- An Introductory Survey [37.34717604783343]
This survey focuses on data-driven AI, as a large part of AI is nowadays powered by (big) data and powerful Machine Learning (ML) algorithms.
Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, etc.
arXiv Detail & Related papers (2020-01-14T09:39:09Z)