Explainability in AI Based Applications: A Framework for Comparing Different Techniques
- URL: http://arxiv.org/abs/2410.20873v1
- Date: Mon, 28 Oct 2024 09:45:34 GMT
- Title: Explainability in AI Based Applications: A Framework for Comparing Different Techniques
- Authors: Arne Grobrugge, Nidhi Mishra, Johannes Jakubik, Gerhard Satzger
- Abstract summary: In business applications, the challenge lies in selecting an appropriate explainability method that balances comprehensibility with accuracy.
This paper proposes a novel method for assessing the agreement of different explainability techniques.
By providing a practical framework for understanding the agreement of diverse explainability techniques, our research aims to facilitate the broader integration of interpretable AI systems in business applications.
- Score: 2.5874041837241304
- Abstract: The integration of artificial intelligence into business processes has significantly enhanced decision-making capabilities across various industries such as finance, healthcare, and retail. However, explaining the decisions made by these AI systems poses a significant challenge due to the opaque nature of recent deep learning models, which typically function as black boxes. To address this opacity, a multitude of explainability techniques have emerged. However, in practical business applications, the challenge lies in selecting an appropriate explainability method that balances comprehensibility with accuracy. This paper addresses the practical need to understand differences in the output of explainability techniques by proposing a novel method for assessing the agreement of different explainability techniques. Based on our proposed method, we provide a comprehensive comparative analysis of six leading explainability techniques to help guide the selection of such techniques in practice. Our proposed general-purpose method is evaluated on top of one of the most popular deep learning architectures, the Vision Transformer model, which is frequently employed in business applications. Notably, we propose a novel metric to measure the agreement of explainability techniques that can be interpreted visually. By providing a practical framework for understanding the agreement of diverse explainability techniques, our research aims to facilitate the broader integration of interpretable AI systems in business applications.
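The abstract does not spell out the authors' agreement metric. A common way to quantify agreement between two attribution (saliency) maps is the overlap of their top-k most salient regions; the sketch below illustrates that general idea with a pairwise intersection-over-union matrix. The technique names (`attention`, `gradcam`, `lime`) and the top-k IoU choice are illustrative assumptions, not the paper's actual metric.

```python
import numpy as np

def topk_agreement(map_a, map_b, k=0.1):
    """Intersection-over-union of the top-k fraction of most salient
    positions in two attribution maps of the same shape."""
    a, b = map_a.ravel(), map_b.ravel()
    n = max(1, int(k * a.size))
    top_a = set(np.argsort(a)[-n:])   # indices of the n most salient positions
    top_b = set(np.argsort(b)[-n:])
    return len(top_a & top_b) / len(top_a | top_b)

def agreement_matrix(maps):
    """Pairwise agreement between attribution maps produced by different
    explainability techniques; `maps` is a {name: 2-D array} dict."""
    names = list(maps)
    m = np.eye(len(names))            # a technique agrees fully with itself
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            m[i, j] = m[j, i] = topk_agreement(maps[names[i]], maps[names[j]])
    return names, m

# Toy example: random attribution maps for three hypothetical techniques,
# e.g. over 14x14 Vision Transformer patches.
rng = np.random.default_rng(0)
maps = {t: rng.random((14, 14)) for t in ("attention", "gradcam", "lime")}
names, m = agreement_matrix(maps)
```

The resulting symmetric matrix can be rendered as a heatmap, which matches the abstract's emphasis on an agreement measure "that can be interpreted visually".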
Related papers
- Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning [59.98430756337374]
Supervised fine-tuning enhances the problem-solving abilities of language models across various mathematical reasoning tasks.
Our work introduces a novel technique aimed at cultivating a deeper understanding of the training problems at hand.
We propose reflective augmentation, a method that embeds problem reflection into each training instance.
arXiv Detail & Related papers (2024-06-17T19:42:22Z) - Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z) - Interpretable and Explainable Machine Learning Methods for Predictive Process Monitoring: A Systematic Literature Review [1.3812010983144802]
This paper presents a systematic review on the explainability and interpretability of machine learning (ML) models within the context of predictive process mining.
We provide a comprehensive overview of the current methodologies and their applications across various application domains.
Our findings aim to equip researchers and practitioners with a deeper understanding of how to develop and implement more trustworthy, transparent, and effective intelligent systems for process analytics.
arXiv Detail & Related papers (2023-12-29T12:43:43Z) - A Comprehensive Review on Financial Explainable AI [29.229196780505532]
We provide a comparative survey of methods that aim to improve the explainability of deep learning models within the context of finance.
We categorize the collection of explainable AI methods according to their corresponding characteristics.
We review the concerns and challenges of adopting explainable AI methods, together with future directions we deemed appropriate and important.
arXiv Detail & Related papers (2023-09-21T10:30:49Z) - Helpful, Misleading or Confusing: How Humans Perceive Fundamental Building Blocks of Artificial Intelligence Explanations [11.667611038005552]
We take a step back from sophisticated predictive algorithms and look into explainability of simple decision-making models.
We aim to assess how people perceive the comprehensibility of their different representations.
This allows us to capture how diverse stakeholders judge intelligibility of fundamental concepts that more elaborate artificial intelligence explanations are built from.
arXiv Detail & Related papers (2023-03-02T03:15:35Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives: first, to expose these challenges and how they affect the use of relevant and novel explanation methods;
and second, to present a set of strategies that mitigate such challenges when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z) - SIDU: Similarity Difference and Uniqueness Method for Explainable AI [21.94600656231124]
This paper presents a novel visual explanation method for deep learning networks in the form of a saliency map.
The proposed method produces promising visual explanations that can gain greater trust from human experts.
arXiv Detail & Related papers (2020-06-04T20:33:40Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.