Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with
Ground Truth Explanations Datasets
- URL: http://arxiv.org/abs/2311.01961v1
- Date: Fri, 3 Nov 2023 14:57:24 GMT
- Title: Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with
Ground Truth Explanations Datasets
- Authors: M. Miró-Nicolau, A. Jaume-i-Capó, G. Moyà-Alcover
- Abstract summary: XAI methods based on backpropagating output information to the input yield higher accuracy and reliability than methods relying on sensitivity analysis or Class Activation Maps (CAM).
However, backpropagation-based methods tend to generate noisier saliency maps.
These findings have significant implications for the advancement of XAI methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The evaluation of the fidelity of eXplainable Artificial Intelligence (XAI)
methods to their underlying models is a challenging task, primarily due to the
absence of a ground truth for explanations. However, assessing fidelity is a
necessary step for ensuring a correct XAI methodology. In this study, we
conduct a fair and objective comparison of the current state-of-the-art XAI
methods by introducing three novel image datasets with reliable ground truth
for explanations. The primary objective of this comparison is to identify
methods with low fidelity and eliminate them from further research, thereby
promoting the development of more trustworthy and effective XAI techniques. Our
results demonstrate that XAI methods based on the backpropagation of output
information to input yield higher accuracy and reliability compared to methods
relying on sensitivity analysis or Class Activation Maps (CAM). However,
backpropagation-based methods tend to generate noisier saliency maps. These
findings have significant implications for the advancement of XAI methods,
enabling the elimination of erroneous explanations and fostering the
development of more robust and reliable XAI.
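As a minimal sketch of what such a fidelity check against ground-truth explanations could look like, the snippet below compares a 2-D attribution map with a binary ground-truth mask using IoU and Pearson correlation. The metric choices and the top-k binarization are illustrative assumptions, not necessarily the paper's exact evaluation protocol.

```python
# Minimal sketch: score a saliency map against a ground-truth explanation mask.
# The IoU / correlation metrics and the top-k binarization are illustrative
# assumptions, not the paper's exact evaluation protocol.
import numpy as np


def fidelity_scores(saliency: np.ndarray, gt_mask: np.ndarray, top_k: float = 0.1):
    """saliency: 2-D attribution map; gt_mask: 2-D binary ground-truth mask;
    top_k: fraction of highest-attribution pixels treated as 'important'."""
    sal = np.abs(saliency).ravel()
    gt = gt_mask.ravel().astype(bool)

    # Binarize the saliency map by keeping its top-k fraction of pixels.
    pred = sal >= np.quantile(sal, 1.0 - top_k)

    # Overlap between predicted-important and truly important pixels.
    iou = np.logical_and(pred, gt).sum() / max(np.logical_or(pred, gt).sum(), 1)

    # Agreement between raw attribution values and the mask.
    corr = np.corrcoef(sal, gt.astype(float))[0, 1]
    return {"iou": float(iou), "correlation": float(corr)}


# Usage on one synthetic example; a benchmark would average over a dataset.
rng = np.random.default_rng(0)
saliency = rng.random((32, 32))
mask = np.zeros((32, 32))
mask[8:16, 8:16] = 1.0
print(fidelity_scores(saliency, mask))
```

Averaging such per-image scores over a dataset with known ground truth is what enables a fair, explainer-agnostic comparison of XAI methods.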
Related papers
- F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI [15.314388210699443]
Fine-tuned Fidelity (F-Fidelity) is a robust evaluation framework for XAI; a generic perturbation-based faithfulness sketch is given after this list.
We show that F-Fidelity significantly improves upon prior evaluation metrics in recovering the ground-truth ranking of explainers.
We also show that, given a faithful explainer, the F-Fidelity metric can be used to compute the sparsity of influential input components.
arXiv Detail & Related papers (2024-10-03T20:23:06Z)
- Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration [74.09687562334682]
We introduce a novel training data attribution method called Debias and Denoise Attribution (DDA).
Our method significantly outperforms existing approaches, achieving an average AUC of 91.64%.
DDA exhibits strong generality and scalability across various sources and different-scale models like LLaMA2, QWEN2, and Mistral.
arXiv Detail & Related papers (2024-10-02T07:14:26Z)
- Robustness of Explainable Artificial Intelligence in Industrial Process Modelling [43.388607981317016]
We evaluate current XAI methods by scoring them based on ground truth simulations and sensitivity analysis.
We show the differences between XAI methods in their ability to correctly predict the true sensitivity of the modeled industrial process.
arXiv Detail & Related papers (2024-07-12T09:46:26Z)
- A Generalization Theory of Cross-Modality Distillation with Contrastive Learning [49.35244441141323]
Cross-modality distillation arises as an important topic for data modalities containing limited knowledge.
We formulate a general framework of cross-modality contrastive distillation (CMCD), built upon contrastive learning.
Our algorithm outperforms existing algorithms consistently by a margin of 2-3% across diverse modalities and tasks.
arXiv Detail & Related papers (2024-05-06T11:05:13Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which demonstrates robust performance consistently with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under simple synthesis strategies, it outperforms existing methods by a large margin and also achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- REX: Rapid Exploration and eXploitation for AI Agents [103.68453326880456]
We propose an enhanced approach for Rapid Exploration and eXploitation for AI Agents called REX.
REX introduces an additional layer of rewards and integrates concepts similar to Upper Confidence Bound (UCB) scores, leading to more robust and efficient AI agent performance.
arXiv Detail & Related papers (2023-07-18T04:26:33Z)
- XAI-TRIS: Non-linear image benchmarks to quantify false positive post-hoc attribution of feature importance [1.3958169829527285]
A lack of formal underpinning leaves it unclear what conclusions can safely be drawn from the results of a given XAI method.
This means that challenging non-linear problems, typically solved by deep neural networks, presently lack appropriate remedies.
We show that popular XAI methods are often unable to significantly outperform random performance baselines and edge detection methods.
arXiv Detail & Related papers (2023-06-22T11:31:11Z)
- An Experimental Investigation into the Evaluation of Explainability Methods [60.54170260771932]
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produces highly correlated results, indicating potential redundancy.
arXiv Detail & Related papers (2023-05-25T08:07:07Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Data Representing Ground-Truth Explanations to Evaluate XAI Methods [0.0]
Explainable artificial intelligence (XAI) methods are currently evaluated with approaches that mostly originated in interpretable machine learning (IML) research.
We propose to represent explanations with canonical equations that can be used to evaluate the accuracy of XAI methods.
arXiv Detail & Related papers (2020-11-18T16:54:53Z)
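As referenced in the F-Fidelity entry above, removal-based faithfulness evaluations share a common core: occlude the features an explainer ranks highest and measure how much the model's prediction drops. The sketch below illustrates only that generic idea; the zero-baseline occlusion and the toy linear classifier are assumptions for illustration, and this is not the F-Fidelity procedure itself.

```python
# Generic perturbation-based faithfulness check (illustrative sketch only):
# occlude the top-ranked features and measure the drop in the target-class
# probability. A faithful explanation should produce a large drop.
import numpy as np


def prediction_drop(model, x: np.ndarray, attribution: np.ndarray,
                    target: int, fraction: float = 0.2) -> float:
    """Drop in the target-class probability after zeroing the top `fraction`
    of features ranked by `attribution`."""
    probs_full = model(x)
    ranked = np.argsort(np.abs(attribution).ravel())
    k = max(1, int(fraction * ranked.size))

    x_occluded = x.copy().ravel()
    x_occluded[ranked[-k:]] = 0.0        # zero baseline is an assumption
    probs_occluded = model(x_occluded.reshape(x.shape))

    return float(probs_full[target] - probs_occluded[target])


# Usage with a toy stand-in classifier (hypothetical, for illustration only).
rng = np.random.default_rng(1)
weights = rng.normal(size=(10, 64))      # 10-class linear model on 64 features

def model(x):
    logits = weights @ x.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()                   # softmax over class logits

x = rng.random(64)
attribution = weights[3] * x             # gradient*input for class 3
print(prediction_drop(model, x, attribution, target=3))
```

Frameworks such as F-Fidelity refine this basic recipe to make the resulting ranking of explainers more robust.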