SIDU: Similarity Difference and Uniqueness Method for Explainable AI
- URL: http://arxiv.org/abs/2006.03122v1
- Date: Thu, 4 Jun 2020 20:33:40 GMT
- Title: SIDU: Similarity Difference and Uniqueness Method for Explainable AI
- Authors: Satya M. Muddamsetty, Mohammad N. S. Jahromi, Thomas B. Moeslund
- Abstract summary: This paper presents a novel visual explanation method for deep learning networks in the form of a saliency map.
The proposed method shows quite promising visual explanations that can gain greater trust from human experts.
- Score: 21.94600656231124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new branch of technical artificial intelligence (Explainable AI) research
has focused on trying to open up the 'black box' and provide some
explainability. This paper presents a novel visual explanation method for deep
learning networks in the form of a saliency map that can effectively localize
entire object regions. In contrast to the current state-of-the-art methods, the
proposed method shows quite promising visual explanations that can gain greater
trust from human experts. Both quantitative and qualitative evaluations are
carried out on general and clinical data sets to confirm the effectiveness
of the proposed method.
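The abstract names the method but does not spell out its computation. As a rough illustration only, the sketch below shows one way a similarity-difference-and-uniqueness style weighting over feature-map masks could be assembled into a saliency map. The function names, the mean-threshold masking, the nearest-neighbour upsampling, and the kernel width sigma are all assumptions made for this sketch, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact formulation): weight feature-map
# masks by a similarity-difference term and a uniqueness term, then sum them
# into a single saliency map. Shapes, thresholds, and sigma are assumptions.
import numpy as np

def saliency_sketch(model_predict, image, feature_maps, sigma=0.25):
    """model_predict: callable mapping an image batch to class probabilities.
    image: (H, W, 3) input; feature_maps: (h, w, N) last-conv activations."""
    H, W, _ = image.shape
    h, w, n = feature_maps.shape

    # Build binary masks from each feature map and upsample them to image size.
    masks = []
    ry, rx = int(np.ceil(H / h)), int(np.ceil(W / w))
    for i in range(n):
        fmap = feature_maps[:, :, i]
        mask = (fmap > fmap.mean()).astype(np.float32)            # crude threshold
        mask = np.repeat(np.repeat(mask, ry, axis=0), rx, axis=1)  # nearest-neighbour upsample
        masks.append(mask[:H, :W])
    masks = np.stack(masks)                                        # (N, H, W)

    # Predictions for the original image and for each masked copy.
    p_orig = model_predict(image[None])[0]                         # (C,)
    p_masked = np.stack([model_predict((image * m[:, :, None])[None])[0]
                         for m in masks])                          # (N, C)

    # Similarity difference: masks whose predictions stay close to the original
    # prediction are assumed to cover regions relevant to the decision.
    sd = np.exp(-np.linalg.norm(p_masked - p_orig, axis=1) ** 2 / (2 * sigma ** 2))

    # Uniqueness: masks whose predictions differ most from the other masks'
    # predictions are assumed to carry more distinctive evidence.
    pairwise = np.linalg.norm(p_masked[:, None, :] - p_masked[None, :, :], axis=2)
    uniq = pairwise.sum(axis=1) / max(n - 1, 1)

    # Combine the two terms and aggregate the masks into one saliency map.
    weights = sd * uniq
    saliency = (weights[:, None, None] * masks).sum(axis=0)
    return saliency / (saliency.max() + 1e-8)
```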
Related papers
- Explainability in AI Based Applications: A Framework for Comparing Different Techniques [2.5874041837241304]
In business applications, the challenge lies in selecting an appropriate explainability method that balances comprehensibility with accuracy.
This paper proposes a novel method for the assessment of the agreement of different explainability techniques.
By providing a practical framework for understanding the agreement of diverse explainability techniques, our research aims to facilitate the broader integration of interpretable AI systems in business applications.
arXiv Detail & Related papers (2024-10-28T09:45:34Z)
- On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis [1.55858752644861]
The state of the art indicates that hidden node activations can, in some cases, be interpreted in a way that makes sense to humans.
We introduce a novel model-agnostic post-hoc Explainable AI method and demonstrate that it provides meaningful interpretations.
arXiv Detail & Related papers (2024-04-21T07:57:45Z)
- A Survey of Explainable Knowledge Tracing [14.472784840283099]
This paper thoroughly analyzes the interpretability of KT algorithms.
Current evaluation methods for explainable knowledge tracing are lacking.
This paper offers some insights into evaluation methods from the perspective of educational stakeholders.
arXiv Detail & Related papers (2024-03-12T03:17:59Z)
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- CRAFT: Concept Recursive Activation FacTorization for Explainability [5.306341151551106]
CRAFT is a novel approach to identify both "what" and "where" by generating concept-based explanations.
We conduct both human and computer vision experiments to demonstrate the benefits of the proposed approach.
arXiv Detail & Related papers (2022-11-17T14:22:47Z)
- Towards Better Understanding Attribution Methods [77.1487219861185]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We also propose a post-processing smoothing step that significantly improves the performance of some attribution methods.
arXiv Detail & Related papers (2022-05-20T20:50:17Z)
- Evaluation of Self-taught Learning-based Representations for Facial Emotion Recognition [62.30451764345482]
This work describes different strategies to generate unsupervised representations obtained through the concept of self-taught learning for facial emotion recognition.
The idea is to create complementary representations promoting diversity by varying the autoencoders' initialization, architecture, and training data.
Experimental results on Jaffe and Cohn-Kanade datasets using a leave-one-subject-out protocol show that FER methods based on the proposed diverse representations compare favorably against state-of-the-art approaches.
arXiv Detail & Related papers (2022-04-26T22:48:15Z)
- Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization [87.96102461221415]
We develop an algorithm that provides per-class explainability.
In an extensive battery of experiments, we demonstrate the ability of our methods to produce class-specific visualizations.
arXiv Detail & Related papers (2020-12-03T18:48:39Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.