Explainability Requires Interactivity
- URL: http://arxiv.org/abs/2109.07869v1
- Date: Thu, 16 Sep 2021 11:02:25 GMT
- Title: Explainability Requires Interactivity
- Authors: Matthias Kirchler, Martin Graf, Marius Kloft, Christoph Lippert
- Abstract summary: We introduce an interactive framework to understand the highly complex decision boundaries of modern vision models.
It allows the user to exhaustively inspect, probe, and test a network's decisions.
- Score: 13.381840447825969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When explaining the decisions of deep neural networks, simple stories are tempting but dangerous. Especially in computer vision, the most popular explanation approaches give their users a false sense of comprehension and provide an overly simplistic picture. We introduce an interactive framework to understand the highly complex decision boundaries of modern vision models. It allows the user to exhaustively inspect, probe, and test a network's decisions. Across a range of case studies, we compare the power of our interactive approach to static explanation methods, showing how the latter can lead a user astray, with potentially severe consequences.
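As a minimal, hypothetical sketch of the kind of probing described above (not the authors' framework), the following walks a pretrained classifier along the straight line between two images and reports how a chosen class probability changes; the ResNet-18 model, ImageNet preprocessing, class index, and file paths are all illustrative assumptions.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Illustrative model choice; any image classifier would do.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def probe_path(img_a, img_b, target_class, steps=11):
    """Report the target-class probability at points on the line segment
    between two inputs -- a crude way of walking toward a decision boundary."""
    xa = preprocess(img_a).unsqueeze(0)
    xb = preprocess(img_b).unsqueeze(0)
    with torch.no_grad():
        for i in range(steps):
            alpha = i / (steps - 1)
            x = (1 - alpha) * xa + alpha * xb  # interpolate in input space
            prob = model(x).softmax(dim=1)[0, target_class].item()
            print(f"alpha={alpha:.2f}  p(class {target_class}) = {prob:.3f}")

# Example usage (file paths and class index are placeholders):
# probe_path(Image.open("a.jpg").convert("RGB"),
#            Image.open("b.jpg").convert("RGB"), target_class=281)
```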
Related papers
- Interactivity x Explainability: Toward Understanding How Interactivity Can Improve Computer Vision Explanations [29.91211251232355]
We investigate interactivity as a mechanism for tackling issues in three common explanation types: heatmap-based, concept-based, and prototype-based explanations.
We found that while interactivity enhances user control and facilitates rapid convergence to relevant information, it also introduces new challenges.
To address these, we provide design recommendations for interactive computer vision explanations, including carefully selected default views, independent input controls, and constrained output spaces.
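For the heatmap case, here is a minimal sketch of how a single interactive control can reshape an explanation: an occlusion-sensitivity heatmap whose patch size the user chooses. The method and parameters are illustrative assumptions, not taken from the cited paper.

```python
import torch

def occlusion_heatmap(model, x, target_class, patch=16, stride=16):
    """x: preprocessed image of shape (1, C, H, W). Each heatmap cell records
    the drop in target-class probability when that region is zeroed out."""
    model.eval()
    with torch.no_grad():
        base = model(x).softmax(dim=1)[0, target_class].item()
        _, _, H, W = x.shape
        rows = (H - patch) // stride + 1
        cols = (W - patch) // stride + 1
        heat = torch.zeros(rows, cols)
        for i, top in enumerate(range(0, H - patch + 1, stride)):
            for j, left in enumerate(range(0, W - patch + 1, stride)):
                occluded = x.clone()
                occluded[:, :, top:top + patch, left:left + patch] = 0.0
                p = model(occluded).softmax(dim=1)[0, target_class].item()
                heat[i, j] = base - p  # larger value = more important region
    return heat

# Re-running with patch=8 vs. patch=32 shows how one user control changes
# which regions the explanation highlights.
```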
arXiv Detail & Related papers (2025-04-14T22:35:26Z)
- VITAL: More Understandable Feature Visualization through Distribution Alignment and Relevant Information Flow [57.96482272333649]
Feature visualization (FV) is a powerful tool to decode what information neurons are responding to.
We propose to guide FV through statistics of prototypical image features combined with measures of relevant network flow to generate images.
Our approach yields human-understandable visualizations that both qualitatively and quantitatively improve over state-of-the-art FVs.
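For context, a minimal sketch of plain activation maximization, the baseline form of feature visualization that such work builds on; this is not the VITAL method (its distribution-alignment and information-flow terms are omitted), and the model, layer, and unit are illustrative assumptions.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def visualize_unit(layer, unit, steps=200, lr=0.05):
    """Optimize an input image so that one channel of `layer` responds strongly."""
    acts = {}
    handle = layer.register_forward_hook(lambda m, inp, out: acts.update(out=out))
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(x)
        loss = -acts["out"][0, unit].mean()  # maximize the channel's mean activation
        loss.backward()
        opt.step()
    handle.remove()
    return x.detach()

# Example: visualize channel 10 of the last residual stage.
# img = visualize_unit(model.layer4, unit=10)
```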
arXiv Detail & Related papers (2025-03-28T13:08:18Z)
- From Pixels to Words: Leveraging Explainability in Face Recognition through Interactive Natural Language Processing [2.7568948557193287]
Face Recognition (FR) has advanced significantly with the development of deep learning, achieving high accuracy in several applications.
The lack of interpretability of these systems raises concerns about their accountability, fairness, and reliability.
We propose an interactive framework to enhance the explainability of FR models by combining model-agnostic Explainable Artificial Intelligence (XAI) and Natural Language Processing (NLP) techniques.
arXiv Detail & Related papers (2024-09-24T13:40:39Z)
- Interaction as Explanation: A User Interaction-based Method for Explaining Image Classification Models [1.3597551064547502]
In computer vision, explainable AI (xAI) methods seek to mitigate the 'black-box' problem.
Traditional xAI methods concentrate on visualizing input features that influence model predictions.
We present an interaction-based xAI method that enhances user comprehension of image classification models through user interaction.
arXiv Detail & Related papers (2024-04-15T14:26:00Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks that circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of adversarial example transferability across deep neural networks.
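A minimal sketch of the transfer setting the survey covers: craft a one-step FGSM perturbation against a source model and test whether it also fools a separately trained target model. The model pair and epsilon are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Two independently trained classifiers: craft on `source`, evaluate on `target`.
source = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

def fgsm(model, x, label, eps=4 / 255):
    """One-step FGSM perturbation (in normalized input space, for simplicity)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def transfers(x, label):
    """True where an example crafted against `source` also flips `target`."""
    x_adv = fgsm(source, x, label)
    with torch.no_grad():
        return target(x_adv).argmax(dim=1) != label

# x: a preprocessed (1, 3, 224, 224) batch; label: e.g. torch.tensor([281])
```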
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability [17.052366688978935]
We investigate whether free-form conversations can enhance users' comprehension of static explanations.
We measure the effect of the conversation on participants' ability to choose among three machine learning models.
Our findings highlight the importance of customized model explanations in the format of free-form conversations.
arXiv Detail & Related papers (2023-09-25T09:00:38Z)
- Explaining Agent Behavior with Large Language Models [7.128139268426959]
We propose an approach to generate natural language explanations for an agent's behavior based only on observations of states and actions.
We show how a compact representation of the agent's behavior can be learned and used to produce plausible explanations.
arXiv Detail & Related papers (2023-09-19T06:13:24Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
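In a similar spirit, a minimal sketch that distills a policy's decisions into a shallow decision tree over observation features; it mirrors only the decision-tree ingredient, not the paper's disentanglement metrics, and the policy and feature names are dummy placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
observations = rng.normal(size=(5000, 4))  # stand-in for logged policy states
# Stand-in for a trained neural policy's discrete action choices:
actions = (observations[:, 0] + 0.5 * observations[:, 2] > 0).astype(int)

# Fit a shallow surrogate tree and read it as a rule-based explanation.
tree = DecisionTreeClassifier(max_depth=3).fit(observations, actions)
print(export_text(tree, feature_names=["pos", "vel", "angle", "ang_vel"]))
print("agreement with policy:", tree.score(observations, actions))
```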
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing activation profiles can quickly pinpoint exploited areas in a model.
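A minimal sketch of one plausible notion of an activation profile: record mean activation per network stage for a clean and a perturbed input, then rank stages by relative change. The stage choice and metric are illustrative assumptions; the paper's visual-analytics front end is out of scope here.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
stages = {"layer1": model.layer1, "layer2": model.layer2,
          "layer3": model.layer3, "layer4": model.layer4}

def activation_profile(x):
    """Mean absolute activation after each residual stage, for one input."""
    profile, handles = {}, []
    for name, stage in stages.items():
        handles.append(stage.register_forward_hook(
            lambda m, inp, out, n=name: profile.__setitem__(n, out.abs().mean().item())))
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return profile

def relative_shift(x_clean, x_adv):
    """Rank stages by how much the perturbed input shifts their activations."""
    clean, adv = activation_profile(x_clean), activation_profile(x_adv)
    shift = {n: abs(adv[n] - clean[n]) / (clean[n] + 1e-8) for n in clean}
    return dict(sorted(shift.items(), key=lambda kv: kv[1], reverse=True))
```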
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
- Fanoos: Multi-Resolution, Multi-Strength, Interactive Explanations for Learned Systems [0.0]
Fanoos is a framework for combining formal verification techniques, search, and user interaction to explore explanations at the desired level of granularity and fidelity.
We demonstrate the ability of Fanoos to produce and adjust the abstractness of explanations in response to user requests on a learned controller for an inverted double pendulum and on a learned CPU usage model.
arXiv Detail & Related papers (2020-06-22T17:35:53Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.