FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of
Explainable AI Methods
- URL: http://arxiv.org/abs/2308.06248v1
- Date: Fri, 11 Aug 2023 17:29:02 GMT
- Title: FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of
Explainable AI Methods
- Authors: Robin Hesse, Simone Schaub-Meyer, Stefan Roth
- Abstract summary: XAI inherently lacks ground-truth explanations, making its automatic evaluation an unsolved problem.
We propose a novel synthetic vision dataset, named FunnyBirds, and accompanying automatic evaluation protocols.
Using our tools, we report results for 24 different combinations of neural models and XAI methods.
- Score: 15.073405675079558
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of explainable artificial intelligence (XAI) aims to uncover the
inner workings of complex deep neural models. While being crucial for
safety-critical domains, XAI inherently lacks ground-truth explanations, making
its automatic evaluation an unsolved problem. We address this challenge by
proposing a novel synthetic vision dataset, named FunnyBirds, and accompanying
automatic evaluation protocols. Our dataset allows performing semantically
meaningful image interventions, e.g., removing individual object parts, which
has three important implications. First, it enables analyzing explanations on a
part level, which is closer to human comprehension than existing methods that
evaluate on a pixel level. Second, by comparing the model output for inputs
with removed parts, we can estimate ground-truth part importances that should
be reflected in the explanations. Third, by mapping individual explanations
into a common space of part importances, we can analyze a variety of different
explanation types in a single common framework. Using our tools, we report
results for 24 different combinations of neural models and XAI methods,
demonstrating the strengths and weaknesses of the assessed methods in a fully
automatic and systematic manner.
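The part-level evaluation described in the abstract can be illustrated with a short sketch. The Python snippet below is only an illustration of the general idea, not the official FunnyBirds evaluation code: `model` is assumed to return a vector of class probabilities, `render_without_part` is a hypothetical helper that re-renders a scene with one object part removed, and `part_masks` is a hypothetical mapping from part names to boolean pixel masks. Ground-truth part importance is estimated as the drop in the target-class score when a part is removed, a pixel-level attribution map is mapped into the same part space by aggregating it over each part's mask, and the two resulting rankings are compared.

```python
import numpy as np

# Minimal sketch of part-based evaluation (not the official FunnyBirds
# protocol). `render_without_part` and `part_masks` are hypothetical helpers
# standing in for the dataset's rendering and annotation tools.

def estimate_part_importance(model, scene, target_class, parts, render_without_part):
    """Intervention-based importance: target-class score drop when a part is removed."""
    full_score = model(scene.image)[target_class]
    importance = {}
    for part in parts:                                # e.g. 'beak', 'wings', 'tail'
        ablated = render_without_part(scene, part)    # re-render the scene without this part
        importance[part] = full_score - model(ablated)[target_class]
    return importance

def attribution_to_part_space(attribution, part_masks):
    """Map a pixel-level attribution map into per-part importances."""
    return {part: float(np.abs(attribution[mask]).sum())
            for part, mask in part_masks.items()}

def rank_agreement(gt_importance, expl_importance):
    """Spearman-style agreement between the two part rankings (no tie handling)."""
    parts = sorted(gt_importance)
    gt_rank = np.argsort(np.argsort([gt_importance[p] for p in parts]))
    ex_rank = np.argsort(np.argsort([expl_importance[p] for p in parts]))
    n = len(parts)
    return 1.0 - 6.0 * np.sum((gt_rank - ex_rank) ** 2) / (n * (n ** 2 - 1))
```

A rank agreement close to 1 means the explanation orders parts similarly to the intervention-based estimate; the paper's accompanying protocols are more detailed than this single score.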
Related papers
- Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features [19.15360328688008]
We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features.
The framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable.
arXiv Detail & Related papers (2024-08-30T10:52:18Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI make it possible to mitigate the limited interpretability of such similarity models by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- A Multimodal Automated Interpretability Agent [63.8551718480664]
MAIA is a system that uses neural models to automate neural model understanding tasks.
We first characterize MAIA's ability to describe (neuron-level) features in learned representations of images.
We then show that MAIA can aid in two additional interpretability tasks: reducing sensitivity to spurious features and automatically identifying inputs likely to be misclassified.
arXiv Detail & Related papers (2024-04-22T17:55:11Z)
- Be Careful When Evaluating Explanations Regarding Ground Truth [11.340743580750642]
Evaluating explanations of images regarding ground truth primarily assesses the quality of the models under consideration rather than the explanation methods themselves.
We propose a framework for jointly evaluating the discrepancy of systems that align with an explanation system.
arXiv Detail & Related papers (2023-11-08T16:39:13Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Explainable, Domain-Adaptive, and Federated Artificial Intelligence in Medicine [5.126042819606137]
We focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making.
Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains.
Federated learning enables learning large-scale models without exposing sensitive personal health information.
arXiv Detail & Related papers (2022-11-17T03:32:00Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors [15.211935029680879]
Explainable AI (XAI) methods have been proposed to interpret how a deep neural network arrives at its predictions.
Current evaluation approaches either require subjective input from humans or incur high computation cost with automated evaluation.
We propose using backdoor trigger patterns, hidden malicious functionalities that cause misclassification, to automate the evaluation of saliency explanations (see the sketch after this list).
arXiv Detail & Related papers (2020-09-22T15:53:19Z)
- DRG: Dual Relation Graph for Human-Object Interaction Detection [65.50707710054141]
We tackle the challenging problem of human-object interaction (HOI) detection.
Existing methods either recognize the interaction of each human-object pair in isolation or perform joint inference based on complex appearance-based features.
In this paper, we leverage an abstract spatial-semantic representation to describe each human-object pair and aggregate the contextual information of the scene via a dual relation graph.
arXiv Detail & Related papers (2020-08-26T17:59:40Z)
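The backdoor-based evaluation idea referenced above (in "What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors") can likewise be sketched in a few lines. This is a hedged illustration of the general principle rather than that paper's exact protocol: the location of the planted trigger patch is assumed to be known (`trigger_mask`), and a saliency map computed for a triggered input is scored by how much of its mass, or its single strongest pixel, falls inside that region.

```python
import numpy as np

# Hedged sketch of scoring a saliency map against a known backdoor trigger
# region (not the exact protocol of the referenced paper). `saliency` is an
# (H, W) attribution map for a triggered input; `trigger_mask` is a boolean
# (H, W) array marking where the trigger patch was pasted.

def trigger_recall(saliency, trigger_mask, eps=1e-12):
    """Fraction of total saliency mass that falls inside the trigger region."""
    s = np.abs(saliency)
    return float(s[trigger_mask].sum() / (s.sum() + eps))

def pointing_game_hit(saliency, trigger_mask):
    """1 if the single most salient pixel lies inside the trigger region, else 0."""
    idx = np.unravel_index(np.argmax(np.abs(saliency)), saliency.shape)
    return int(bool(trigger_mask[idx]))
```

Because the trigger region is known by construction, such scores require no human annotation, which is the appeal of the backdoor-based evaluation described above.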