Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering
- URL: http://arxiv.org/abs/2403.17647v2
- Date: Wed, 27 Mar 2024 10:07:59 GMT
- Title: Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering
- Authors: Pascal Tilli, Ngoc Thang Vu
- Abstract summary: We introduce an interpretable approach for graph-based Visual Question Answering (VQA).
Our model is designed to intrinsically produce a subgraph during the question-answering process as its explanation.
We compare these generated subgraphs against established post-hoc explainability methods for graph neural networks, and perform a human evaluation.
- Score: 27.193336817953142
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The large success of deep learning based methods in Visual Question Answering (VQA) has concurrently increased the demand for explainable methods. Most methods in Explainable Artificial Intelligence (XAI) focus on generating post-hoc explanations rather than taking an intrinsic approach, the latter characterizing an interpretable model. In this work, we introduce an interpretable approach for graph-based VQA and demonstrate competitive performance on the GQA dataset. This approach bridges the gap between interpretability and performance. Our model is designed to intrinsically produce a subgraph during the question-answering process as its explanation, providing insight into the decision making. To evaluate the quality of these generated subgraphs, we compare them against established post-hoc explainability methods for graph neural networks, and perform a human evaluation. Moreover, we present quantitative metrics that correlate with the evaluations of human assessors, acting as automatic metrics for the generated explanatory subgraphs. Our implementation is available at https://github.com/DigitalPhonetics/Intrinsic-Subgraph-Generation-for-VQA.
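To make the idea concrete, here is a minimal, hypothetical sketch of intrinsic subgraph generation: edges of the scene graph are scored against the question, message passing is modulated by the resulting soft mask, and the top-k edges are returned as the explanation. All class names, dimensions, and the top-k readout are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
# Hypothetical sketch of intrinsic subgraph generation for graph-based VQA.
# Not the authors' implementation; all names and sizes are illustrative.
import torch
import torch.nn as nn

class SubgraphVQA(nn.Module):
    def __init__(self, node_dim=256, q_dim=256, n_answers=1842):
        super().__init__()
        # Scores each scene-graph edge conditioned on the question.
        self.edge_scorer = nn.Sequential(
            nn.Linear(2 * node_dim + q_dim, 128), nn.ReLU(), nn.Linear(128, 1))
        self.msg = nn.Linear(node_dim, node_dim)
        self.answer_head = nn.Linear(node_dim + q_dim, n_answers)

    def forward(self, x, edge_index, q, k=8):
        # x: (N, node_dim) node features; edge_index: (2, E); q: (1, q_dim).
        src, dst = edge_index
        qe = q.expand(src.size(0), -1)              # question repeated per edge
        logits = self.edge_scorer(
            torch.cat([x[src], x[dst], qe], dim=-1)).squeeze(-1)
        mask = torch.sigmoid(logits)                # soft edge mask in [0, 1]
        # Message passing restricted by the learned mask.
        msgs = self.msg(x[src]) * mask.unsqueeze(-1)
        h = x.clone().index_add(0, dst, msgs)
        graph_repr = h.mean(dim=0)
        answer = self.answer_head(torch.cat([graph_repr, q.squeeze(0)], dim=-1))
        # The top-k edges by mask weight form the explanatory subgraph.
        subgraph_edges = edge_index[:, mask.topk(min(k, mask.numel())).indices]
        return answer, subgraph_edges
```

Because the mask is produced inside the forward pass, the explanation is a byproduct of answering rather than a post-hoc attribution.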
Related papers
- Deep Generative Models for Subgraph Prediction [10.56335881963895]
This paper introduces subgraph queries as a new task for deep graph learning.
Subgraph queries jointly predict the components of a target subgraph based on evidence that is represented by an observed subgraph.
We utilize a probabilistic deep Graph Generative Model to answer subgraph queries.
arXiv Detail & Related papers (2024-08-07T19:24:02Z)
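As a loose illustration of a subgraph query, the hypothetical sketch below conditions on an observed evidence subgraph and assigns each candidate edge a probability of belonging to the target subgraph; this is a factorized-Bernoulli stand-in, not the paper's probabilistic deep graph generative model.

```python
# Illustrative subgraph-query sketch: summarize the observed evidence
# subgraph, then predict membership probabilities for candidate edges.
import torch
import torch.nn as nn

class SubgraphQueryModel(nn.Module):
    def __init__(self, node_dim=64):
        super().__init__()
        self.encoder = nn.GRU(node_dim, node_dim, batch_first=True)
        self.edge_prob = nn.Sequential(
            nn.Linear(3 * node_dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, evidence_nodes, candidate_edges, node_feats):
        # Summarize the observed (evidence) subgraph into one context vector.
        _, ctx = self.encoder(node_feats[evidence_nodes].unsqueeze(0))
        ctx = ctx.squeeze(0).squeeze(0)
        src, dst = candidate_edges
        ctxe = ctx.expand(src.size(0), -1)
        # Probability that each candidate edge is part of the target subgraph.
        return self.edge_prob(
            torch.cat([node_feats[src], node_feats[dst], ctxe], dim=-1)).squeeze(-1)
```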
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential for misinterpretation.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- GInX-Eval: Towards In-Distribution Evaluation of Graph Neural Network Explanations [21.997015999698732]
Diverse explainability methods of graph neural networks (GNN) have been developed to highlight the edges and nodes in the graph that contribute the most to the model predictions.
It remains unclear how to evaluate the correctness of those explanations, whether from a human or a model perspective.
We propose GInX-Eval, an evaluation procedure of graph explanations that overcomes the pitfalls of faithfulness.
arXiv Detail & Related papers (2023-09-28T07:56:10Z)
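For background, the sketch below shows the classic edge-deletion faithfulness test whose pitfall GInX-Eval targets: deleting edges pushes inputs out of the training distribution, which can make the measured drop unreliable. The function name and interface are assumptions, not GInX-Eval's actual procedure.

```python
# Minimal sketch of the classic edge-removal faithfulness ("fidelity") test.
# Assumes `model(x, edge_index)` returns 1-D class logits for one graph.
import torch

def fidelity_drop(model, x, edge_index, edge_scores, label, frac=0.1):
    """Confidence drop on the true label after deleting top-scoring edges."""
    k = max(1, int(frac * edge_index.size(1)))
    keep = torch.ones(edge_index.size(1), dtype=torch.bool)
    keep[edge_scores.topk(k).indices] = False
    with torch.no_grad():
        full = model(x, edge_index).softmax(-1)[label]
        pruned = model(x, edge_index[:, keep]).softmax(-1)[label]
    return (full - pruned).item()
```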
- Evaluating Link Prediction Explanations for Graph Neural Networks [0.0]
We provide metrics to assess the quality of link prediction explanations, with or without ground truth.
We discuss how underlying assumptions and technical details specific to the link prediction task, such as the choice of distance between node embeddings, can influence the quality of the explanations.
arXiv Detail & Related papers (2023-08-03T10:48:37Z)
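As one hedged example of a ground-truth-based quality measure for link prediction explanations (not necessarily the paper's own metrics), one can compute the precision of the explainer's top-k edges against a set of edges known to be relevant:

```python
# Illustrative explanation-quality measure: precision@k of the explainer's
# highest-scoring edges against a ground-truth set of relevant edge indices.
import torch

def explanation_precision_at_k(edge_scores, gt_relevant, k=10):
    """edge_scores: (E,) explainer scores; gt_relevant: iterable of edge ids."""
    k = min(k, edge_scores.numel())
    topk = set(edge_scores.topk(k).indices.tolist())
    return len(topk & set(gt_relevant)) / max(1, k)
```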
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
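For reference, the standard definitions behind this line of work (background, not reproduced from the paper): the Bures-Wasserstein distance between positive semi-definite matrices, which coincides with the 2-Wasserstein distance between centered Gaussians, and the barycenter objective that a graph mean of this kind minimizes.

```latex
\[
  d_{BW}^2(\Sigma_1, \Sigma_2)
    = \operatorname{tr}(\Sigma_1) + \operatorname{tr}(\Sigma_2)
      - 2\,\operatorname{tr}\!\Big(\big(\Sigma_1^{1/2}\,\Sigma_2\,\Sigma_1^{1/2}\big)^{1/2}\Big),
  \qquad
  \bar{\Sigma} = \arg\min_{\Sigma \succeq 0} \sum_{i=1}^{n} d_{BW}^2(\Sigma, \Sigma_i).
\]
```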
- Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering [58.64831511644917]
We introduce an interpretable-by-design model that factors model decisions into intermediate human-legible explanations.
We show that our inherently interpretable system improves by 4.64% over a comparable black-box system on reasoning-focused questions.
arXiv Detail & Related papers (2023-05-24T08:33:15Z)
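A minimal, hypothetical sketch of the clue-bottleneck idea: the answer is computed only from predicted human-legible clues, so the clue layer is inspectable by construction. Dimensions and names are assumptions, not the paper's architecture.

```python
# Illustrative two-stage "clue bottleneck" for VQA: the answer head sees only
# the predicted intermediate clues, never the raw fused features.
import torch
import torch.nn as nn

class ClueBottleneckVQA(nn.Module):
    def __init__(self, feat_dim=512, n_clues=32, n_answers=1842):
        super().__init__()
        self.clue_head = nn.Linear(feat_dim, n_clues)     # image+question -> clues
        self.answer_head = nn.Linear(n_clues, n_answers)  # clues only -> answer

    def forward(self, fused_features):
        clues = torch.sigmoid(self.clue_head(fused_features))  # interpretable layer
        return self.answer_head(clues), clues
```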
- ADVISE: ADaptive Feature Relevance and VISual Explanations for Convolutional Neural Networks [0.745554610293091]
We introduce ADVISE, a new explainability method that quantifies and leverages the relevance of each unit of the feature map to provide better visual explanations.
We extensively evaluate our idea in the image classification task using AlexNet, VGG16, ResNet50, and Xception pretrained on ImageNet.
Our experiments further show that ADVISE fulfils the sensitivity and implementation independence axioms while passing the sanity checks.
arXiv Detail & Related papers (2022-03-02T18:16:57Z)
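In the spirit of ADVISE, the sketch below weights each feature-map unit by an estimated relevance score before aggregating a visual explanation, similar in shape to CAM-style maps; the relevance estimator itself is left abstract, since this is not ADVISE's actual computation.

```python
# Hedged sketch: aggregate a CNN's feature maps into a heatmap, weighting
# each channel (unit) by a per-unit relevance score supplied by the caller.
import torch

def relevance_weighted_map(feature_maps, relevance):
    """feature_maps: (C, H, W) activations; relevance: (C,) per-unit scores."""
    heatmap = (relevance.view(-1, 1, 1) * feature_maps).sum(dim=0).clamp(min=0)
    return heatmap / (heatmap.max() + 1e-8)   # normalize to [0, 1]
```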
- Question-Answer Sentence Graph for Joint Modeling Answer Selection [122.29142965960138]
We train and integrate state-of-the-art (SOTA) models for computing scores between question-question, question-answer, and answer-answer pairs.
Online inference is then performed to solve the answer sentence selection (AS2) task on unseen queries.
arXiv Detail & Related papers (2022-02-16T05:59:53Z)
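A toy illustration of the graph idea, assuming the pairwise scores are already computed: each answer's question-answer score is smoothed by its answer-answer neighbors. The interface is invented for illustration, and this simplification omits the question-question edges mentioned in the summary.

```python
# Toy reranker over a QA sentence graph: combine direct question-answer
# scores with support propagated from similar answers.
import torch

def rerank_answers(qa_scores, aa_scores, alpha=0.5):
    """qa_scores: (A,) question-answer scores; aa_scores: (A, A) answer-answer
    similarities. Returns smoothed per-answer scores."""
    neighbor_support = (aa_scores * qa_scores.unsqueeze(0)).mean(dim=1)
    return alpha * qa_scores + (1 - alpha) * neighbor_support
```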
We propose a novel perspective on graph contrastive learning, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
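A small sketch of the "node as a distribution" idea: the encoder emits a mean and variance per node and samples embeddings via the reparameterization trick. Illustrative only; the paper's Bayesian treatment is more involved than this.

```python
# Illustrative probabilistic node encoder: each node is represented by a
# Gaussian in latent space rather than a deterministic vector.
import torch
import torch.nn as nn

class ProbabilisticNodeEncoder(nn.Module):
    def __init__(self, in_dim=128, z_dim=64):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, node_feats):
        mu, logvar = self.mu(node_feats), self.logvar(node_feats)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return z, mu, logvar  # per-node variance carries the uncertainty
```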
- A Hierarchical Reasoning Graph Neural Network for The Automatic Scoring of Answer Transcriptions in Video Job Interviews [14.091472037847499]
We propose a Hierarchical Reasoning Graph Neural Network (HRGNN) for the automatic assessment of question-answer pairs.
We employ a semantic-level reasoning graph attention network to model the interaction states of the current QA session.
Finally, we propose a gated recurrent unit encoder to represent the temporal question-answer pairs for the final prediction.
arXiv Detail & Related papers (2020-12-22T12:27:45Z)
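A rough, hypothetical sketch of the pipeline as summarized: attention over the sentences of each QA session yields per-pair representations, and a GRU encodes the temporal sequence of pairs for the final score. Note the summary names a graph attention network; plain multi-head attention is used here as a stand-in.

```python
# Hypothetical HRGNN-style pipeline: sentence-level attention per QA pair,
# then a GRU over the sequence of pair representations for one interview.
import torch
import torch.nn as nn

class HRGNNSketch(nn.Module):
    def __init__(self, sent_dim=256, hidden=128):
        super().__init__()
        self.attn = nn.MultiheadAttention(sent_dim, num_heads=4, batch_first=True)
        self.gru = nn.GRU(sent_dim, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, qa_sessions):
        # qa_sessions: (num_pairs, num_sentences, sent_dim) for one interview.
        ctx, _ = self.attn(qa_sessions, qa_sessions, qa_sessions)
        pair_repr = ctx.mean(dim=1).unsqueeze(0)      # (1, num_pairs, sent_dim)
        _, h = self.gru(pair_repr)                    # temporal QA-pair encoding
        return self.score(h.squeeze(0)).squeeze(-1)   # final assessment score
```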
This list is automatically generated from the titles and abstracts of the papers on this site.