Discrete Subgraph Sampling for Interpretable Graph based Visual Question Answering
- URL: http://arxiv.org/abs/2412.08263v1
- Date: Wed, 11 Dec 2024 10:18:37 GMT
- Title: Discrete Subgraph Sampling for Interpretable Graph based Visual Question Answering
- Authors: Pascal Tilli, Ngoc Thang Vu
- Abstract summary: We integrate different discrete subset sampling methods into a graph-based visual question answering system.
We show that the integrated methods effectively mitigate the performance trade-off between interpretability and answer accuracy.
We also conduct a human evaluation to assess the interpretability of the generated subgraphs.
- Score: 27.193336817953142
- Abstract: Explainable artificial intelligence (XAI) aims to make machine learning models more transparent. While many approaches focus on generating explanations post-hoc, interpretable approaches, which generate the explanations intrinsically alongside the predictions, are relatively rare. In this work, we integrate different discrete subset sampling methods into a graph-based visual question answering system to compare their effectiveness in generating interpretable explanatory subgraphs intrinsically. We evaluate the methods on the GQA dataset and show that the integrated methods effectively mitigate the performance trade-off between interpretability and answer accuracy, while also achieving strong co-occurrences between answer and question tokens. Furthermore, we conduct a human evaluation to assess the interpretability of the generated subgraphs using a comparative setting with the extended Bradley-Terry model, showing that the answer and question token co-occurrence metrics strongly correlate with human preferences. Our source code is publicly available.
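The paper compares several discrete subset sampling strategies; the sketch below illustrates the general idea with a simplified straight-through Gumbel top-k mask over edge scores. This is a generic relaxation, not necessarily one of the exact estimators evaluated in the paper, and all names and shapes are illustrative.
```python
import torch

def gumbel_topk_edge_mask(edge_scores: torch.Tensor, k: int, tau: float = 0.5) -> torch.Tensor:
    """Sample a discrete k-edge mask from unnormalized edge scores.

    Forward pass: hard k-hot mask from Gumbel-perturbed scores.
    Backward pass: gradients flow through the softmax relaxation (straight-through).
    """
    # Gumbel noise turns the deterministic arg-top-k into a sample.
    gumbel = -torch.log(-torch.log(torch.rand_like(edge_scores) + 1e-10) + 1e-10)
    perturbed = (edge_scores + gumbel) / tau
    soft = torch.softmax(perturbed, dim=-1)                  # relaxed selection weights
    topk = perturbed.topk(k, dim=-1).indices
    hard = torch.zeros_like(soft).scatter_(-1, topk, 1.0)    # discrete k-hot mask
    return hard + soft - soft.detach()                       # straight-through estimator

# Hypothetical usage: keep 8 of 40 scene-graph edges as the explanatory subgraph.
scores = torch.randn(1, 40, requires_grad=True)
mask = gumbel_topk_edge_mask(scores, k=8)
print(mask.shape, int(mask.sum().item()))                    # torch.Size([1, 40]) 8
```
The masked edges form the intrinsic explanation: the answer classifier only sees the sampled subgraph, so the selection is part of the prediction rather than a post-hoc rationalization.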
Related papers
- Intrinsic Subgraph Generation for Interpretable Graph based Visual Question Answering [27.193336817953142]
We introduce an interpretable approach for graph-based Visual Question Answering (VQA).
Our model is designed to intrinsically produce a subgraph during the question-answering process as its explanation.
We compare these generated subgraphs against established post-hoc explainability methods for graph neural networks, and perform a human evaluation.
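Both this paper and the one above rank explanation methods from pairwise human judgments via a Bradley-Terry model. Below is a minimal sketch of fitting the plain Bradley-Terry model by minorization-maximization; the papers use an extended variant, and the win counts here are hypothetical.
```python
import numpy as np

def bradley_terry(wins: np.ndarray, iters: int = 200) -> np.ndarray:
    """Estimate Bradley-Terry strengths from pairwise win counts.

    wins[i, j] = how often item i was preferred over item j (diagonal zero).
    Classic minorization-maximization (Zermelo) updates.
    """
    p = np.ones(wins.shape[0])
    for _ in range(iters):
        games = wins + wins.T                            # comparisons between each pair
        denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
        p = wins.sum(axis=1) / denom
        p /= p.sum()                                     # fix the scale; only ratios matter
    return p

# Three explanation methods, judged pairwise by annotators (hypothetical counts).
wins = np.array([[ 0, 14, 18],
                 [ 6,  0, 11],
                 [ 2,  9,  0]])
print(bradley_terry(wins))   # higher value = more preferred by humans
```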
arXiv Detail & Related papers (2024-03-26T12:29:18Z) - Causal Generative Explainers using Counterfactual Inference: A Case Study on the Morpho-MNIST Dataset [5.458813674116228]
We present a generative counterfactual inference approach to study the influence of visual features as well as causal factors.
We employ visual explanation methods from the OmniXAI open-source toolkit to compare them with our proposed methods.
These results suggest that our methods are well-suited for generating highly interpretable counterfactual explanations on causal datasets.
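For orientation, here is a generic Wachter-style gradient search for counterfactuals: minimally perturb the input until the model predicts a target class. This is a swapped-in, simpler technique than the paper's causal generative approach, and the model and shapes are stand-ins.
```python
import torch
import torch.nn.functional as F

def counterfactual_search(model, x, target, steps=200, lr=0.05, lam=0.1):
    """Minimally perturb x until `model` predicts `target` (Wachter-style recipe)."""
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        # Trade off reaching the target class against staying close to the original input.
        loss = F.cross_entropy(model(x_cf), target) + lam * ((x_cf - x) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_cf.detach()

model = torch.nn.Linear(784, 10)                 # stand-in classifier
x = torch.randn(1, 784)
x_cf = counterfactual_search(model, x, torch.tensor([3]))
print((x_cf - x).abs().mean().item())            # average change needed to reach class 3
```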
arXiv Detail & Related papers (2024-01-21T04:07:48Z) - Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods [6.018950511093273]
Saliency maps can explain a neural model's predictions by identifying important input features.
We formalize the underexplored task of translating saliency maps into natural language.
We compare two novel methods (search-based and instruction-based verbalizations) against conventional feature importance representations.
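A toy illustration of the task: template a sentence from the top-ranked tokens of a saliency map. The paper's search-based and instruction-based methods are more sophisticated; everything here is hypothetical.
```python
import numpy as np

def verbalize_saliency(tokens: list[str], saliency: np.ndarray, k: int = 3) -> str:
    """Turn a token-level saliency map into a one-sentence explanation."""
    order = np.argsort(saliency)[::-1][:k]                 # top-k most important tokens
    top = [tokens[i] for i in sorted(order)]               # keep original word order
    share = saliency[order].sum() / saliency.sum()         # mass covered by the top-k
    return (f"The prediction relies mainly on {', '.join(repr(t) for t in top)}, "
            f"which together account for {share:.0%} of the attribution mass.")

tokens = ["the", "movie", "was", "surprisingly", "good"]
saliency = np.array([0.02, 0.10, 0.03, 0.35, 0.50])
print(verbalize_saliency(tokens, saliency))
```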
arXiv Detail & Related papers (2022-10-13T17:48:15Z) - Measuring the Interpretability of Unsupervised Representations via Quantized Reverse Probing [97.70862116338554]
We investigate the problem of measuring interpretability of self-supervised representations.
We formulate this as estimating the mutual information between the representation and a space of manually labelled concepts.
We use our method to evaluate a large number of self-supervised representations, ranking them by interpretability.
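One simple way to realize "quantize, then estimate mutual information": vector-quantize the embeddings with k-means and compute a plug-in MI estimate against the concept labels. The paper's reverse-probing estimator differs in detail; this sketch and its data are assumptions.
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import mutual_info_score

def quantized_mi(embeddings: np.ndarray, concept_labels: np.ndarray, n_codes: int = 32) -> float:
    """Estimate I(representation; concepts) after vector-quantizing the representation.

    Quantization makes the continuous embedding discrete, so the mutual
    information with discrete concept labels can be estimated by counting.
    """
    codes = KMeans(n_clusters=n_codes, n_init=10, random_state=0).fit_predict(embeddings)
    return mutual_info_score(concept_labels, codes)

# Hypothetical example: 1000 embeddings, 10 concept classes.
rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 64))
y = rng.integers(0, 10, size=1000)
print(quantized_mi(z, y))   # near zero here, since z and y are independent
```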
arXiv Detail & Related papers (2022-09-07T16:18:50Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space, in contrast to existing techniques which embed each node as a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
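Representing each node by a distribution rather than a point vector can be realized with a reparameterized Gaussian head. A minimal sketch, assuming node features from some upstream GNN; this is not the authors' architecture.
```python
import torch
import torch.nn as nn

class StochasticNodeHead(nn.Module):
    """Map node features to a Gaussian in latent space; sample via reparameterization."""

    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.log_var = nn.Linear(in_dim, latent_dim)

    def forward(self, h: torch.Tensor):
        mu, log_var = self.mu(h), self.log_var(h)
        std = torch.exp(0.5 * log_var)
        z = mu + std * torch.randn_like(std)   # one sample per node, differentiable in mu/std
        return z, mu, log_var

head = StochasticNodeHead(in_dim=128, latent_dim=32)
z, mu, log_var = head(torch.randn(500, 128))   # 500 nodes -> 500 latent distributions
```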
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
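A toy sketch of the evaluation idea: swap pixels from the counterfactual into the real input in order of change magnitude, and attribute the prediction to the smallest prefix of swaps that flips the class. Function names, the flat pixel layout, and the classifier are hypothetical.
```python
import numpy as np

def discriminative_attribution(x, x_cf, predict, k=100):
    """Rank pixels by |x - x_cf| and swap them in until the prediction flips.

    Returns the indices of the smallest prefix of swaps that changes the class,
    i.e. the pixels that discriminate the real image from its counterfactual.
    """
    order = np.argsort(np.abs(x - x_cf))[::-1]     # most-changed pixels first
    original_class = predict(x)
    x_swap = x.copy()
    for pos, i in enumerate(order[:k]):
        x_swap[i] = x_cf[i]
        if predict(x_swap) != original_class:
            return order[:pos + 1]
    return order[:k]

x = np.random.rand(64)
x_cf = x.copy()
x_cf[:5] += 2.0                                    # counterfactual differs in 5 pixels
predict = lambda v: int(v.sum() > x.sum() + 1.0)   # hypothetical binary classifier
print(discriminative_attribution(x, x_cf, predict))
```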
arXiv Detail & Related papers (2021-09-28T00:53:34Z) - From Canonical Correlation Analysis to Self-supervised Graph Neural Networks [99.44881722969046]
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
We optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis.
Our method performs competitively on seven public graph datasets.
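A sketch in the spirit of the CCA-inspired objective: an invariance term that pulls two augmented views together, plus decorrelation terms that push each view's feature covariance toward the identity to prevent collapse. The weighting and normalization details here are assumptions.
```python
import torch

def cca_ssg_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 1e-3) -> torch.Tensor:
    """CCA-inspired feature-level objective for two augmented views (N x D each)."""
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)     # standardize per feature
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    invariance = ((z1 - z2) ** 2).sum() / n          # views should agree per node
    eye = torch.eye(d)
    decorrelate = (((z1.T @ z1) / n - eye) ** 2).sum() + (((z2.T @ z2) / n - eye) ** 2).sum()
    return invariance + lam * decorrelate

loss = cca_ssg_loss(torch.randn(256, 64), torch.randn(256, 64))   # two views of 256 nodes
```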
arXiv Detail & Related papers (2021-06-23T15:55:47Z) - Visualizing Classifier Adjacency Relations: A Case Study in Speaker Verification and Voice Anti-Spoofing [72.4445825335561]
We propose a simple method to derive a 2D representation from detection scores produced by an arbitrary set of binary classifiers.
Based upon rank correlations, our method facilitates a visual comparison of classifiers with arbitrary scores.
While the approach is fully versatile and can be applied to any detection task, we demonstrate the method using scores produced by automatic speaker verification and voice anti-spoofing systems.
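A minimal sketch of the recipe: turn pairwise rank correlations of detection scores into distances and embed them in 2D with multidimensional scaling. Kendall's tau and the particular distance transform are assumptions; the paper's exact choices may differ.
```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.manifold import MDS

def classifier_map_2d(scores: np.ndarray) -> np.ndarray:
    """Embed classifiers in 2D from rank correlations of their detection scores.

    scores: (n_classifiers, n_trials) matrix of raw detection scores.
    """
    n = scores.shape[0]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            tau, _ = kendalltau(scores[i], scores[j])   # rank agreement in [-1, 1]
            dist[i, j] = dist[j, i] = 1.0 - tau          # correlated systems end up close
    return MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

rng = np.random.default_rng(0)
coords = classifier_map_2d(rng.normal(size=(6, 500)))    # six hypothetical systems
print(coords.shape)                                      # (6, 2)
```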
arXiv Detail & Related papers (2021-06-11T13:03:33Z) - Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations [20.441578071446212]
We introduce EVAL-X as a method to quantitatively evaluate interpretations and REAL-X as an amortized explanation method.
We show EVAL-X can detect when predictions are encoded in interpretations and show the advantages of REAL-X through quantitative and radiologist evaluation.
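A toy select-then-predict module illustrating the amortized-explanation setup: a selector masks features and the predictor sees only the kept ones, so the mask itself is the explanation. REAL-X's actual training protocol (a predictor trained on random masks, so the selector cannot smuggle the label into the mask) is only noted in a comment; the module below is an assumption-laden sketch.
```python
import torch
import torch.nn as nn

class AmortizedExplainer(nn.Module):
    """Select-then-predict: a selector masks features, a predictor sees the kept ones.

    In REAL-X the predictor is trained separately on randomly masked inputs,
    which prevents the selector from encoding the prediction in its mask.
    """

    def __init__(self, dim: int, n_classes: int):
        super().__init__()
        self.selector = nn.Linear(dim, dim)        # per-feature keep logits
        self.predictor = nn.Linear(dim, n_classes)

    def forward(self, x: torch.Tensor):
        probs = torch.sigmoid(self.selector(x))
        keep = torch.bernoulli(probs)              # discrete mask
        mask = keep + probs - probs.detach()       # straight-through gradient
        return self.predictor(x * mask), mask

model = AmortizedExplainer(dim=20, n_classes=2)
logits, mask = model(torch.randn(8, 20))
print(mask.shape)                                  # per-instance, per-feature keep mask
```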
arXiv Detail & Related papers (2021-03-02T17:42:33Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
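A two-view toy version of a cross reconstruction loss: each view must be reconstructable from the other view's latent code, which forces the shared information into both latents. The adversarial component and the co-attention mechanism are omitted, and all dimensions are placeholders.
```python
import torch
import torch.nn as nn

class CrossReconstruction(nn.Module):
    """Reconstruct each view from the other view's latent code (two-view toy version)."""

    def __init__(self, dim: int, latent: int):
        super().__init__()
        self.enc1, self.enc2 = nn.Linear(dim, latent), nn.Linear(dim, latent)
        self.dec1, self.dec2 = nn.Linear(latent, dim), nn.Linear(latent, dim)

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        z1, z2 = self.enc1(x1), self.enc2(x2)
        # Cross terms: view 1 recovered from view 2's code and vice versa,
        # so only information common to both views can lower the loss.
        return ((self.dec1(z2) - x1) ** 2).mean() + ((self.dec2(z1) - x2) ** 2).mean()

model = CrossReconstruction(dim=100, latent=16)
loss = model(torch.randn(32, 100), torch.randn(32, 100))   # two views, batch of 32
```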
arXiv Detail & Related papers (2021-02-15T18:46:44Z)