A First Look: Towards Explainable TextVQA Models via Visual and Textual
Explanations
- URL: http://arxiv.org/abs/2105.02626v1
- Date: Thu, 29 Apr 2021 00:36:17 GMT
- Authors: Varun Nagaraj Rao, Xingjian Zhen, Karen Hovsepian, Mingwei Shen
- Abstract summary: We propose MTXNet, an end-to-end trainable multimodal architecture to generate multimodal explanations.
We show that training with multimodal explanations surpasses unimodal baselines by up to 7% in CIDEr scores and 2% in IoU.
We also describe a real-world e-commerce application for using the generated multimodal explanations.
- Score: 3.7638008383533856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable deep learning models are advantageous in many situations. Prior
work mostly provides unimodal explanations through post-hoc approaches that are not part
of the original system design. Explanation mechanisms also ignore useful
textual information present in images. In this paper, we propose MTXNet, an
end-to-end trainable multimodal architecture to generate multimodal
explanations, which focuses on the text in the image. We curate a novel dataset
TextVQA-X, containing ground truth visual and multi-reference textual
explanations that can be leveraged during both training and evaluation. We then
quantitatively show that training with multimodal explanations complements
model performance and surpasses unimodal baselines by up to 7% in CIDEr scores
and 2% in IoU. More importantly, we demonstrate that the multimodal
explanations are consistent with human interpretations, help justify the
models' decision, and provide useful insights to help diagnose an incorrect
prediction. Finally, we describe a real-world e-commerce application for using
the generated multimodal explanations.
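The abstract reports visual-explanation quality in IoU, i.e. the overlap between the regions the model highlights as evidence and the ground-truth regions in TextVQA-X. A minimal sketch of that metric over binary masks (illustrative only; the paper's exact evaluation protocol may differ):

```python
import numpy as np

def explanation_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection-over-Union between a predicted and a ground-truth
    binary explanation mask (1 = pixel/region highlighted as evidence)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum() / union)

# Toy 4x4 masks: the model highlights the top-left 2x2 block,
# the annotator highlighted the top-left 2x3 block.
pred = np.zeros((4, 4)); pred[:2, :2] = 1
gt = np.zeros((4, 4)); gt[:2, :3] = 1
print(round(explanation_iou(pred, gt), 3))  # intersection 4 / union 6
```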
Related papers
- MEGL: Multimodal Explanation-Guided Learning [23.54169888224728]
We propose a novel Multimodal Explanation-Guided Learning (MEGL) framework to enhance model interpretability and improve classification performance.
Our Saliency-Driven Textual Grounding (SDTG) approach integrates spatial information from visual explanations into textual rationales, providing spatially grounded and contextually rich explanations.
We validate MEGL on two new datasets, Object-ME and Action-ME, for image classification with multimodal explanations.
arXiv Detail & Related papers (2024-11-20T05:57:00Z)
- A Concept-Based Explainability Framework for Large Multimodal Models [52.37626977572413]
We propose a dictionary learning based approach, applied to the representation of tokens.
We show that these concepts are well semantically grounded in both vision and text.
We show that the extracted multimodal concepts are useful to interpret representations of test samples.
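Extracting interpretable directions from token representations can be sketched with a truncated SVD, used here as a simplified stand-in for the paper's dictionary-learning approach (the function names and setup are illustrative, not the authors' implementation):

```python
import numpy as np

def extract_concepts(token_reprs: np.ndarray, k: int) -> np.ndarray:
    """Extract k candidate 'concept' directions from a (tokens x dims)
    matrix of representations via truncated SVD. The right singular
    vectors are orthonormal directions explaining most variance."""
    centered = token_reprs - token_reprs.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]  # (k, dim) concept dictionary

def concept_activations(token_reprs: np.ndarray, concepts: np.ndarray) -> np.ndarray:
    """Project tokens onto the concept dictionary: each row tells how
    strongly a token activates each concept."""
    return token_reprs @ concepts.T
```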
arXiv Detail & Related papers (2024-06-12T10:48:53Z)
- Diffexplainer: Towards Cross-modal Global Explanations with Diffusion Models [51.21351775178525]
DiffExplainer is a novel framework that, leveraging language-vision models, enables multimodal global explainability.
It employs diffusion models conditioned on optimized text prompts, synthesizing images that maximize class outputs.
The analysis of generated visual descriptions allows for automatic identification of biases and spurious features.
arXiv Detail & Related papers (2024-04-03T10:11:22Z)
- Language as the Medium: Multimodal Video Classification through text only [3.744589644319257]
We propose a new model-agnostic approach for generating detailed textual descriptions that captures multimodal video information.
Our method leverages the extensive knowledge learnt by large language models, such as GPT-3.5 or Llama2.
Our evaluations on popular action recognition benchmarks, such as UCF-101 or Kinetics, show these context-rich descriptions can be successfully used in video understanding tasks.
arXiv Detail & Related papers (2023-09-19T17:32:21Z)
- Information Screening whilst Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling [96.75821232222201]
Existing research on multimodal relation extraction (MRE) faces two co-existing challenges, internal-information over-utilization and external-information under-exploitation.
We propose a novel framework that simultaneously implements the idea of internal-information screening and external-information exploiting.
arXiv Detail & Related papers (2023-05-19T14:56:57Z)
- A Multi-Modal Context Reasoning Approach for Conditional Inference on Joint Textual and Visual Clues [23.743431157431893]
Conditional inference on joint textual and visual clues is a multi-modal reasoning task.
We propose a Multi-modal Context Reasoning approach, named ModCR.
We conduct extensive experiments on two corresponding data sets and experimental results show significantly improved performance.
arXiv Detail & Related papers (2023-05-08T08:05:40Z)
- Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides [57.86931911522967]
We test the capabilities of machine learning models in multimodal understanding of educational content.
Our dataset contains aligned slides and spoken language, for 180+ hours of video and 9000+ slides, with 10 lecturers from various subjects.
We introduce PolyViLT, a multimodal transformer trained with a multi-instance learning loss that is more effective than current approaches.
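A multi-instance learning loss treats a bag of instances (e.g. one slide paired with several aligned speech segments) as positive if any instance matches. The standard max-reduction form can be sketched as follows (an illustrative sketch only; PolyViLT's actual loss may differ in its details):

```python
import numpy as np

def mil_loss(instance_scores: np.ndarray, bag_label: int) -> float:
    """Binary cross-entropy on the max instance score: the bag is
    scored by its single best-matching instance, the usual
    multi-instance learning reduction."""
    p = 1.0 / (1.0 + np.exp(-np.max(instance_scores)))  # sigmoid of best score
    eps = 1e-12  # numerical safety for log
    return float(-(bag_label * np.log(p + eps)
                   + (1 - bag_label) * np.log(1 - p + eps)))
```

With one confidently positive instance, the loss is small for a positive bag label and large for a negative one, which is exactly the supervision signal MIL provides when only bag-level labels exist.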
arXiv Detail & Related papers (2022-08-17T05:30:18Z)
- Visually-Augmented Language Modeling [137.36789885105642]
We propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for Language Modeling.
With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling.
We evaluate the proposed model on various multimodal commonsense reasoning tasks, which require visual information to excel.
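A visual knowledge fusion layer of this kind can be sketched as a gated mix of a token's hidden state with pooled retrieved image features. Everything here (function name, scalar gate, `w_gate` parameter of size 2 × hidden_dim) is a hypothetical simplification, not VaLM's actual layer:

```python
import numpy as np

def visual_fusion(token_h: np.ndarray, image_feats: np.ndarray,
                  w_gate: np.ndarray) -> np.ndarray:
    """Mix a token's hidden state with the mean of its retrieved image
    features through a learned scalar sigmoid gate: the gate decides
    how much visual context to blend into the language stream."""
    v = image_feats.mean(axis=0)  # pool the retrieved images
    gate = 1.0 / (1.0 + np.exp(-np.dot(np.concatenate([token_h, v]), w_gate)))
    return gate * token_h + (1.0 - gate) * v  # convex combination
```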
arXiv Detail & Related papers (2022-05-20T13:41:12Z)
- REX: Reasoning-aware and Grounded Explanation [30.392986232906107]
We develop a new type of multi-modal explanations that explain the decisions by traversing the reasoning process and grounding keywords in the images.
Second, we identify the critical need to tightly couple important components across the visual and textual modalities for explaining the decisions.
Third, we propose a novel explanation generation method that explicitly models the pairwise correspondence between words and regions of interest.
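The pairwise word-region correspondence at the heart of such grounding can be sketched with a cosine-similarity matrix, a simple stand-in for the learned correspondence the paper proposes:

```python
import numpy as np

def ground_words_to_regions(word_emb: np.ndarray,
                            region_emb: np.ndarray) -> np.ndarray:
    """For each word, return the index of its best-matching image region
    under cosine similarity over a (num_words x num_regions) matrix."""
    w = word_emb / np.linalg.norm(word_emb, axis=1, keepdims=True)
    r = region_emb / np.linalg.norm(region_emb, axis=1, keepdims=True)
    sim = w @ r.T  # pairwise word-region similarity matrix
    return sim.argmax(axis=1)

words = np.array([[1.0, 0.0], [0.0, 1.0]])                # two word embeddings
regions = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])  # three region features
print(ground_words_to_regions(words, regions))  # word 0 -> region 0, word 1 -> region 1
```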
arXiv Detail & Related papers (2022-03-11T17:28:42Z)
- Explanation as a process: user-centric construction of multi-level and multi-modal explanations [0.34410212782758043]
We present a process-based approach that combines multi-level and multi-modal explanations.
We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model.
arXiv Detail & Related papers (2021-10-07T19:26:21Z)
- A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation [131.33610549540043]
We propose a novel graph-based multi-modal fusion encoder for NMT.
We first represent the input sentence and image using a unified multi-modal graph.
We then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations.
arXiv Detail & Related papers (2020-07-17T04:06:09Z)
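One such graph-based fusion layer, where textual and visual nodes iteratively exchange information over a unified multimodal graph, can be sketched in GCN style. This is a minimal stand-in, not the paper's exact layer:

```python
import numpy as np

def fusion_layer(node_feats: np.ndarray, adj: np.ndarray,
                 weight: np.ndarray) -> np.ndarray:
    """One fusion step: every node (word or visual object) averages its
    neighbours' features over the multimodal graph, adds a residual,
    then applies a shared linear map with ReLU."""
    deg = adj.sum(axis=1, keepdims=True).astype(float)
    deg[deg == 0] = 1.0  # isolated nodes keep their own features
    neighbour_mean = (adj @ node_feats) / deg
    return np.maximum(0.0, (node_feats + neighbour_mean) @ weight)

# Stacking layers performs the iterative semantic interactions:
#   feats = fusion_layer(fusion_layer(feats, adj, W1), adj, W2)
```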
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.