Towards Self-Explainability of Deep Neural Networks with Heatmap
Captioning and Large-Language Models
- URL: http://arxiv.org/abs/2304.02202v1
- Date: Wed, 5 Apr 2023 03:29:37 GMT
- Title: Towards Self-Explainability of Deep Neural Networks with Heatmap
Captioning and Large-Language Models
- Authors: Osman Tursun, Simon Denman, Sridha Sridharan, and Clinton Fookes
- Abstract summary: We propose a framework that includes two modules: (1) context modelling and (2) reasoning.
The code for the proposed template-based heatmap captioning approach will be publicly available.
- Score: 38.61856988422258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heatmaps are widely used to interpret deep neural networks, particularly for
computer vision tasks, and the heatmap-based explainable AI (XAI) techniques
are a well-researched topic. However, most studies concentrate on enhancing the
quality of the generated heatmap or discovering alternate heatmap generation
techniques, and little effort has been devoted to making heatmap-based XAI
automatic, interactive, scalable, and accessible. To address this gap, we
propose a framework that includes two modules: (1) context modelling and (2)
reasoning. We propose a template-based image captioning approach for context
modelling to create text-based contextual information from the heatmap and
input data. The reasoning module leverages a large language model to provide
explanations in combination with specialised knowledge. Our qualitative
experiments demonstrate the effectiveness of our framework and heatmap
captioning approach. The code for the proposed template-based heatmap
captioning approach will be publicly available.
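To make the two-module pipeline concrete, the following is a minimal sketch, assuming a saliency heatmap and a region/segmentation map are available as inputs; the function names, the sentence template, and the `call_llm` hook are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-module idea described in the abstract (not the authors' code).
import numpy as np

def describe_heatmap(heatmap, labels, class_names, top_frac=0.2):
    """Context modelling: turn a saliency heatmap into template-based text.

    `heatmap` is an HxW attribution map; `labels` is an HxW map of integer ids
    indexing `class_names` (a stand-in for whatever region annotation is available).
    """
    thresh = np.quantile(heatmap, 1.0 - top_frac)  # keep the most salient pixels
    mask = heatmap >= thresh
    hits = {}
    for idx, name in enumerate(class_names):
        overlap = np.logical_and(mask, labels == idx).sum()
        if overlap:
            hits[name] = overlap / mask.sum()
    ranked = sorted(hits.items(), key=lambda kv: kv[1], reverse=True)
    parts = [f"{name} ({share:.0%} of the most salient pixels)" for name, share in ranked]
    # Template-based caption: plug the ranked regions into a fixed sentence frame.
    return "The model's attention is concentrated on: " + ", ".join(parts) + "."

def explain(context, prediction, question):
    """Reasoning: combine the heatmap caption with a query and hand it to an LLM."""
    prompt = (
        "You are assisting with model interpretability.\n"
        f"Predicted class: {prediction}\n"
        f"Heatmap caption: {context}\n"
        f"Question: {question}\n"
        "Answer using only the evidence above."
    )
    return call_llm(prompt)  # hypothetical hook to whichever LLM backend is used

def call_llm(prompt):
    raise NotImplementedError("wire this to an LLM of your choice")
```

In this sketch the template is a single sentence frame; the framework described in the abstract additionally combines the context with specialised knowledge in the reasoning step, which here would simply be appended to the prompt.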
Related papers
- FUSE-ing Language Models: Zero-Shot Adapter Discovery for Prompt Optimization Across Tokenizers [55.2480439325792]
We propose FUSE, an approach to approximating an adapter layer that maps from one model's textual embedding space to another, even across different tokenizers.
We show the efficacy of our approach via multi-objective optimization over vision-language and causal language models for image captioning and sentiment-based image captioning.
arXiv Detail & Related papers (2024-08-09T02:16:37Z)
- Advanced Multimodal Deep Learning Architecture for Image-Text Matching [33.8315200009152]
Image-text matching is a key multimodal task that aims to model the semantic association between images and text as a matching relationship.
We introduce an advanced multimodal deep learning architecture, which combines the high-level abstract representation ability of deep neural networks for visual information with the advantages of natural language processing models for text semantic understanding.
Experiments show that, compared with existing image-text matching models, the optimized model achieves significantly better performance on a series of benchmark datasets.
arXiv Detail & Related papers (2024-06-13T08:32:24Z)
- LICO: Explainable Models with Language-Image Consistency [39.869639626266554]
This paper develops a Language-Image COnsistency model for explainable image classification, termed LICO.
We first establish a coarse global manifold structure alignment by minimizing the distance between the distributions of image and language features.
We then achieve fine-grained saliency maps by applying optimal transport (OT) theory to assign local feature maps with class-specific prompts.
arXiv Detail & Related papers (2023-10-15T12:44:33Z)
- Hypernymy Understanding Evaluation of Text-to-Image Models via WordNet Hierarchy [12.82992353036576]
We measure the capability of popular text-to-image models to understand hypernymy, or the "is-a" relation between words.
We show how our metrics can provide a better understanding of the individual strengths and weaknesses of popular text-to-image models.
arXiv Detail & Related papers (2023-10-13T16:53:25Z)
- Multi-modal reward for visual relationships-based image captioning [4.354364351426983]
This paper proposes a deep neural network architecture for image captioning that fuses visual relationship information extracted from an image's scene graph with the image's spatial feature maps.
A multi-modal reward function is then introduced for deep reinforcement learning of the proposed network, combining language and vision similarities in a common embedding space (a rough sketch of this reward idea appears after the list).
arXiv Detail & Related papers (2023-03-19T20:52:44Z)
- Self-Supervised Image-to-Text and Text-to-Image Synthesis [23.587581181330123]
We propose a novel self-supervised deep learning based approach towards learning the cross-modal embedding spaces.
In our approach, we first obtain dense vector representations of images using a StackGAN-based autoencoder model, and sentence-level dense vector representations using an LSTM-based text autoencoder.
arXiv Detail & Related papers (2021-12-09T13:54:56Z)
- Video-Text Pre-training with Learned Regions [59.30893505895156]
Video-Text pre-training aims at learning transferable representations from large-scale video-text pairs.
We propose a module for video-text learning, RegionLearner, which can take into account the structure of objects during pre-training on large-scale video-text pairs.
arXiv Detail & Related papers (2021-12-02T13:06:53Z)
- Matching Visual Features to Hierarchical Semantic Topics for Image Paragraph Captioning [50.08729005865331]
This paper develops a plug-and-play hierarchical-topic-guided image paragraph generation framework.
To capture the correlations between the image and text at multiple levels of abstraction, we design a variational inference network.
To guide the paragraph generation, the learned hierarchical topics and visual features are integrated into the language model.
arXiv Detail & Related papers (2021-05-10T06:55:39Z)
- Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language [148.0843278195794]
We propose a new model architecture for learning multi-modal neuro-symbolic representations for video captioning.
Our approach uses a dictionary learning-based method of learning relations between videos and their paired text descriptions.
arXiv Detail & Related papers (2020-11-18T20:21:19Z)
- Improving Image Captioning with Better Use of Captions [65.39641077768488]
We present a novel image captioning architecture to better explore semantics available in captions and leverage that to enhance both image representation and caption generation.
Our models first construct caption-guided visual relationship graphs that introduce beneficial inductive bias using weakly supervised multi-instance learning.
During generation, the model further incorporates visual relationships using multi-task learning for jointly predicting word and object/predicate tag sequences.
arXiv Detail & Related papers (2020-06-21T14:10:47Z)
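As a rough illustration of the multi-modal reward mentioned in the visual-relationships captioning entry above, the sketch below combines a language similarity (against reference captions) and a vision similarity (against the image) in a shared embedding space; the `embed_text` hook and the precomputed image embedding are assumptions (e.g. a CLIP-style encoder pair), not the cited paper's components.

```python
# Illustrative multi-modal reward for caption generation (not the cited paper's exact reward).
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def multimodal_reward(candidate, references, image_embedding, embed_text, beta=0.5):
    """Reward = beta * language similarity + (1 - beta) * vision similarity."""
    cand = embed_text(candidate)
    lang_sim = max(cosine(cand, embed_text(ref)) for ref in references)  # best match to a reference
    vis_sim = cosine(cand, image_embedding)                              # caption-image agreement
    return beta * lang_sim + (1.0 - beta) * vis_sim
```

Such a scalar reward can then drive policy-gradient (e.g. self-critical) training of a captioning network.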
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.