Explainable Image Captioning using CNN-CNN architecture and Hierarchical Attention
- URL: http://arxiv.org/abs/2407.09556v1
- Date: Fri, 28 Jun 2024 16:27:47 GMT
- Title: Explainable Image Captioning using CNN-CNN architecture and Hierarchical Attention
- Authors: Rishi Kesav Mohan, Sanjay Sureshkumar, Vignesh Sivasubramaniam
- Abstract summary: Explainable AI reframes a conventional method so that the model's or algorithm's predictions can be explained and justified.
A newer architecture with a CNN decoder and a hierarchical attention mechanism is used to increase the speed and accuracy of caption generation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image captioning is a technology that produces text-based descriptions for an image. Deep learning-based solutions built on top of feature recognition can serve the purpose well, but as with many other machine learning solutions, the user has little insight into the caption generation process and the model provides no explanation for its predictions; such conventional methods are therefore referred to as black-box methods. An approach whose predictions the user can trust is needed to achieve interpretability. Explainable AI reframes a conventional method so that the model's or algorithm's predictions can be explained and justified. This article therefore approaches image captioning with Explainable AI, such that the captions generated by the model can be explained and visualized. A newer architecture with a CNN decoder and a hierarchical attention mechanism is used to increase the speed and accuracy of caption generation, and incorporating explainability makes the model more trustworthy when used in an application. The model is trained and evaluated on the MSCOCO dataset, and both quantitative and qualitative results are presented in this article.
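The abstract describes the architecture only at a high level. As a rough illustration of how a pretrained CNN encoder, a causal-convolution (CNN) decoder, and a two-level "hierarchical" attention could be wired together, below is a minimal PyTorch sketch. The layer sizes, the number of decoder blocks, and the particular form of the hierarchical attention (per-layer attention over image regions, followed by an attention over the layers' outputs) are assumptions made for illustration, not the authors' reference implementation.

```python
# Illustrative sketch only: a CNN-encoder / CNN-decoder captioner with a simple
# two-level attention. All hyperparameters and the exact attention scheme are
# assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionAttention(nn.Module):
    """Soft attention of each decoder position over the image region features."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)

    def forward(self, h, regions):
        # h: (B, T, D) decoder states; regions: (B, R, D) CNN region features
        scores = torch.bmm(self.query(h), regions.transpose(1, 2))   # (B, T, R)
        alpha = F.softmax(scores / regions.size(-1) ** 0.5, dim=-1)  # attention maps
        return torch.bmm(alpha, regions)                             # (B, T, D)


class CausalConvBlock(nn.Module):
    """1-D convolution over the partial caption, left-padded so position t sees only <= t."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.pad = kernel_size - 1
        self.conv = nn.Conv1d(dim, dim, kernel_size)

    def forward(self, x):                                  # x: (B, T, D)
        y = F.pad(x.transpose(1, 2), (self.pad, 0))        # causal left padding
        return F.relu(self.conv(y)).transpose(1, 2)        # (B, T, D)


class CNNCNNCaptioner(nn.Module):
    def __init__(self, vocab_size, dim=512, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.region_proj = nn.Linear(2048, dim)            # e.g. ResNet-50 conv features
        self.blocks = nn.ModuleList([CausalConvBlock(dim) for _ in range(num_layers)])
        self.attn = nn.ModuleList([RegionAttention(dim) for _ in range(num_layers)])
        self.layer_attn = nn.Linear(dim, 1)                # second level: weight the layers
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, region_feats, captions):
        # region_feats: (B, R, 2048) spatial CNN features; captions: (B, T) token ids
        regions = self.region_proj(region_feats)
        h = self.embed(captions)
        per_layer_ctx = []
        for block, attn in zip(self.blocks, self.attn):
            h = block(h)
            per_layer_ctx.append(attn(h, regions))          # low-level: attend to regions
        ctx = torch.stack(per_layer_ctx, dim=2)             # (B, T, L, D)
        w = F.softmax(self.layer_attn(ctx), dim=2)          # high-level: attend over layers
        fused = (w * ctx).sum(dim=2)                        # (B, T, D)
        return self.out(h + fused)                          # next-token logits


# Tiny smoke test with random tensors standing in for MSCOCO features and captions.
model = CNNCNNCaptioner(vocab_size=10000)
logits = model(torch.randn(2, 49, 2048), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```

In a setup like this, the per-position attention weights over image regions are what would be visualized to explain which parts of the image drove each generated word, which is the sense in which the captions can be explained and visualized.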
Related papers
- Explainable Concept Generation through Vision-Language Preference Learning [7.736445799116692]
Concept-based explanations have become a popular choice for explaining deep neural networks post-hoc.
We devise a reinforcement learning-based preference optimization algorithm that fine-tunes the vision-language generative model.
In addition to showing the efficacy and reliability of our method, we show how our method can be used as a diagnostic tool for analyzing neural networks.
arXiv Detail & Related papers (2024-08-24T02:26:42Z)
- VALE: A Multimodal Visual and Language Explanation Framework for Image Classifiers using eXplainable AI and Language Models [0.0]
We propose a novel framework named VALE (Visual and Language Explanation).
VALE integrates explainable AI techniques with advanced language models to provide comprehensive explanations.
In this paper, we conduct a pilot study of the VALE framework for image classification tasks.
arXiv Detail & Related papers (2024-08-23T03:02:11Z)
- TextCAVs: Debugging vision models using text [37.4673705484723]
We introduce TextCAVs: a novel method which creates concept activation vectors (CAVs) using text descriptions of the concept.
In early experimental results, we demonstrate that TextCAVs produces reasonable explanations for a chest x-ray dataset (MIMIC-CXR) and natural images (ImageNet).
arXiv Detail & Related papers (2024-08-16T10:36:08Z)
- Towards Retrieval-Augmented Architectures for Image Captioning [81.11529834508424]
This work presents a novel approach towards developing image captioning models that utilize an external kNN memory to improve the generation process.
Specifically, we propose two model variants that incorporate a knowledge retriever component that is based on visual similarities.
We experimentally validate our approach on the COCO and nocaps datasets and demonstrate that incorporating an explicit external memory can significantly enhance the quality of captions (a minimal sketch of this retrieval step appears after this list).
arXiv Detail & Related papers (2024-05-21T18:02:07Z)
- Seeing in Words: Learning to Classify through Language Bottlenecks [59.97827889540685]
Humans can explain their predictions using succinct and intuitive descriptions.
We show that a vision model whose feature representations are text can effectively classify ImageNet images.
arXiv Detail & Related papers (2023-06-29T00:24:42Z)
- Greybox XAI: a Neural-Symbolic learning framework to produce interpretable predictions for image classification [6.940242990198]
Greybox XAI is a framework that composes a DNN and a transparent model through the use of a symbolic Knowledge Base (KB).
We address the problem of the lack of universal criteria for XAI by formalizing what an explanation is.
We show how this new architecture is accurate and explainable in several datasets.
arXiv Detail & Related papers (2022-09-26T08:55:31Z)
- Retrieval-Augmented Transformer for Image Captioning [51.79146669195357]
We develop an image captioning approach with a kNN memory, with which knowledge can be retrieved from an external corpus to aid the generation process.
Our architecture combines a knowledge retriever based on visual similarities, a differentiable encoder, and a kNN-augmented attention layer to predict tokens.
Experimental results, conducted on the COCO dataset, demonstrate that employing an explicit external memory can aid the generation process and increase caption quality.
arXiv Detail & Related papers (2022-07-26T19:35:49Z)
- DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting [91.56988987393483]
We present a new framework for dense prediction by implicitly and explicitly leveraging the pre-trained knowledge from CLIP.
Specifically, we convert the original image-text matching problem in CLIP to a pixel-text matching problem and use the pixel-text score maps to guide the learning of dense prediction models.
Our method is model-agnostic and can be applied to arbitrary dense prediction systems and various pre-trained visual backbones.
arXiv Detail & Related papers (2021-12-02T18:59:32Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Neuro-Symbolic Representations for Video Captioning: A Case for Leveraging Inductive Biases for Vision and Language [148.0843278195794]
We propose a new model architecture for learning multi-modal neuro-symbolic representations for video captioning.
Our approach uses a dictionary learning-based method of learning relations between videos and their paired text descriptions.
arXiv Detail & Related papers (2020-11-18T20:21:19Z)
- LIMEADE: From AI Explanations to Advice Taking [34.581205516506614]
We introduce LIMEADE, the first framework that translates both positive and negative advice into an update to an arbitrary, underlying opaque model.
We show our method improves accuracy compared to a rigorous baseline on image classification domains.
For the text modality, we apply our framework to a neural recommender system for scientific papers on a public website.
arXiv Detail & Related papers (2020-03-09T18:00:00Z)
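Two of the related papers above (Towards Retrieval-Augmented Architectures for Image Captioning and Retrieval-Augmented Transformer for Image Captioning) describe retrieving captions from an external corpus by visual similarity and feeding them to the decoder through a kNN-augmented attention layer. The sketch below illustrates that retrieve-and-attend step; the memory layout, the value of k, and the fusion rule are assumptions for illustration only, not the papers' actual implementations.

```python
# Illustrative sketch of kNN retrieval over an external caption memory, queried
# by visual similarity, plus a simple kNN-augmented attention over the retrieved
# tokens. Shapes and fusion rule are assumptions, not the papers' implementation.
import torch
import torch.nn.functional as F


def knn_retrieve(query_feat, memory_keys, memory_values, k=5):
    """query_feat: (B, D) global image feature; memory_keys: (N, D) image features
    of the external corpus; memory_values: (N, M, D) token embeddings of the
    corresponding stored captions. Returns the k nearest captions' embeddings."""
    sims = F.normalize(query_feat, dim=-1) @ F.normalize(memory_keys, dim=-1).T  # (B, N)
    topk = sims.topk(k, dim=-1).indices                                          # (B, k)
    return memory_values[topk]                                                   # (B, k, M, D)


def knn_augmented_attention(decoder_states, retrieved):
    """decoder_states: (B, T, D); retrieved: (B, k, M, D). Each decoder position
    attends over all retrieved caption tokens; the context is added residually."""
    B, k, M, D = retrieved.shape
    mem = retrieved.reshape(B, k * M, D)
    scores = decoder_states @ mem.transpose(1, 2) / D ** 0.5   # (B, T, k*M)
    ctx = F.softmax(scores, dim=-1) @ mem                      # (B, T, D)
    return decoder_states + ctx


# Toy usage with random tensors standing in for COCO features and a retrieval corpus.
keys, values = torch.randn(1000, 512), torch.randn(1000, 20, 512)
retrieved = knn_retrieve(torch.randn(2, 512), keys, values, k=5)
out = knn_augmented_attention(torch.randn(2, 12, 512), retrieved)
print(out.shape)  # torch.Size([2, 12, 512])
```

Keeping the memory outside the model is the main appeal of these retrieval-augmented variants: the external corpus can be grown or swapped without retraining the captioner.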
This list is automatically generated from the titles and abstracts of the papers on this site.