A Question-Centric Model for Visual Question Answering in Medical Imaging
- URL: http://arxiv.org/abs/2003.08760v1
- Date: Mon, 2 Mar 2020 10:16:16 GMT
- Title: A Question-Centric Model for Visual Question Answering in Medical Imaging
- Authors: Minh H. Vu, Tommy Löfstedt, Tufve Nyholm, Raphael Sznitman
- Abstract summary: We present a novel Visual Question Answering approach that allows an image to be queried by means of a written question.
Experiments on a variety of medical and natural image datasets show that by fusing image and question features in a novel way, the proposed approach achieves an equal or higher accuracy compared to current methods.
- Score: 3.619444603816032
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning methods have proven extremely effective at performing a variety
of medical image analysis tasks. With their potential use in clinical routine,
their lack of transparency has however been one of their few weak points,
raising concerns regarding their behavior and failure modes. While most
research to infer model behavior has focused on indirect strategies that
estimate prediction uncertainties and visualize model support in the input
image space, the ability to explicitly query a prediction model regarding its
image content offers a more direct way to determine the behavior of trained
models. To this end, we present a novel Visual Question Answering approach that
allows an image to be queried by means of a written question. Experiments on a
variety of medical and natural image datasets show that by fusing image and
question features in a novel way, the proposed approach achieves an equal or
higher accuracy compared to current methods.
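To make the setup concrete, here is a minimal sketch of the generic image-question fusion pattern. The toy encoders, layer sizes, and the element-wise (Hadamard) fusion operator are illustrative assumptions, not the paper's architecture; the paper's contribution lies precisely in a more elaborate, question-driven way of doing this fusion step.

```python
import torch
import torch.nn as nn

class SimpleVQAFusion(nn.Module):
    """Generic VQA skeleton: encode image and question, fuse, classify.
    Illustrative only -- not the paper's exact model."""
    def __init__(self, vocab_size=1000, n_answers=10, dim=512):
        super().__init__()
        # Toy image encoder; in practice a pretrained CNN backbone is used.
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )
        # Toy question encoder: word embeddings summarized by an LSTM.
        self.embed = nn.Embedding(vocab_size, 300)
        self.lstm = nn.LSTM(300, dim, batch_first=True)
        self.classifier = nn.Linear(dim, n_answers)

    def forward(self, image, question_ids):
        v = self.img_enc(image)                       # (B, dim) image features
        _, (h, _) = self.lstm(self.embed(question_ids))
        q = h[-1]                                     # (B, dim) question features
        fused = v * q                                 # element-wise fusion (assumption)
        return self.classifier(fused)                 # logits over candidate answers

model = SimpleVQAFusion()
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 10])
```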
Related papers
- DiffExplainer: Unveiling Black Box Models Via Counterfactual Generation [11.201840101870808]
We propose an agent model capable of generating counterfactual images that prompt different decisions when plugged into a black box model.
By employing this agent model, we can uncover influential image patterns that impact the black box model's final predictions.
We validated our approach in the rigorous domain of medical prognosis tasks.
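As rough intuition for the counterfactual idea, the sketch below runs a plain gradient search for an image that flips a frozen classifier's decision while staying close to the input; the paper instead trains a generative agent model, so treat this only as the simplest possible stand-in.

```python
import torch
import torch.nn.functional as F

def counterfactual_search(black_box, x, target_class, steps=200, lr=0.05, lam=0.1):
    """Gradient search for a counterfactual of x that the frozen black-box
    classifier assigns to target_class, while staying close to the original.
    (Bare-bones stand-in; the paper trains a generative agent model instead.)"""
    x_cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(black_box(x_cf), target_class)
        loss = loss + lam * (x_cf - x).pow(2).mean()   # proximity penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x_cf.detach()

# Any frozen classifier works; a toy one keeps the example self-contained.
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
net.requires_grad_(False)
x = torch.rand(1, 3, 32, 32)
x_cf = counterfactual_search(net, x, torch.tensor([1]))
print((x_cf - x).abs().mean())  # how far the counterfactual moved
```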
arXiv Detail & Related papers (2024-06-21T14:27:02Z) - Information Theoretic Text-to-Image Alignment [49.396917351264655]
We present a novel method that relies on an information-theoretic alignment measure to steer image generation.
Our method is on par with or superior to the state of the art, yet requires nothing but a pre-trained denoising network to estimate the mutual information (MI).
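One heavily simplified reading of the idea: if a denoising network predicts noise better when conditioned on the text, the image and text share information. The `denoiser(x_t, t, cond)` interface and the linear noising scheme below are assumptions made for illustration, not the paper's estimator.

```python
import torch

def pointwise_mi_proxy(denoiser, x0, cond, n_levels=8):
    """Average reduction in denoising error when conditioning on the prompt,
    as a crude point-wise mutual-information proxy. `denoiser(x_t, t, cond)`
    is an assumed interface; the noising scheme is a toy linear interpolation."""
    gains = []
    for t in torch.linspace(0.1, 0.9, n_levels):
        noise = torch.randn_like(x0)
        x_t = (1 - t) * x0 + t * noise
        err_uncond = (denoiser(x_t, t, None) - noise).pow(2).mean()
        err_cond = (denoiser(x_t, t, cond) - noise).pow(2).mean()
        gains.append(err_uncond - err_cond)  # conditioning helps iff aligned
    return torch.stack(gains).mean()

# Toy stand-in denoiser; real use plugs in a pre-trained diffusion model.
toy = lambda x_t, t, cond: x_t * (0.9 if cond is None else 0.8)
print(float(pointwise_mi_proxy(toy, torch.randn(1, 3, 16, 16), "a chest x-ray")))
```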
arXiv Detail & Related papers (2024-05-31T12:20:02Z) - Robust and Interpretable Medical Image Classifiers via Concept
Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
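The concept-bottleneck pattern itself fits in a few lines. The concept list below is a hypothetical hand-written example, and the frozen concept embeddings stand in for text-encoder outputs of a pre-trained vision-language model such as CLIP.

```python
import torch
import torch.nn as nn

# Hypothetical clinical concepts; in the paper these are queried from GPT-4.
concepts = ["enlarged heart silhouette", "pleural effusion", "clear lung fields"]

class ConceptBottleneckClassifier(nn.Module):
    def __init__(self, img_dim=512, n_concepts=3, n_classes=2):
        super().__init__()
        # Frozen concept embeddings (stand-ins for text-encoder outputs).
        self.concept_emb = nn.Parameter(torch.randn(n_concepts, img_dim),
                                        requires_grad=False)
        self.head = nn.Linear(n_concepts, n_classes)  # interpretable linear head

    def forward(self, img_feat):
        # Concept scores = cosine similarity between image and concept embeddings.
        sims = nn.functional.cosine_similarity(
            img_feat.unsqueeze(1), self.concept_emb.unsqueeze(0), dim=-1
        )
        return self.head(sims), sims  # logits + human-readable concept scores

model = ConceptBottleneckClassifier()
logits, scores = model(torch.randn(4, 512))
print(dict(zip(concepts, scores[0].tolist())))
```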
arXiv Detail & Related papers (2023-10-04T21:57:09Z) - Causality-Driven One-Shot Learning for Prostate Cancer Grading from MRI [1.049712834719005]
We present a novel method to automatically classify medical images that learns and leverages weak causal signals in the image.
Our framework consists of a convolutional neural network backbone and a causality-extractor module.
Our findings show that causal relationships among features play a crucial role in enhancing the model's ability to discern relevant information.
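The summary leaves the causality-extractor unspecified; one plausible sketch, with made-up details, scores pairwise co-activation between CNN feature maps, and the paper's exact formulation may differ.

```python
import torch

def causality_map(feats, eps=1e-8):
    """Pairwise co-activation map between CNN feature maps: entry (i, j)
    approximates P(map i active | map j active) from peak activations.
    (A simplified reading of a 'causality-extractor' module.)"""
    B, C, H, W = feats.shape
    a = feats.clamp(min=0).flatten(2).amax(dim=2)   # (B, C) peak per feature map
    a = a / (a.sum(dim=1, keepdim=True) + eps)      # normalize to pseudo-probabilities
    joint = a.unsqueeze(2) * a.unsqueeze(1)         # (B, C, C) pairwise products
    return joint / (a.unsqueeze(1) + eps)           # condition on map j

feats = torch.relu(torch.randn(2, 8, 7, 7))
print(causality_map(feats).shape)  # torch.Size([2, 8, 8])
```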
arXiv Detail & Related papers (2023-09-19T16:08:33Z) - TorchEsegeta: Framework for Interpretability and Explainability of
Image-based Deep Learning Models [0.0]
Clinicians are often sceptical about applying automatic image processing approaches, especially deep learning-based methods, in practice.
This paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas that most influence the algorithm's decision.
It introduces a unified framework, TorchEsegeta, for applying various interpretability and explainability techniques to deep learning models.
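As a flavor of the kind of technique such a framework exposes, the sketch below computes a vanilla gradient saliency map; this is a generic example, not TorchEsegeta's API.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Vanilla gradient saliency: which input pixels most influence the
    score of target_class. (Generic example of one wrapped technique.)"""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    return image.grad.abs().amax(dim=1)  # (B, H, W) per-pixel importance

net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 5))
sal = gradient_saliency(net, torch.rand(1, 3, 32, 32), target_class=2)
print(sal.shape)  # torch.Size([1, 32, 32])
```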
arXiv Detail & Related papers (2021-10-16T01:00:15Z) - Variational Topic Inference for Chest X-Ray Report Generation [102.04931207504173]
Report generation for medical imaging promises to reduce workload and assist diagnosis in clinical practice.
Recent work has shown that deep learning models can successfully caption natural images.
We propose variational topic inference for automatic report generation.
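A generic sketch of variational inference for a latent topic per report sentence, assuming a Gaussian posterior and the reparameterization trick; the paper's full encoder-decoder architecture is richer.

```python
import torch
import torch.nn as nn

class TopicInference(nn.Module):
    """VAE-style latent 'topic' for one report sentence: infer a Gaussian
    posterior over a topic vector, sample with the reparameterization trick,
    and regularize toward a standard-normal prior with a KL term.
    (Generic variational sketch, not the paper's full model.)"""
    def __init__(self, feat_dim=512, topic_dim=64):
        super().__init__()
        self.mu = nn.Linear(feat_dim, topic_dim)
        self.logvar = nn.Linear(feat_dim, topic_dim)

    def forward(self, img_feat):
        mu, logvar = self.mu(img_feat), self.logvar(img_feat)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        kl = 0.5 * (logvar.exp() + mu.pow(2) - 1 - logvar).sum(dim=1).mean()
        return z, kl  # z conditions a sentence decoder; kl joins the loss

z, kl = TopicInference()(torch.randn(4, 512))
print(z.shape, float(kl))
```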
arXiv Detail & Related papers (2021-07-15T13:34:38Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
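The co-attention step itself can be sketched directly; the adversarial setup and cross reconstruction loss are omitted, and all shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def co_attention(x1, x2):
    """Symmetric co-attention between two views: each view attends over the
    other through a shared affinity matrix. (Minimal illustration only.)"""
    affinity = x1 @ x2.transpose(1, 2)               # (B, N1, N2) pairwise scores
    att1 = F.softmax(affinity, dim=2) @ x2           # view-1 tokens summarize view 2
    att2 = F.softmax(affinity.transpose(1, 2), dim=2) @ x1
    return att1, att2

v1, v2 = torch.randn(2, 5, 64), torch.randn(2, 7, 64)
a1, a2 = co_attention(v1, v2)
print(a1.shape, a2.shape)  # (2, 5, 64) (2, 7, 64)
```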
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
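A toy rendering of the adversarial U-Net idea, with a one-level skip-connected generator and a single non-saturating GAN loss computation; the actual model is far deeper and trained over many steps.

```python
import torch
import torch.nn as nn

class TinyUNetGenerator(nn.Module):
    """One-level U-Net (single skip connection) as an augmentation generator.
    (Toy scale; a real model stacks several encoder/decoder levels.)"""
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(1, 16, 4, 2, 1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(16, 16, 4, 2, 1), nn.ReLU())
        self.out = nn.Conv2d(17, 1, 3, 1, 1)  # 16 up-channels + 1 skip channel

    def forward(self, x):
        h = self.up(self.down(x))
        return torch.tanh(self.out(torch.cat([h, x], dim=1)))

gen = TinyUNetGenerator()
disc = nn.Sequential(nn.Conv2d(1, 8, 4, 2, 1), nn.ReLU(), nn.Flatten(),
                     nn.Linear(8 * 16 * 16, 1))
real = torch.rand(4, 1, 32, 32)
fake = gen(torch.randn(4, 1, 32, 32))
# One adversarial step: discriminator separates real from generated images.
bce = nn.BCEWithLogitsLoss()
d_loss = bce(disc(real), torch.ones(4, 1)) + bce(disc(fake.detach()), torch.zeros(4, 1))
g_loss = bce(disc(fake), torch.ones(4, 1))  # generator tries to fool disc
print(float(d_loss), float(g_loss))
```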
arXiv Detail & Related papers (2021-01-12T23:02:26Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning
For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
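A miniature of the salience-then-intervene loop, with made-up masking details: compute a gradient salience map, blank out the most salient pixels, and penalize continued confidence in the original class. PPI's actual salience module and losses are more elaborate.

```python
import torch
import torch.nn.functional as F

def intervention_penalty(model, x, y):
    """Salience-then-intervene in miniature: build a gradient salience map,
    zero out the most salient pixels, and penalize the model if it still
    predicts the original class on the intervened image.
    (Made-up masking details; PPI's salience module and losses differ.)"""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits.gather(1, y[:, None]).sum().backward()
    sal = x.grad.abs().amax(dim=1, keepdim=True)              # (B, 1, H, W)
    thresh = sal.flatten(1).quantile(0.9, dim=1)[:, None, None, None]
    x_int = x.detach() * (sal < thresh)                        # mask top-10% pixels
    probs = F.softmax(model(x_int), dim=1)
    return probs.gather(1, y[:, None]).mean()                  # drive this toward 0

net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 4))
penalty = intervention_penalty(net, torch.rand(2, 3, 32, 32), torch.tensor([0, 3]))
print(float(penalty))
```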
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.