Prototype Learning for Explainable Brain Age Prediction
- URL: http://arxiv.org/abs/2306.09858v2
- Date: Mon, 6 Nov 2023 15:22:29 GMT
- Title: Prototype Learning for Explainable Brain Age Prediction
- Authors: Linde S. Hesse, Nicola K. Dinsdale, Ana I. L. Namburete
- Abstract summary: We present ExPeRT, an explainable prototype-based model specifically designed for regression tasks.
Our proposed model makes a sample prediction from the distances to a set of learned prototypes in latent space, using a weighted mean of prototype labels.
Our approach achieved state-of-the-art prediction performance while providing insight into the model's reasoning process.
- Score: 1.104960878651584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The lack of explainability of deep learning models limits the adoption of
such models in clinical practice. Prototype-based models can provide inherent
explainable predictions, but these have predominantly been designed for
classification tasks, despite many important tasks in medical imaging being
continuous regression problems. Therefore, in this work, we present ExPeRT: an
explainable prototype-based model specifically designed for regression tasks.
Our proposed model makes a sample prediction from the distances to a set of
learned prototypes in latent space, using a weighted mean of prototype labels.
The distances in latent space are regularized to be relative to label
differences, and each of the prototypes can be visualized as a sample from the
training set. The image-level distances are further constructed from
patch-level distances, in which the patches of both images are structurally
matched using optimal transport. This provides an example-based
explanation with patch-level detail at inference time. We demonstrate our
proposed model for brain age prediction on two imaging datasets: adult MR and
fetal ultrasound. Our approach achieved state-of-the-art prediction performance
while providing insight into the model's reasoning process.
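As a rough illustration of the prediction rule described in the abstract (patch-level distances aggregated into an image-level distance via optimal transport, followed by a distance-weighted mean of prototype labels), the minimal NumPy sketch below may help. It is a simplification under several assumptions and is not the authors' released implementation: it uses uniform patch weights, a generic Sinkhorn solver, and a softmax weighting with a temperature parameter; names such as sinkhorn_distance and predict_label are illustrative.

```python
import numpy as np

def sinkhorn_distance(x_patches, y_patches, eps=1.0, n_iters=50):
    """Entropy-regularized optimal-transport cost between two sets of patch
    embeddings (a generic Sinkhorn iteration, not the paper's exact solver)."""
    # Pairwise Euclidean costs between the patches of the two images.
    cost = np.linalg.norm(x_patches[:, None, :] - y_patches[None, :, :], axis=-1)
    n, m = cost.shape
    a = np.full(n, 1.0 / n)          # uniform mass over query patches
    b = np.full(m, 1.0 / m)          # uniform mass over prototype patches
    K = np.exp(-cost / eps)          # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):         # alternating Sinkhorn scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    plan = u[:, None] * K * v[None, :]   # approximate transport plan
    return float((plan * cost).sum())    # image-level distance

def predict_label(query_patches, prototype_patches, prototype_labels, temperature=1.0):
    """Predict a continuous label as a distance-weighted mean of prototype labels."""
    d = np.array([sinkhorn_distance(query_patches, p) for p in prototype_patches])
    w = np.exp(-(d - d.min()) / temperature)   # closer prototypes get larger weights
    w /= w.sum()
    return float(w @ np.asarray(prototype_labels, dtype=float))

# Toy usage: 16 patch embeddings of dimension 32 per image, 5 prototypes.
rng = np.random.default_rng(0)
query = rng.normal(size=(16, 32))
prototypes = [rng.normal(size=(16, 32)) for _ in range(5)]
prototype_ages = [25.0, 40.0, 55.0, 70.0, 85.0]
print(predict_label(query, prototypes, prototype_ages))
```

In the actual model, the patch embeddings come from a trained encoder and the prototypes are learned samples from the training set, with latent distances regularized to reflect label differences; none of that training machinery is shown in this sketch.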
Related papers
- ProtoS-ViT: Visual foundation models for sparse self-explainable classifications [0.6249768559720122]
This work demonstrates how frozen pre-trained ViT backbones can be effectively turned into prototypical models.
ProtoS-ViT surpasses existing prototypical models, showing strong performance in terms of accuracy, compactness, and explainability.
arXiv Detail & Related papers (2024-06-14T13:36:30Z)
- Causal Estimation of Memorisation Profiles [58.20086589761273]
Understanding memorisation in language models has practical and societal implications.
Memorisation is the causal effect of training with an instance on the model's ability to predict that instance.
This paper proposes a new, principled, and efficient method to estimate memorisation based on the difference-in-differences design from econometrics.
arXiv Detail & Related papers (2024-06-06T17:59:09Z)
- Enhancing Interpretability of Vertebrae Fracture Grading using Human-interpretable Prototypes [7.633493982907541]
We propose a novel interpretable-by-design method, ProtoVerse, to find relevant sub-parts of vertebral fractures (prototypes) that reliably explain the model's decision in a human-understandable way.
We have experimented with the VerSe'19 dataset and outperformed the existing prototype-based method.
arXiv Detail & Related papers (2024-04-03T16:04:59Z)
- A Lightweight Generative Model for Interpretable Subject-level Prediction [0.07989135005592125]
We propose a technique for single-subject prediction that is inherently interpretable.
Experiments demonstrate that the resulting model can be efficiently inverted to make accurate subject-level predictions.
arXiv Detail & Related papers (2023-06-19T18:20:29Z)
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models fine-tuned on only a few examples exhibit strong prediction bias across labels.
Although few-shot fine-tuning can mitigate this prediction bias, our analysis shows that models improve performance by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Using StyleGAN for Visual Interpretability of Deep Learning Models on Medical Images [0.7874708385247352]
We propose a new interpretability method that can be used to understand the predictions of any black-box model on images.
A StyleGAN is trained on medical images to provide a mapping between latent vectors and images, and a direction in latent space associated with the black-box model's prediction is then identified.
By shifting the latent representation of an input image along this direction, we can produce a series of new synthetic images with changed predictions.
arXiv Detail & Related papers (2021-01-19T11:13:20Z)
- Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [49.254162397086006]
We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
arXiv Detail & Related papers (2020-07-23T20:39:40Z)
- Concept Bottleneck Models [79.91795150047804]
State-of-the-art models today do not typically support the manipulation of concepts like "the existence of bone spurs".
We revisit the classic idea of first predicting concepts that are provided at training time, and then using these concepts to predict the label.
On x-ray grading and bird identification, concept bottleneck models achieve competitive accuracy with standard end-to-end models.
arXiv Detail & Related papers (2020-07-09T07:47:28Z)