Interpretable Representation Learning of Cardiac MRI via Attribute Regularization
- URL: http://arxiv.org/abs/2406.08282v2
- Date: Fri, 5 Jul 2024 08:29:27 GMT
- Title: Interpretable Representation Learning of Cardiac MRI via Attribute Regularization
- Authors: Maxime Di Folco, Cosmin I. Bercea, Emily Chan, Julia A. Schnabel,
- Abstract summary: Interpretability is essential in medical imaging to ensure that clinicians can comprehend and trust artificial intelligence models.
We propose an Attribute-regularized Soft Introspective Variational Autoencoder that combines attribute regularization of the latent space within the framework of an adversarially trained variational autoencoder.
- Score: 2.0221870195041087
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Interpretability is essential in medical imaging to ensure that clinicians can comprehend and trust artificial intelligence models. Several approaches have recently been considered to encode attributes in the latent space to enhance its interpretability. Notably, attribute regularization aims to encode a set of attributes along the dimensions of a latent representation. However, this approach is based on the variational autoencoder framework and suffers from blurry reconstructions. In this paper, we propose an Attribute-regularized Soft Introspective Variational Autoencoder that combines attribute regularization of the latent space within the framework of an adversarially trained variational autoencoder. We demonstrate, on short-axis cardiac Magnetic Resonance images from the UK Biobank, the ability of the proposed method to address the blurry reconstruction issue of variational autoencoder methods while preserving the interpretability of the latent space.
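The abstract does not spell the regularizer out, but the attribute regularization it builds on (see the last related paper below) is commonly formulated as a monotonicity penalty tying one latent dimension to one attribute within a batch. A minimal sketch, assuming a PyTorch setup and the pairwise-difference formulation of that cited work; the function name and the `delta` hyperparameter are illustrative, not the authors' code:

```python
import torch
import torch.nn.functional as F

def attribute_regularization(z_dim, attr, delta=1.0):
    """Monotonicity penalty between one latent dimension and one attribute.

    z_dim : (B,) values of the regularized latent dimension for a batch
    attr  : (B,) corresponding attribute values (e.g. a cardiac volume)
    """
    # Pairwise differences within the batch
    dz = z_dim.unsqueeze(0) - z_dim.unsqueeze(1)   # (B, B)
    da = attr.unsqueeze(0) - attr.unsqueeze(1)     # (B, B)
    # Penalize latent orderings that disagree with the attribute ordering
    return F.l1_loss(torch.tanh(delta * dz), torch.sign(da))
```

One such term per (latent dimension, attribute) pair is then added to the training objective; the adversarial (Soft Introspective) training described in the abstract is what targets the blurry reconstructions.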
Related papers
- Attribute Regularized Soft Introspective Variational Autoencoder for Interpretable Cardiac Disease Classification [2.4828003234992666]
Interpretability is essential to ensure that clinicians can comprehend and trust artificial intelligence models.
We propose a novel interpretable approach that combines attribute regularization of the latent space within the framework of an adversarially trained variational autoencoder.
arXiv Detail & Related papers (2023-12-14T13:20:57Z)
- Additive Decoders for Latent Variables Identification and Cartesian-Product Extrapolation [27.149540844056023]
We tackle the problems of latent variables identification and "out-of-support" image generation in representation learning.
We show that both are possible for a class of decoders that we call additive.
We show theoretically that additive decoders can generate novel images by recombining observed factors of variation in novel ways (see the sketch after this entry).
arXiv Detail & Related papers (2023-07-05T18:48:20Z) - Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z) - Attri-VAE: attribute-based, disentangled and interpretable
representations of medical images with variational autoencoders [0.5451140334681147]
We propose a VAE approach that includes an attribute regularization term to associate clinical and medical imaging attributes with different regularized dimensions in the generated latent space.
The proposed model provided an excellent trade-off between reconstruction fidelity, disentanglement, and interpretability, outperforming state-of-the-art VAE approaches.
arXiv Detail & Related papers (2022-03-20T00:19:40Z) - Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a Hierarchical VAE designed for this approach that can visually audit a classifier in practical applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Improved Slice-wise Tumour Detection in Brain MRIs by Computing
Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z) - Interpretable Deep Models for Cardiac Resynchronisation Therapy Response
Prediction [8.152884957975354]
We propose a novel framework for image-based classification based on a variational autoencoder (VAE).
The VAE disentangles the latent space based on 'explanations' drawn from existing clinical knowledge.
We demonstrate our framework on the problem of predicting response of patients with cardiomyopathy to cardiac resynchronization therapy (CRT) from cine cardiac magnetic resonance images.
arXiv Detail & Related papers (2020-06-24T15:35:47Z) - MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z) - Attribute-based Regularization of Latent Spaces for Variational
Auto-Encoders [79.68916470119743]
We present a novel method to structure the latent space of a Variational Auto-Encoder (VAE) to encode different continuous-valued attributes explicitly.
This is accomplished by using an attribute regularization loss which enforces a monotonic relationship between the attribute values and the latent code of the dimension along which the attribute is to be encoded (a combined-objective sketch follows this list).
arXiv Detail & Related papers (2020-04-11T20:53:13Z)
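Putting this last entry together with the main abstract, a plausible overall objective simply adds one monotonicity penalty per (latent dimension, attribute) pair to a VAE loss. The following is a self-contained sketch under that assumption; the weights `beta`/`gamma`, the Gaussian-VAE terms, and the attribute names are illustrative, and the adversarial Soft Introspective training of the actual paper is not reproduced here:

```python
import torch
import torch.nn.functional as F

def monotonicity_penalty(z_col, attr, delta=1.0):
    # Same formulation as in the earlier sketch: the ordering of one latent
    # dimension within the batch should match the ordering of one attribute.
    dz = z_col.unsqueeze(0) - z_col.unsqueeze(1)
    da = attr.unsqueeze(0) - attr.unsqueeze(1)
    return F.l1_loss(torch.tanh(delta * dz), torch.sign(da))

def attribute_regularized_vae_loss(x, x_hat, mu, logvar, z, attrs, beta=1.0, gamma=10.0):
    """attrs maps a latent dimension index to a (B,) attribute tensor,
    e.g. {0: lv_volume, 1: myocardial_mass} (hypothetical attribute names)."""
    recon = F.mse_loss(x_hat, x)                                   # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp()) # Gaussian KL term
    attr_reg = sum(monotonicity_penalty(z[:, d], a) for d, a in attrs.items())
    return recon + beta * kld + gamma * attr_reg
```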