Multi-objective point cloud autoencoders for explainable myocardial
infarction prediction
- URL: http://arxiv.org/abs/2307.11017v1
- Date: Thu, 20 Jul 2023 16:45:16 GMT
- Title: Multi-objective point cloud autoencoders for explainable myocardial
infarction prediction
- Authors: Marcel Beetz, Abhirup Banerjee, Vicente Grau
- Abstract summary: Myocardial infarction is one of the most common causes of death in the world.
Image-based biomarkers fail to capture more complex patterns in the heart's 3D anatomy.
We present the multi-objective point cloud autoencoder as a novel geometric deep learning approach for explainable infarction prediction.
- Score: 4.65840670565844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Myocardial infarction (MI) is one of the most common causes of death in the
world. Image-based biomarkers commonly used in the clinic, such as ejection
fraction, fail to capture more complex patterns in the heart's 3D anatomy and
thus limit diagnostic accuracy. In this work, we present the multi-objective
point cloud autoencoder as a novel geometric deep learning approach for
explainable infarction prediction, based on multi-class 3D point cloud
representations of cardiac anatomy and function. Its architecture consists of
multiple task-specific branches connected by a low-dimensional latent space to
allow for effective multi-objective learning of both reconstruction and MI
prediction, while capturing pathology-specific 3D shape information in an
interpretable latent space. Furthermore, its hierarchical branch design with
point cloud-based deep learning operations enables efficient multi-scale
feature learning directly on high-resolution anatomy point clouds. In our
experiments on a large UK Biobank dataset, the multi-objective point cloud
autoencoder is able to accurately reconstruct multi-temporal 3D shapes with
Chamfer distances between predicted and input anatomies below the underlying
images' pixel resolution. Our method outperforms multiple machine learning and
deep learning benchmarks for the task of incident MI prediction by 19% in terms
of Area Under the Receiver Operating Characteristic curve. In addition, its
task-specific compact latent space exhibits easily separable control and MI
clusters with clinically plausible associations between subject encodings and
corresponding 3D shapes, thus demonstrating the explainability of the
prediction.
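
To make the multi-objective design concrete, below is a minimal, illustrative sketch (not the authors' released code) of a point cloud autoencoder in which a shared low-dimensional latent space feeds a reconstruction branch and an MI classification branch, trained jointly. The encoder, layer sizes, point counts, and loss weighting are assumptions for illustration only; the paper's hierarchical, multi-scale branches are simplified here to a PointNet-style encoder. The Chamfer distance used in the abstract to assess reconstruction quality is also shown.

```python
# Minimal sketch (illustrative, not the authors' implementation) of a
# two-branch point cloud autoencoder with a shared latent space, trained
# jointly on reconstruction (Chamfer distance) and binary MI classification.
import torch
import torch.nn as nn


def chamfer_distance(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds of shape (B, N, 3)."""
    # Pairwise Euclidean distances between predicted and target points.
    d = torch.cdist(pred, target)  # (B, N_pred, N_target)
    # For each point, distance to its nearest neighbour in the other cloud.
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)


class MultiObjectivePointCloudAE(nn.Module):
    """Shared encoder -> latent code -> reconstruction branch + MI branch."""

    def __init__(self, n_points: int = 1024, latent_dim: int = 64):
        super().__init__()
        self.n_points = n_points
        # Simplified PointNet-style encoder: per-point MLP + max pooling.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.to_latent = nn.Linear(256, latent_dim)
        # Reconstruction branch: latent code -> full point cloud.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, n_points * 3),
        )
        # Prediction branch: latent code -> infarction logit.
        self.classifier = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor):
        # x: (B, N, 3) anatomy point cloud.
        feats = self.point_mlp(x.transpose(1, 2))   # (B, 256, N)
        global_feat = feats.max(dim=2).values       # (B, 256)
        z = self.to_latent(global_feat)             # (B, latent_dim)
        recon = self.decoder(z).view(-1, self.n_points, 3)
        mi_logit = self.classifier(z).squeeze(-1)
        return recon, mi_logit, z


# Joint multi-objective loss: Chamfer reconstruction term + BCE for MI labels.
model = MultiObjectivePointCloudAE()
points = torch.rand(8, 1024, 3)             # toy batch of anatomy shapes
labels = torch.randint(0, 2, (8,)).float()  # toy incident-MI labels
recon, logit, _ = model(points)
loss = (chamfer_distance(recon, points).mean()
        + nn.functional.binary_cross_entropy_with_logits(logit, labels))
loss.backward()
```

In this toy setup the joint loss mirrors the multi-objective training described in the abstract: the Chamfer term drives anatomically faithful reconstructions, while the classification term shapes the shared latent space so that control and MI encodings become separable.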
Related papers
- μ-Net: A Deep Learning-Based Architecture for μ-CT Segmentation [2.012378666405002]
X-ray computed microtomography (μ-CT) is a non-destructive technique that can generate high-resolution 3D images of the internal anatomy of medical and biological samples.
Extracting relevant information from these 3D images requires semantic segmentation of the regions of interest.
We propose a novel framework that uses a convolutional neural network (CNN) to automatically segment the full morphology of the heart of Carassius auratus.
arXiv Detail & Related papers (2024-06-24T15:29:08Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments expanding the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z)
- Modeling 3D cardiac contraction and relaxation with point cloud deformation networks [4.65840670565844]
We propose the Point Cloud Deformation Network (PCD-Net) as a novel geometric deep learning approach to model 3D cardiac contraction and relaxation.
We evaluate our approach on a large dataset of over 10,000 cases from the UK Biobank study.
arXiv Detail & Related papers (2023-07-20T14:56:29Z)
- Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z)
- PointMCD: Boosting Deep Point Cloud Encoders via Multi-view Cross-modal Distillation for 3D Shape Recognition [55.38462937452363]
We propose a unified multi-view cross-modal distillation architecture, including a pretrained deep image encoder as the teacher and a deep point encoder as the student.
By pair-wise aligning multi-view visual and geometric descriptors, we can obtain more powerful deep point encoders without exhaustive and complicated network modifications.
arXiv Detail & Related papers (2022-07-07T07:23:20Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Lymphoma segmentation from 3D PET-CT images using a deep evidential network [20.65641432056608]
An automatic evidential segmentation method is proposed to segment lymphomas from 3D Positron Emission Tomography (PET) and Computed Tomography (CT) images.
The architecture is composed of a deep feature-extraction module and an evidential layer.
The proposed combination of deep feature extraction and evidential segmentation is shown to outperform the baseline UNet model.
arXiv Detail & Related papers (2022-01-31T09:34:38Z)
- SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize the ingrained anatomical structures into recurrent patterns; at inference, it can identify anomalies (unseen or modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z)
- Deep learning based geometric registration for medical images: How accurate can we get without visual features? [5.05806585671215]
Deep learning is driving the development of new approaches for image registration.
In this work, we investigate the opposite approach: a deep learning framework for registration based solely on geometric features and optimisation.
Our experimental validation is conducted on complex key-point graphs of inner lung structures, strongly outperforming dense encoder-decoder networks and other point set registration methods.
arXiv Detail & Related papers (2021-03-01T10:15:47Z)
- VC-Net: Deep Volume-Composition Networks for Segmentation and Visualization of Highly Sparse and Noisy Image Data [13.805816310795256]
We present an end-to-end deep learning method, VC-Net, for robust extraction of 3D microvasculature.
The core novelty is to automatically leverage the volume visualization technique of maximum intensity projection (MIP) to enhance 3D data exploration.
A multi-stream convolutional neural network is proposed to learn the 3D volume and 2D MIP features respectively and then explore their inter-dependencies in a joint volume-composition embedding space.
arXiv Detail & Related papers (2020-09-14T04:15:02Z)
- 4D Spatio-Temporal Convolutional Networks for Object Position Estimation in OCT Volumes [69.62333053044712]
3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single OCT images.
We extend 3D CNNs to 4D-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking.
arXiv Detail & Related papers (2020-07-02T12:02:20Z)