Learning Whole Heart Mesh Generation From Patient Images For
Computational Simulations
- URL: http://arxiv.org/abs/2203.10517v2
- Date: Wed, 8 Nov 2023 10:12:08 GMT
- Title: Learning Whole Heart Mesh Generation From Patient Images For
Computational Simulations
- Authors: Fanwei Kong, Shawn Shadden
- Abstract summary: We present a fast and automated deep-learning method to construct simulation-suitable models of the heart from medical images.
The approach constructs meshes from 3D patient images by learning to deform a small set of deformation handles on a whole heart template.
- Score: 2.2496714271898166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Patient-specific cardiac modeling combines geometries of the heart derived
from medical images and biophysical simulations to predict various aspects of
cardiac function. However, generating simulation-suitable models of the heart
from patient image data often requires complicated procedures and significant
human effort. We present a fast and automated deep-learning method to construct
simulation-suitable models of the heart from medical images. The approach
constructs meshes from 3D patient images by learning to deform a small set of
deformation handles on a whole heart template. For both 3D CT and MR data, this
method achieves promising accuracy for whole heart reconstruction, consistently
outperforming prior methods in constructing simulation-suitable meshes of the
heart. When evaluated on time-series CT data, this method produced more
anatomically and temporally consistent geometries than prior methods, and was
able to produce geometries that better satisfy modeling requirements for
cardiac flow simulations. Our source code and pretrained networks are available
at https://github.com/fkong7/HeartDeformNets.
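The core idea of the abstract — predicting displacements for a small set of deformation handles and interpolating them over a whole-heart template — can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: the network that predicts handle displacements from the image is replaced here by fixed toy values, and the interpolation weights (in practice something like precomputed biharmonic coordinates) are invented for a 4-vertex patch.

```python
import numpy as np

def deform_template(vertices, handle_weights, handle_displacements):
    """Deform a template mesh by interpolating sparse handle displacements.

    vertices:             (n_vertices, 3) template mesh coordinates
    handle_weights:       (n_vertices, n_handles) precomputed interpolation
                          weights (e.g. biharmonic coordinates); rows sum to 1
    handle_displacements: (n_handles, 3) displacements; in the real method
                          these would be predicted by a network from the image
    """
    return vertices + handle_weights @ handle_displacements

# Toy example: a 4-vertex patch controlled by 2 handles.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
weights = np.array([[1.0, 0.0], [0.5, 0.5], [0.5, 0.5], [0.0, 1.0]])
disp = np.array([[0., 0., 1.], [0., 0., -1.]])  # push the two handles apart in z
deformed = deform_template(verts, weights, disp)
```

Because only handle displacements are predicted, the template's mesh connectivity and element quality carry over to every output unchanged — which is what makes the resulting meshes simulation-suitable.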
Related papers
- Multi-view Hybrid Graph Convolutional Network for Volume-to-mesh Reconstruction in Cardiovascular MRI [44.53796049862948]
We introduce HybridVNet, a novel architecture for direct image-to-mesh extraction.
We show it can efficiently handle surface and volumetric meshes by encoding them as graph structures.
Our model combines traditional convolutional networks with variational graph generative models, deep supervision and mesh-specific regularisation.
arXiv Detail & Related papers (2023-11-22T21:51:29Z)
- Shape of my heart: Cardiac models through learned signed distance functions [33.29148402516714]
In this work, the cardiac shape is reconstructed by means of three-dimensional deep signed distance functions with Lipschitz regularity.
For this purpose, the shapes of cardiac MRI reconstructions are learned to model the spatial relation of multiple chambers.
We demonstrate that this approach is also capable of reconstructing anatomical models from partial data, such as point clouds from a single ventricle.
arXiv Detail & Related papers (2023-08-31T09:02:53Z)
- Neural Deformable Models for 3D Bi-Ventricular Heart Shape Reconstruction and Modeling from 2D Sparse Cardiac Magnetic Resonance Imaging [8.712578745214138]
We propose a novel neural deformable model (NDM) targeting the reconstruction and modeling of the 3D bi-ventricular shape of the heart.
We model the bi-ventricular shape using blended deformable superquadrics, which are parameterized by a set of geometric parameter functions.
Our NDM can learn to densify a sparse cardiac point cloud with arbitrary scales and generate high-quality triangular meshes automatically.
arXiv Detail & Related papers (2023-07-15T03:11:16Z)
- Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z)
- A Generative Shape Compositional Framework to Synthesise Populations of Virtual Chimaeras [52.33206865588584]
We introduce a generative shape model for complex anatomical structures, learnable from unpaired datasets.
We build virtual chimaeras from databases of whole-heart shape assemblies that each contribute samples for heart substructures.
Our approach significantly outperforms a PCA-based shape model (trained with complete data) in terms of generalisability and specificity.
arXiv Detail & Related papers (2022-10-04T13:36:52Z)
- Solving Inverse Problems in Medical Imaging with Score-Based Generative Models [87.48867245544106]
Reconstructing medical images from partial measurements is an important inverse problem in Computed Tomography (CT) and Magnetic Resonance Imaging (MRI).
Existing solutions based on machine learning typically train a model to directly map measurements to medical images.
We propose a fully unsupervised technique for inverse problem solving, leveraging the recently introduced score-based generative models.
arXiv Detail & Related papers (2021-11-15T05:41:12Z)
- CNN-based Cardiac Motion Extraction to Generate Deformable Geometric Left Ventricle Myocardial Models from Cine MRI [0.0]
We propose a framework for the development of patient-specific geometric models of LV myocardium from cine cardiac MR images.
We use the VoxelMorph-based convolutional neural network (CNN) to propagate the isosurface mesh and volume mesh of the end-diastole frame to the subsequent frames of the cardiac cycle.
arXiv Detail & Related papers (2021-03-30T21:34:29Z)
- A Deep-Learning Approach For Direct Whole-Heart Mesh Reconstruction [1.8047694351309207]
We propose a novel deep-learning-based approach that directly predicts whole heart surface meshes from volumetric CT and MR image data.
Our method demonstrates promising performance in generating high-resolution, high-quality whole heart reconstructions.
arXiv Detail & Related papers (2021-02-16T00:39:43Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
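Several of the papers above ("Shape of my heart" in particular) represent cardiac anatomy implicitly as a signed distance function (SDF) rather than as an explicit mesh: a function that is negative inside the shape, positive outside, and zero on the surface. As a minimal, hypothetical sketch — the learned, Lipschitz-regularized neural SDF is replaced here by an analytic sphere, and the grid size is arbitrary — the representation can be illustrated as:

```python
import numpy as np

def sphere_sdf(points, center=(0.0, 0.0, 0.0), radius=1.0):
    """Analytic signed distance to a sphere: negative inside, positive
    outside.  Stands in for a learned neural SDF of a cardiac chamber."""
    return np.linalg.norm(points - np.asarray(center), axis=-1) - radius

# Sample the SDF on a coarse grid; the zero level set is the chamber surface,
# which a mesher (e.g. marching cubes) could extract as a triangle mesh.
xs = np.linspace(-1.5, 1.5, 16)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
values = sphere_sdf(grid.reshape(-1, 3)).reshape(16, 16, 16)

inside_fraction = np.mean(values < 0)  # fraction of voxels inside the shape
```

Because an SDF is defined everywhere in space, it can be evaluated at arbitrary query points — which is why such models can complete anatomies from partial observations like single-ventricle point clouds.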
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.