MedTet: An Online Motion Model for 4D Heart Reconstruction
- URL: http://arxiv.org/abs/2412.02589v1
- Date: Tue, 03 Dec 2024 17:18:33 GMT
- Title: MedTet: An Online Motion Model for 4D Heart Reconstruction
- Authors: Yihong Chen, Jiancheng Yang, Deniz Sayin Mercadier, Hieu Le, Pascal Fua
- Abstract summary: We present a novel approach to reconstruction of 3D cardiac motion from sparse intraoperative data.
Existing methods can accurately reconstruct 3D organ geometries from full 3D volumetric imaging.
We propose a versatile framework for reconstructing 3D motion from such partial data.
- Score: 59.74234226055964
- License:
- Abstract: We present a novel approach to reconstructing 3D cardiac motion from sparse intraoperative data. While existing methods can accurately reconstruct 3D organ geometries from full 3D volumetric imaging, they cannot be used during surgical interventions, where typically only limited observations, such as a few 2D frames or 1D signals, are available in real time. We propose a versatile framework for reconstructing 3D motion from such partial data. It discretizes the 3D space into a deformable tetrahedral grid with signed distance values, providing implicit unlimited resolution while maintaining explicit control over motion dynamics. Given an initial 3D model reconstructed from pre-operative full volumetric data, our system, equipped with a universal observation encoder, can reconstruct coherent 3D cardiac motion from full 3D volumes, a few 2D MRI slices, or even 1D signals. Extensive experiments on cardiac intervention scenarios demonstrate its ability to generate plausible and anatomically consistent 3D motion reconstructions from various sparse real-time observations, highlighting its potential for multimodal cardiac imaging. Our code and model will be made available at https://github.com/Scalsol/MedTet.
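The core representation described above, a tetrahedral grid whose vertices carry signed distance values and can be displaced explicitly, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; all names are hypothetical, and only the edge-crossing interpolation of a marching-tetrahedra-style surface extraction is shown.

```python
import numpy as np

# Minimal sketch of a deformable tetrahedral grid carrying signed distance
# values, in the spirit of the representation described in the abstract.

# A single tetrahedron: 4 vertices with one SDF value each.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
sdf = np.array([-0.5, 0.5, 0.5, 0.5])  # vertex 0 inside, the rest outside

def edge_crossing(p0, p1, s0, s1):
    """Linearly interpolate the SDF zero level set along one tet edge."""
    t = s0 / (s0 - s1)          # fraction of the edge where the sign flips
    return p0 + t * (p1 - p0)

# Surface point on the edge between vertices 0 and 1 (sign change -0.5 -> 0.5).
surf = edge_crossing(verts[0], verts[1], sdf[0], sdf[1])  # surface at x = 0.5

# Motion is modelled explicitly by displacing the grid vertices; the implicit
# surface (the SDF zero level set) moves with them.
disp = np.array([0.1, 0.0, 0.0])  # e.g. predicted from sparse observations
moved = edge_crossing(verts[0] + disp, verts[1] + disp, sdf[0], sdf[1])
print(surf, moved)  # surface shifts from x = 0.5 to x = 0.6
```

Because the SDF is interpolated inside each tetrahedron, the surface resolution is not tied to the grid resolution, while the vertex displacements remain an explicit, low-dimensional handle on the motion.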
Related papers
- Learning Volumetric Neural Deformable Models to Recover 3D Regional Heart Wall Motion from Multi-Planar Tagged MRI [5.166584704137271]
We introduce a novel class of volumetric neural deformable models ($\upsilon$NDMs).
Our $\upsilon$NDMs represent heart wall geometry and motion through a set of low-dimensional global deformation parameter functions.
We have performed experiments on a large synthetic dataset of 3D regional heart wall motion.
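The idea of a low-dimensional global deformation, a few scalar parameters deforming all wall points at once, can be sketched as follows. The radial-scale-plus-twist parameterization here is purely illustrative, not the paper's actual deformation functions.

```python
import numpy as np

# Hypothetical low-dimensional global deformation: two scalars (a radial
# scale and a twist rate about the z axis) deform every wall point at once.

def global_deform(points, radial_scale, twist_per_z):
    """Scale points radially and twist them about the z axis."""
    x, y, z = points.T
    theta = twist_per_z * z                 # twist angle grows with height
    c, s = np.cos(theta), np.sin(theta)
    xr = radial_scale * (c * x - s * y)
    yr = radial_scale * (s * x + c * y)
    return np.stack([xr, yr, z], axis=1)

pts = np.array([[1.0, 0.0, 0.0],
                [1.0, 0.0, 1.0]])
out = global_deform(pts, radial_scale=0.9, twist_per_z=np.pi / 2)
# the point at z = 0 only shrinks; the point at z = 1 is also rotated 90 deg
print(out.round(3))
```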
arXiv Detail & Related papers (2024-11-21T19:43:20Z)
- DuoLift-GAN: Reconstructing CT from Single-view and Biplanar X-Rays with Generative Adversarial Networks [1.3812010983144802]
We introduce DuoLift Generative Adversarial Networks (DuoLift-GAN), a novel architecture with dual branches that independently elevate 2D images and their features into 3D representations.
These 3D outputs are merged into a unified 3D feature map and decoded into a complete 3D chest volume, enabling richer 3D information capture.
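The dual-branch lift-then-merge scheme can be sketched schematically: each 2D view is independently lifted to a 3D feature volume, and the volumes are fused element-wise. The repeat-along-depth lifting and max fusion below are illustrative stand-ins, not DuoLift-GAN's learned architecture.

```python
import numpy as np

# Schematic of the dual-branch idea: lift each 2D view to 3D independently,
# then merge into one unified 3D feature volume.

def lift_to_3d(feat_2d, depth):
    """Broadcast an HxW feature map along a new depth axis -> DxHxW."""
    return np.repeat(feat_2d[None, :, :], depth, axis=0)

rng = np.random.default_rng(0)
frontal = rng.random((8, 8))   # frontal X-ray features
lateral = rng.random((8, 8))   # lateral X-ray features

vol_f = lift_to_3d(frontal, depth=8)
# the lateral view is lifted along an orthogonal axis before fusion
vol_l = np.transpose(lift_to_3d(lateral, depth=8), (2, 1, 0))
merged = np.maximum(vol_f, vol_l)          # simple element-wise fusion
print(merged.shape)  # (8, 8, 8)
```

A learned decoder would then map `merged` to the chest volume; the point of the dual branches is that each view contributes 3D evidence before fusion rather than after.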
arXiv Detail & Related papers (2024-11-12T17:11:18Z)
- Head Pose Estimation and 3D Neural Surface Reconstruction via Monocular Camera in situ for Navigation and Safe Insertion into Natural Openings [4.592222359553848]
We have chosen 3D Slicer as our base platform and monocular cameras are used as sensors.
We used the neural radiance fields (NeRF) algorithm to complete the 3D model reconstruction of the human head.
The individual's head pose, obtained through single-camera vision, is transmitted in real-time to the scene created within 3D Slicer.
arXiv Detail & Related papers (2024-06-18T20:42:09Z)
- Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, noted as the informed slice to serve the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
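The mask-plus-informed-slice decomposition can be illustrated with a toy: per-label statistics taken from the single informed 2D slice stand in for the patient prior and are propagated through the whole volume via the 3D mask. This is purely schematic, not GEM-3D's generative model.

```python
import numpy as np

# Toy illustration of conditioning 3D synthesis on one patient-specific 2D
# slice plus a 3D segmentation mask.

rng = np.random.default_rng(0)
mask = np.zeros((4, 8, 8), dtype=int)
mask[:, 2:6, 2:6] = 1                       # label 1 = "organ"
# the informed slice: bright organ, dark background, a little noise
informed = np.where(mask[0] == 1, 0.8, 0.1) + 0.01 * rng.standard_normal((8, 8))

vol = np.zeros_like(mask, dtype=float)
for label in np.unique(mask):
    mu = informed[mask[0] == label].mean()  # patient prior from the 2D slice
    vol[mask == label] = mu                 # propagated via the 3D mask
print(round(float(vol[2, 3, 3]), 1))        # ~0.8 inside the organ
```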
arXiv Detail & Related papers (2024-03-19T15:57:04Z)
- MinD-3D: Reconstruct High-quality 3D objects in Human Brain [50.534007259536715]
Recon3DMind is an innovative task aimed at reconstructing 3D visuals from Functional Magnetic Resonance Imaging (fMRI) signals.
We present the fMRI-Shape dataset, which includes data from 14 participants and features 360-degree videos of 3D objects.
We propose MinD-3D, a novel and effective three-stage framework specifically designed to decode the brain's 3D visual information from fMRI signals.
arXiv Detail & Related papers (2023-12-12T18:21:36Z)
- Farm3D: Learning Articulated 3D Animals by Distilling 2D Diffusion [67.71624118802411]
We present Farm3D, a method for learning category-specific 3D reconstructors for articulated objects.
We propose a framework that uses an image generator, such as Stable Diffusion, to generate synthetic training data.
Our network can be used for analysis, including monocular reconstruction, or for synthesis, generating articulated assets for real-time applications such as video games.
arXiv Detail & Related papers (2023-04-20T17:59:34Z)
- DeepRecon: Joint 2D Cardiac Segmentation and 3D Volume Reconstruction via A Structure-Specific Generative Method [12.26150675728958]
We propose an end-to-end latent-space-based framework, DeepRecon, that generates multiple clinically essential outcomes.
Our method identifies the optimal latent representation of the cine image that contains accurate semantic information for cardiac structures.
In particular, our model jointly generates synthetic images with accurate semantic information and segmentation of the cardiac structures.
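The shared-latent idea behind such joint generation, one latent code decoded by two heads so the synthetic image and its segmentation stay mutually consistent, can be sketched minimally. The weights below are random placeholders, not a trained DeepRecon model.

```python
import numpy as np

# Sketch of a shared latent decoded by two heads: one synthesises an image,
# the other predicts a segmentation, so both derive from the same code.

rng = np.random.default_rng(1)
z = rng.standard_normal(16)                 # latent code of a cine frame

W_img = rng.standard_normal((64, 16))       # image decoder head (placeholder)
W_seg = rng.standard_normal((64, 16))       # segmentation head (placeholder)

image = (W_img @ z).reshape(8, 8)           # synthetic image
seg = (W_seg @ z).reshape(8, 8) > 0         # binary cardiac mask
print(image.shape, seg.dtype)  # (8, 8) bool
```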
arXiv Detail & Related papers (2022-06-14T20:46:11Z)
- MoCaNet: Motion Retargeting in-the-wild via Canonicalization Networks [77.56526918859345]
We present a novel framework that brings the 3D motion task from controlled environments to in-the-wild scenarios.
It retargets body motion from a character in a 2D monocular video to a 3D character without using any motion capture system or 3D reconstruction procedure.
arXiv Detail & Related papers (2021-12-19T07:52:05Z)
- DensePose 3D: Lifting Canonical Surface Maps of Articulated Objects to the Third Dimension [71.71234436165255]
We contribute DensePose 3D, a method that can learn such reconstructions in a weakly supervised fashion from 2D image annotations only.
Because it does not require 3D scans, DensePose 3D can be used for learning a wide range of articulated categories such as different animal species.
We show significant improvements compared to state-of-the-art non-rigid structure-from-motion baselines on both synthetic and real data on categories of humans and animals.
arXiv Detail & Related papers (2021-08-31T18:33:55Z)
- 3D Probabilistic Segmentation and Volumetry from 2D projection images [10.32519161805588]
X-Ray imaging is quick, cheap and useful for front-line care assessment and intra-operative real-time imaging.
It suffers from projective information loss and lacks vital information on which many essential diagnostic biomarkers are based.
In this paper we explore probabilistic methods to reconstruct 3D volumetric images from 2D imaging modalities.
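A crude intuition for recovering 3D structure from 2D projections is unfiltered back-projection: each projection is smeared back through the volume and the views' agreement is kept. The sketch below is a toy stand-in for the probabilistic methods the paper explores.

```python
import numpy as np

# Toy back-projection: two orthogonal projections of a volume are combined
# where they agree, giving a rough pseudo-probability of occupancy.

vol_true = np.zeros((8, 8, 8))
vol_true[3:5, 3:5, 3:5] = 1.0               # toy "organ"

proj_z = vol_true.sum(axis=0)               # simulated projection over z
proj_x = vol_true.sum(axis=2)               # simulated projection over x

back = proj_z[None, :, :] * proj_x[:, :, None]  # agreement of the two views
prob = back / back.max()                    # normalise to [0, 1]
print(prob[4, 4, 4], prob[0, 0, 0])         # 1.0 0.0
```

Real methods replace this multiplicative heuristic with a learned probabilistic model, which is what restores the information lost in projection.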
arXiv Detail & Related papers (2020-06-23T08:00:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed papers) and is not responsible for any consequences of its use.