Review of data types and model dimensionality for cardiac DTI SMS-related artefact removal
- URL: http://arxiv.org/abs/2209.09522v1
- Date: Tue, 20 Sep 2022 07:41:24 GMT
- Title: Review of data types and model dimensionality for cardiac DTI SMS-related artefact removal
- Authors: Michael Tanzer, Sea Hee Yook, Guang Yang, Daniel Rueckert, Sonia Nielles-Vallespin
- Abstract summary: We compare the effect of several input types (magnitude images vs complex images), multiple dimensionalities (2D vs 3D operations), and multiple slice configurations (single slice vs multi-slice) on the performance of a model trained to remove artefacts.
Despite our initial intuition, our experiments show that, for a fixed number of parameters, simpler 2D real-valued models outperform their more advanced 3D or complex counterparts.
- Score: 7.497343031315105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As diffusion tensor imaging (DTI) gains popularity in cardiac imaging due to
its unique ability to non-invasively assess the cardiac microstructure, deep
learning-based Artificial Intelligence is becoming a crucial tool in mitigating
some of its drawbacks, such as long scan times. As often happens in
fast-paced research environments, much emphasis has been put on demonstrating
the capabilities of deep learning, while not enough time has been spent
investigating what input and architectural properties would benefit cardiac DTI
acceleration the most. In this work, we compare the effect of several input
types (magnitude images vs complex images), multiple dimensionalities (2D vs 3D
operations), and multiple slice configurations (single slice vs multi-slice) on the
performance of a model trained to remove artefacts caused by a simultaneous
multi-slice (SMS) acquisition. Despite our initial intuition, our experiments
show that, for a fixed number of parameters, simpler 2D real-valued models
outperform their more advanced 3D or complex counterparts. The best performance
is, however, obtained by a real-valued model trained using both the magnitude
and phase components of the acquired data. We believe this behaviour to be due
to real-valued models making better use of the lower number of parameters, and
to 3D models not being able to exploit the spatial information because of the
low SMS acceleration factor used in our experiments.
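The best-performing configuration feeds the magnitude and phase of the acquired data to a real-valued model as separate channels. A minimal sketch of that input construction (function name and toy data are hypothetical, not from the paper):

```python
import numpy as np

def to_mag_phase_channels(complex_img):
    """Stack magnitude and phase of a complex MR slice as two real channels.

    complex_img: (H, W) complex-valued array (a hypothetical reconstructed slice).
    Returns a (2, H, W) real-valued array suitable for a real-valued 2D CNN.
    """
    magnitude = np.abs(complex_img)
    phase = np.angle(complex_img)  # wrapped to [-pi, pi]
    return np.stack([magnitude, phase], axis=0)

# Example: a toy 4x4 complex "slice"
rng = np.random.default_rng(0)
slice_c = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = to_mag_phase_channels(slice_c)
print(x.shape)  # (2, 4, 4)
```

A complex-valued model would instead consume `slice_c` directly; the comparison in the paper is between these two representations at a fixed parameter budget.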
Related papers
- Utilizing Machine Learning and 3D Neuroimaging to Predict Hearing Loss: A Comparative Analysis of Dimensionality Reduction and Regression Techniques [0.0]
We explored machine learning approaches for predicting hearing loss thresholds from 3D images of the brain's gray matter.
In the first phase, we used a 3D CNN model to reduce high-dimensional input into latent space.
In the second phase, we utilized this model to reduce input into rich features.
arXiv Detail & Related papers (2024-04-30T18:39:41Z)
- 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z)
- Spatiotemporal Modeling Encounters 3D Medical Image Analysis: Slice-Shift UNet with Multi-View Fusion [0.0]
We propose a new 2D-based model, dubbed Slice-Shift UNet, which encodes three-dimensional features at the complexity of a 2D CNN.
More precisely, multi-view features are collaboratively learned by performing 2D convolutions along the three planes of a volume.
The effectiveness of our approach is validated on the Multi-Modality Abdominal Multi-Organ Segmentation (AMOS) and Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) datasets.
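The idea of 2D convolutions along the three planes of a volume can be sketched as follows; this is a toy stand-in (function name, fusion by averaging, and the smoothing kernel are all assumptions, not the paper's architecture):

```python
import numpy as np
from scipy.ndimage import convolve

def multi_view_filter(volume, kernel2d):
    """Apply one 2D kernel along the three orthogonal planes of a 3D volume
    and fuse the three views by averaging."""
    k = kernel2d
    # Embed the 2D kernel in 3D once per plane orientation.
    axial    = convolve(volume, k[np.newaxis, :, :], mode="nearest")  # (1, kH, kW)
    coronal  = convolve(volume, k[:, np.newaxis, :], mode="nearest")  # (kH, 1, kW)
    sagittal = convolve(volume, k[:, :, np.newaxis], mode="nearest")  # (kH, kW, 1)
    return (axial + coronal + sagittal) / 3.0

vol = np.arange(27, dtype=float).reshape(3, 3, 3)
smooth = multi_view_filter(vol, np.ones((3, 3)) / 9.0)
print(smooth.shape)  # (3, 3, 3)
```

Each plane sees full-resolution in-plane context at the parameter cost of a single 2D kernel, which is the trade-off the summary describes.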
arXiv Detail & Related papers (2023-07-24T14:53:23Z)
- Video Pretraining Advances 3D Deep Learning on Chest CT Tasks [63.879848037679224]
Pretraining on large natural image classification datasets has aided model development on data-scarce 2D medical tasks.
These 2D models have been surpassed by 3D models on 3D computer vision benchmarks.
We show video pretraining for 3D models can enable higher performance on smaller datasets for 3D medical tasks.
arXiv Detail & Related papers (2023-04-02T14:46:58Z)
- Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis [0.0]
We present a simple yet effective 2D method to handle 3D data while efficiently embedding the 3D knowledge during training.
Our method generates a "super image" by stitching the slices of the 3D image side by side.
While attaining results equal, if not superior, to those of 3D networks using only 2D operations, model complexity is reduced by around threefold.
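Stitching slices side by side amounts to tiling a (D, H, W) volume into one large 2D image. A minimal sketch (the grid layout and zero-padding of incomplete grids are assumptions; the paper's exact arrangement may differ):

```python
import numpy as np

def to_super_image(volume, grid_cols):
    """Tile the D slices of a (D, H, W) volume into a single 2D image
    with grid_cols slices per row, zero-padding any unfilled grid cells."""
    d, h, w = volume.shape
    grid_rows = -(-d // grid_cols)  # ceil division
    padded = np.zeros((grid_rows * grid_cols, h, w), dtype=volume.dtype)
    padded[:d] = volume
    # (rows, cols, H, W) -> (rows, H, cols, W) -> (rows*H, cols*W)
    grid = padded.reshape(grid_rows, grid_cols, h, w)
    return grid.transpose(0, 2, 1, 3).reshape(grid_rows * h, grid_cols * w)

vol = np.arange(2 * 3 * 4).reshape(2, 3, 4)
super_img = to_super_image(vol, grid_cols=2)
print(super_img.shape)  # (3, 8)
```

The resulting 2D image can then be fed to an ordinary 2D network, which is how the summary's complexity reduction is obtained.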
arXiv Detail & Related papers (2022-05-05T09:59:03Z)
- Homography Loss for Monocular 3D Object Detection [54.04870007473932]
A differentiable loss function, termed Homography Loss, is proposed to achieve this goal by exploiting both 2D and 3D information.
Our method outperforms the other state-of-the-art methods by a large margin on the KITTI 3D datasets.
arXiv Detail & Related papers (2022-04-02T03:48:03Z)
- Fast mesh denoising with data driven normal filtering using deep variational autoencoders [6.25118865553438]
We propose a fast and robust denoising method for dense 3D scanned industrial models.
The proposed approach employs conditional variational autoencoders to effectively filter face normals.
For 3D models with more than 1e4 faces, the presented pipeline is twice as fast as methods with equivalent reconstruction error.
arXiv Detail & Related papers (2021-11-24T20:25:15Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- Synthetic Training for Monocular Human Mesh Recovery [100.38109761268639]
This paper aims to estimate 3D mesh of multiple body parts with large-scale differences from a single RGB image.
The main challenge is lacking training data that have complete 3D annotations of all body parts in 2D images.
We propose a depth-to-scale (D2S) projection to incorporate the depth difference into the projection function to derive per-joint scale variants.
arXiv Detail & Related papers (2020-10-27T03:31:35Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- 3D Human Pose Estimation using Spatio-Temporal Networks with Explicit Occlusion Training [40.933783830017035]
Estimating 3D poses from monocular video remains a challenging task, despite the significant progress made in recent years.
We introduce a spatio-temporal video network for robust 3D human pose estimation.
We apply multi-scale spatial features for 2D joint or keypoint prediction in each individual frame, and multi-stride temporal convolutional networks (TCNs) to estimate 3D joints or keypoints.
arXiv Detail & Related papers (2020-04-07T09:12:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.