Cardiac MRI Orientation Recognition and Standardization using Deep
Neural Networks
- URL: http://arxiv.org/abs/2308.00615v1
- Date: Mon, 31 Jul 2023 00:01:49 GMT
- Title: Cardiac MRI Orientation Recognition and Standardization using Deep
Neural Networks
- Authors: Ruoxuan Zhen
- Abstract summary: We present a method that employs deep neural networks to categorize and standardize the orientation of cardiac MRI images.
We conducted comprehensive experiments on CMR images from various modalities, including bSSFP, T2, and LGE.
The validation accuracies achieved were 100.0%, 100.0%, and 99.4%, confirming the robustness and effectiveness of our model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Orientation recognition and standardization play a crucial role in the
effectiveness of medical image processing tasks. Deep learning-based methods
have proven highly advantageous in orientation recognition and prediction
tasks. In this paper, we address the challenge of imaging orientation in
cardiac MRI and present a method that employs deep neural networks to
categorize and standardize the orientation. To cater to multiple sequences and
modalities of MRI, we propose a transfer learning strategy, enabling adaptation
of our model from a single modality to diverse modalities. We conducted
comprehensive experiments on CMR images from various modalities, including
bSSFP, T2, and LGE. The validation accuracies achieved were 100.0%, 100.0%,
and 99.4%, confirming the robustness and effectiveness of our model. Our
source code and network models are available at
https://github.com/rxzhen/MSCMR-orient
Related papers
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method for multi-subject fMRI signals using a model called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used for conditioning a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z) - NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z) - Multi-Objective Learning for Deformable Image Registration [0.0]
Deformable image registration (DIR) involves optimization of multiple conflicting objectives.
In this paper, we combine a recently proposed approach for multi-objective (MO) training of neural networks with a well-known deep neural network for DIR.
We evaluate the proposed approach for DIR of pelvic magnetic resonance imaging (MRI) scans.
arXiv Detail & Related papers (2024-02-23T15:42:13Z) - fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for
Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z) - RIDE: Self-Supervised Learning of Rotation-Equivariant Keypoint
Detection and Invariant Description for Endoscopy [83.4885991036141]
RIDE is a learning-based method for rotation-equivariant detection and invariant description.
It is trained in a self-supervised manner on a large curated set of endoscopic images.
It sets a new state-of-the-art performance on matching and relative pose estimation tasks.
arXiv Detail & Related papers (2023-09-18T08:16:30Z) - Orientation recognition and correction of Cardiac MRI with deep neural
network [0.0]
In this paper, the problem of orientation correction in cardiac MRI images is investigated and a framework for orientation recognition via deep neural networks is proposed.
For multi-modality MRI, we introduce a transfer learning strategy to transfer our proposed model from single modality to multi-modality.
We embed the proposed network into the orientation correction command-line tool, which can implement orientation correction on 2D DICOM and 3D NIFTI images.
arXiv Detail & Related papers (2022-11-21T10:37:50Z) - Recognition of Cardiac MRI Orientation via Deep Neural Networks and a
Method to Improve Prediction Accuracy [0.0]
In most medical image processing tasks, the orientation of an image affects the computed result.
We study the problem of recognizing orientation in cardiac MRI and use deep neural networks to solve it.
arXiv Detail & Related papers (2022-11-14T03:35:15Z) - Unsupervised Image Registration Towards Enhancing Performance and
Explainability in Cardiac And Brain Image Analysis [3.5718941645696485]
Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging.
We present an unsupervised deep learning registration methodology which can accurately model affine and non-rigid transformations.
Our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations.
arXiv Detail & Related papers (2022-03-07T12:54:33Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - Recognition and standardization of cardiac MRI orientation via
multi-tasking learning and deep neural networks [16.188681108101196]
We study the problem of imaging orientation in cardiac MRI, and propose a framework to categorize the orientation for recognition and standardization via deep neural networks.
The method uses a new multi-tasking strategy, where both the tasks of cardiac segmentation and orientation recognition are simultaneously achieved.
For multiple sequences and modalities of MRI, we propose a transfer learning strategy, which adapts our proposed model from a single modality to multiple modalities.
arXiv Detail & Related papers (2020-11-17T16:41:31Z) - Modality Compensation Network: Cross-Modal Adaptation for Action
Recognition [77.24983234113957]
We propose a Modality Compensation Network (MCN) to explore the relationships of different modalities.
Our model bridges data from source and auxiliary modalities by a modality adaptation block to achieve adaptive representation learning.
Experimental results reveal that MCN outperforms state-of-the-art approaches on four widely-used action recognition benchmarks.
arXiv Detail & Related papers (2020-01-31T04:51:55Z)