Technical Note: Feasibility of translating 3.0T-trained Deep-Learning
Segmentation Models Out-of-the-Box on Low-Field MRI 0.55T Knee-MRI of Healthy
Controls
- URL: http://arxiv.org/abs/2310.17152v1
- Date: Thu, 26 Oct 2023 04:52:25 GMT
- Title: Technical Note: Feasibility of translating 3.0T-trained Deep-Learning
Segmentation Models Out-of-the-Box on Low-Field MRI 0.55T Knee-MRI of Healthy
Controls
- Authors: Rupsa Bhattacharjee, Zehra Akkaya, Johanna Luitjens, Pan Su, Yang
Yang, Valentina Pedoia and Sharmila Majumdar
- Abstract summary: We evaluate the feasibility of applying deep learning (DL) enabled algorithms to quantify bilateral knee biomarkers in healthy controls scanned at 0.55T, compared with 3.0T.
Initial results demonstrate usable-to-good technical feasibility of translating existing quantitative deep-learning-based image segmentation techniques, trained at 3.0T, out-of-the-box to 0.55T knee MRI.
The sustainable, easy-to-install 0.55T low-field MRI, as demonstrated, can be used to evaluate knee cartilage thickness and bone segmentations aided by established DL algorithms out-of-the-box.
- Score: 4.087907070547308
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the current study, our purpose is to evaluate the feasibility
of applying deep learning (DL) enabled algorithms to quantify bilateral knee
biomarkers in healthy controls scanned at 0.55T, compared with 3.0T. The
study assesses the performance of standard in-practice bone and cartilage
segmentation algorithms at 0.55T, both qualitatively and quantitatively,
comparing segmentation performance, areas for improvement, and
compartment-wise cartilage thickness values between 0.55T and 3.0T. Initial
results demonstrate usable-to-good technical feasibility of translating
existing quantitative deep-learning-based image segmentation techniques,
trained at 3.0T, out-of-the-box to 0.55T knee MRI in a multi-vendor
acquisition environment. In segmenting cartilage compartments especially,
the models perform almost equivalently to 3.0T in terms of Likert ranking.
The sustainable, easy-to-install 0.55T low-field MRI can thus, as
demonstrated, initially be used to evaluate knee cartilage thickness and
bone segmentations with established DL algorithms trained at higher field
strengths, out-of-the-box. This could be useful at widespread point-of-care
locations that lack radiologists to manually segment low-field images, at
least until a sufficient pool of low-field data has been collated. With
further fine-tuning on manually labeled low-field data, or by synthesizing
higher-SNR images from low-field acquisitions, OA biomarker quantification
performance could potentially be improved further.
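The abstract does not state which agreement statistics underpin the 0.55T vs. 3.0T thickness comparison. As an illustrative sketch only, one common way to compare paired compartment-wise thickness measurements between field strengths is a Bland-Altman-style analysis; all numbers below are invented, not values from the study:

```python
import numpy as np

# Hypothetical paired compartment-wise cartilage thickness values in mm,
# one value per subject at each field strength. All numbers are invented
# for illustration; they are not from the study.
t_3T  = np.array([2.10, 1.95, 2.30, 2.05, 2.20])   # 3.0T measurements
t_055 = np.array([2.05, 2.00, 2.25, 1.98, 2.15])   # 0.55T measurements

diff = t_055 - t_3T                 # per-subject difference
bias = diff.mean()                  # mean difference (systematic bias)
half = 1.96 * diff.std(ddof=1)      # half-width of 95% limits of agreement
print(f"bias = {bias:+.3f} mm, "
      f"limits of agreement = [{bias - half:.3f}, {bias + half:.3f}] mm")
```

A bias near zero with narrow limits of agreement would indicate that the low-field measurements track the 3.0T reference closely.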
Related papers
- Triad: Vision Foundation Model for 3D Magnetic Resonance Imaging [3.7942449131350413]
We propose Triad, a vision foundation model for 3D MRI.
Triad adopts a widely used autoencoder architecture to learn robust representations from 131,170 3D MRI volumes.
We evaluate Triad across three tasks, namely, organ/tumor segmentation, organ/cancer classification, and medical image registration.
arXiv Detail & Related papers (2025-02-19T19:31:52Z)
- SAMRI-2: A Memory-based Model for Cartilage and Meniscus Segmentation in 3D MRIs of the Knee Joint [0.7879983966759583]
This study introduces a deep learning (DL) method for cartilage and meniscus segmentation from 3D MRIs using memory-based VFMs.
We trained four AI models (a CNN-based 3D-VNet, two automatic transformer-based models, SaMRI2D and SaMRI3D, and a transformer-based promptable memory-based VFM, SAMRI-2) on 3D knee MRIs from 270 patients.
The SAMRI-2 model, trained with HSS, outperformed all other models, achieving an average improvement of 5 points, with a peak improvement of 12 points for tibial cartilage.
arXiv Detail & Related papers (2025-02-14T21:18:01Z)
- Comparative Study of Probabilistic Atlas and Deep Learning Approaches for Automatic Brain Tissue Segmentation from MRI Using N4 Bias Field Correction and Anisotropic Diffusion Pre-processing Techniques [0.0]
This study provides a comparative analysis of various segmentation models, including Probabilistic ATLAS, U-Net, nnU-Net, and LinkNet.
Our results demonstrate that the 3D nnU-Net model outperforms the others, achieving the highest mean Dice coefficient score (0.937 ± 0.012).
The findings highlight the superiority of nnU-Net models in brain tissue segmentation, particularly when combined with N4 Bias Field Correction and Anisotropic Diffusion pre-processing techniques.
arXiv Detail & Related papers (2024-11-08T10:07:03Z)
- 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation [52.699139151447945]
We propose a novel adaptation method for transferring the segment anything model (SAM) from 2D to 3D for promptable medical image segmentation.
Our model can outperform domain state-of-the-art medical image segmentation models on 3 out of 4 tasks, specifically by 8.25%, 29.87%, and 10.11% for kidney tumor, pancreas tumor, and colon cancer segmentation, and achieves similar performance for liver tumor segmentation.
arXiv Detail & Related papers (2023-06-23T12:09:52Z)
- Generalizable synthetic MRI with physics-informed convolutional networks [57.628770497971246]
We develop a physics-informed deep learning-based method to synthesize multiple brain magnetic resonance imaging (MRI) contrasts from a single five-minute acquisition.
We investigate its ability to generalize to arbitrary contrasts to accelerate neuroimaging protocols.
arXiv Detail & Related papers (2023-05-21T21:16:20Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders [50.689585476660554]
We propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling.
Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models.
arXiv Detail & Related papers (2022-12-14T06:04:18Z)
- CNN-based fully automatic wrist cartilage volume quantification in MR Image [55.41644538483948]
The U-net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z)
- Automated Grading of Radiographic Knee Osteoarthritis Severity Combined with Joint Space Narrowing [9.56244753914375]
Assessment of knee osteoarthritis (KOA) severity on knee X-rays is a central criterion for the use of total knee arthroplasty.
We propose a novel deep learning-based five-step algorithm to automatically grade KOA from posterior-anterior (PA) views of radiographs.
arXiv Detail & Related papers (2022-03-16T19:54:47Z)
- A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
" 2018 Left Atrium Challenge" using 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analyse of the submitted algorithms using technical and biological metrics was performed.
Results show the top method achieved a dice score of 93.2% and a mean surface to a surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
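Several of the entries above report segmentation quality as a Dice score (e.g. the nnU-Net and Left Atrium Challenge results). As a minimal illustrative sketch, not code from any of the listed papers, the metric on binary masks can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 2D masks standing in for 3D segmentation volumes.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 voxels, overlap 4
print(round(dice_score(a, b), 3))  # → 0.8
```

A score of 1.0 means the predicted and reference masks coincide exactly; challenge results such as the 93.2% above are typically reported as this coefficient averaged over test cases.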
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.