ASL to PET Translation by a Semi-supervised Residual-based
Attention-guided Convolutional Neural Network
- URL: http://arxiv.org/abs/2103.05116v1
- Date: Mon, 8 Mar 2021 22:06:02 GMT
- Authors: Sahar Yousefi, Hessam Sokooti, Wouter M. Teeuwisse, Dennis F.R.
Heijtel, Aart J. Nederveen, Marius Staring, Matthias J.P. van Osch
- Abstract summary: Arterial Spin Labeling (ASL) MRI is a non-invasive, non-radioactive, and relatively cheap imaging technique for brain hemodynamic measurements.
We propose a convolutional neural network (CNN) based model for translating ASL to PET images.
- Score: 3.2480194378336464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Positron Emission Tomography (PET) is an imaging method that can assess
physiological function rather than structural disturbances by measuring
cerebral perfusion or glucose consumption. However, this imaging technique
relies on the injection of radioactive tracers and is expensive. In contrast,
Arterial Spin Labeling (ASL) MRI is a non-invasive, non-radioactive, and
relatively cheap imaging technique for brain hemodynamic measurements, which
allows quantification to some extent. In this paper we propose a convolutional
neural network (CNN) based model for translating ASL to PET images, which could
benefit patients as well as the healthcare system in terms of expenses and
adverse side effects. However, acquiring a sufficient number of paired ASL-PET
scans for training a CNN is prohibitively difficult for many reasons. To tackle this
problem, we present a new semi-supervised multitask CNN which is trained on
both paired data, i.e. ASL and PET scans, and unpaired data, i.e. only ASL
scans, which alleviates the problem of training a network on limited paired
data. Moreover, we present a new residual-based attention-guided mechanism to
improve contextual features during training. We also show that incorporating
T1-weighted scans as an input, owing to their high resolution and rich
anatomical information, improves the results. We performed a
two-stage evaluation based on quantitative image metrics by conducting a 7-fold
cross validation followed by a double-blind observer study. The proposed
network achieved structural similarity index measure (SSIM), mean squared error
(MSE) and peak signal-to-noise ratio (PSNR) values of $0.85\pm0.08$,
$0.01\pm0.01$, and $21.8\pm4.5$ respectively, for translating from 2D ASL and
T1-weighted images to PET data. The proposed model is publicly available via
https://github.com/yousefis/ASL2PET.
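The reported SSIM, MSE, and PSNR values follow standard definitions, which can be sketched in plain NumPy. In the snippet below the arrays are random placeholders rather than real PET data, and `global_ssim` is a simplified single-window SSIM variant (statistics over the whole image) rather than the windowed SSIM the paper presumably uses:

```python
# Minimal sketch of the reported evaluation metrics: MSE, PSNR, and a
# simplified single-window SSIM. Arrays here are placeholders, not PET data.
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB."""
    return float(10.0 * np.log10(data_range ** 2 / mse(a, b)))

def global_ssim(a: np.ndarray, b: np.ndarray, data_range: float = 1.0) -> float:
    """Single-window SSIM: means/variances computed over the whole image."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2)
    return float(num / den)

# Hypothetical usage: compare a predicted slice against ground truth.
rng = np.random.default_rng(0)
gt = rng.random((64, 64))                       # stand-in ground-truth slice
pred = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0.0, 1.0)
print(mse(gt, pred), psnr(gt, pred), global_ssim(gt, pred))
```

For publication-grade numbers, a library implementation such as scikit-image's windowed `structural_similarity` is preferable to this global variant.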
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q chromosome arms is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z)
- PEMMA: Parameter-Efficient Multi-Modal Adaptation for Medical Image Segmentation [5.056996354878645]
When both CT and PET scans are available, it is common to combine them as two channels of the input to the segmentation model.
This method requires both scan types during training and inference, posing a challenge due to the limited availability of PET scans.
We propose a parameter-efficient multi-modal adaptation framework for lightweight upgrading of a transformer-based segmentation model.
arXiv Detail & Related papers (2024-04-21T16:29:49Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography
Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for LDCT, in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I with sparse views degrades, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality across a range of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z)
- Multi-delay arterial spin-labeled perfusion estimation with biophysics simulation and deep learning [3.906145608074501]
A 3D U-Net (QTMnet) was trained to estimate perfusion from 4D tracer propagation images.
Relative error of the synthetic brain ASL image was 7.04% for perfusion (Q), lower than the errors of the single-delay ASL model (25.15%) and the multi-delay ASL model (12.62%).
arXiv Detail & Related papers (2023-11-17T16:55:14Z)
- Contrastive Diffusion Model with Auxiliary Guidance for Coarse-to-Fine PET Reconstruction [62.29541106695824]
This paper presents a coarse-to-fine PET reconstruction framework that consists of a coarse prediction module (CPM) and an iterative refinement module (IRM).
By delegating most of the computational overhead to the CPM, the overall sampling speed of our method can be significantly improved.
Two additional strategies, i.e., an auxiliary guidance strategy and a contrastive diffusion strategy, are proposed and integrated into the reconstruction process.
arXiv Detail & Related papers (2023-08-20T04:10:36Z)
- Self-Supervised Pre-Training for Deep Image Prior-Based Robust PET Image Denoising [0.5999777817331317]
Deep image prior (DIP) has been successfully applied to positron emission tomography (PET) image restoration.
We propose a self-supervised pre-training model to improve the DIP-based PET image denoising performance.
arXiv Detail & Related papers (2023-02-27T06:55:00Z)
- PCRLv2: A Unified Visual Information Preservation Framework for Self-supervised Pre-training in Medical Image Analysis [56.63327669853693]
We propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics.
We also address the preservation of scale information, a powerful tool in aiding image understanding.
The proposed unified SSL framework surpasses its self-supervised counterparts on various tasks.
arXiv Detail & Related papers (2023-01-02T17:47:27Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Blindness (Diabetic Retinopathy) Severity Scale Detection [0.0]
Diabetic retinopathy (DR) is a severe complication of diabetes that can cause permanent blindness.
Timely diagnosis and treatment of DR are critical to avoid total loss of vision.
We propose a novel deep learning based method for automatic screening of retinal fundus images.
arXiv Detail & Related papers (2021-10-04T11:31:15Z)
- Accurate and Efficient Intracranial Hemorrhage Detection and Subtype Classification in 3D CT Scans with Convolutional and Long Short-Term Memory Neural Networks [20.4701676109641]
We present our system for the RSNA Intracranial Hemorrhage Detection challenge.
The proposed system is based on a lightweight deep neural network architecture composed of a convolutional neural network (CNN) and a long short-term memory (LSTM) network.
We report a weighted mean log loss of 0.04989 on the final test set, which places us in the top 30 ranking (2%) from a total of 1345 participants.
arXiv Detail & Related papers (2020-08-01T17:28:25Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture, which combines object segmentation and convolutional neural networks (CNNs).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.