SF2Former: Amyotrophic Lateral Sclerosis Identification From
Multi-center MRI Data Using Spatial and Frequency Fusion Transformer
- URL: http://arxiv.org/abs/2302.10859v1
- Date: Tue, 21 Feb 2023 18:16:20 GMT
- Title: SF2Former: Amyotrophic Lateral Sclerosis Identification From
Multi-center MRI Data Using Spatial and Frequency Fusion Transformer
- Authors: Rafsanjany Kushol, Collin C. Luk, Avyarthana Dey, Michael Benatar,
Hannah Briemberg, Annie Dionne, Nicolas Dupré, Richard Frayne, Angela
Genge, Summer Gibson, Simon J. Graham, Lawrence Korngut, Peter Seres, Robert
C. Welsh, Alan Wilman, Lorne Zinman, Sanjay Kalra, Yee-Hong Yang
- Abstract summary: Amyotrophic Lateral Sclerosis (ALS) is a complex neurodegenerative disorder involving motor neuron degeneration.
Deep learning has become a prominent class of machine learning methods in computer vision.
This study introduces a framework named SF2Former that leverages the power of the vision transformer architecture to distinguish ALS subjects from the control group.
- Score: 3.408266725482757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Amyotrophic Lateral Sclerosis (ALS) is a complex neurodegenerative disorder
involving motor neuron degeneration. Significant research has begun to
establish brain magnetic resonance imaging (MRI) as a potential biomarker to
diagnose and monitor the state of the disease. Deep learning has become a
prominent class of machine learning methods in computer vision and has been
successfully employed to solve diverse medical image analysis tasks. However,
deep learning-based methods applied to neuroimaging have not achieved superior
performance in classifying ALS patients from healthy controls because the
structural changes correlated with pathological features are subtle.
Therefore, the critical challenge in deep models is to determine useful
discriminative features with limited training data. By exploiting the
long-range relationships among image features, this study introduces a
framework named SF2Former that leverages the power of the vision transformer
architecture to distinguish ALS subjects from the control group. To further
improve the network's performance, spatial- and frequency-domain information
is combined, since MRI scans are acquired in the frequency domain before being
converted to the spatial domain. The proposed framework is trained on a set of
consecutive coronal 2D slices and uses ImageNet pre-trained weights via
transfer learning. Finally, a majority voting scheme is applied to the coronal
slices of each subject to produce the final classification decision. Our
proposed architecture has been thoroughly assessed
with multi-modal neuroimaging data using two well-organized versions of the
Canadian ALS Neuroimaging Consortium (CALSNIC) multi-center datasets. The
experimental results demonstrate the superiority of our proposed strategy in
terms of classification accuracy compared with several popular deep
learning-based techniques.
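
To make the pipeline described in the abstract concrete, here is a minimal PyTorch sketch, not the authors' SF2Former implementation: it fuses each coronal slice with its log-magnitude 2D spectrum into a three-channel input, classifies the slice with an ImageNet-pretrained ViT-B/16 from torchvision, and majority-votes over a subject's slices. The channel layout, slice preprocessing, and fusion rule are illustrative assumptions.

```python
# Sketch only: spatial + frequency fusion -> pretrained ViT -> per-subject vote.
from collections import Counter

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vit_b_16, ViT_B_16_Weights


def spatial_frequency_fusion(slice_2d: np.ndarray) -> torch.Tensor:
    """Stack a 2D coronal slice with its log-magnitude spectrum as ViT input."""
    spatial = (slice_2d - slice_2d.min()) / (slice_2d.max() - slice_2d.min() + 1e-8)
    spectrum = np.fft.fftshift(np.fft.fft2(slice_2d))
    freq = np.log1p(np.abs(spectrum))
    freq = (freq - freq.min()) / (freq.max() - freq.min() + 1e-8)
    # Three channels to match ImageNet input: spatial, frequency, and their mean
    # (the third channel is a placeholder choice, not the paper's fusion rule).
    fused = np.stack([spatial, freq, (spatial + freq) / 2.0]).astype(np.float32)
    x = torch.from_numpy(fused).unsqueeze(0)  # shape (1, 3, H, W)
    return F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)


def build_classifier(num_classes: int = 2) -> nn.Module:
    """ImageNet-pretrained ViT-B/16 with a new head for ALS vs. control."""
    model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
    model.heads = nn.Linear(768, num_classes)  # transfer learning: replace head
    return model


@torch.no_grad()
def classify_subject(model: nn.Module, coronal_slices) -> int:
    """Predict each consecutive coronal slice, then majority-vote per subject."""
    model.eval()
    votes = [int(model(spatial_frequency_fusion(s)).argmax(dim=1))
             for s in coronal_slices]
    return Counter(votes).most_common(1)[0][0]


if __name__ == "__main__":
    fake_slices = [np.random.rand(256, 256) for _ in range(5)]  # stand-in slices
    print("Predicted class:", classify_subject(build_classifier(), fake_slices))
```

The frequency channel simply gives the network direct access to the k-space-like view of the slice, which the abstract argues is complementary to the spatial view.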
Related papers
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- Applying Conditional Generative Adversarial Networks for Imaging Diagnosis [3.881664394416534]
This study introduces an innovative application of Conditional Generative Adversarial Networks (C-GAN) integrated with Stacked Hourglass Networks (SHGN).
We address the problem of overfitting, common in deep learning models applied to complex imaging datasets, by augmenting data through rotation and scaling.
A hybrid loss function combining L1 and L2 reconstruction losses, enriched with adversarial training, is introduced to refine segmentation processes in intravascular ultrasound (IVUS) imaging (a generic sketch of such a hybrid loss appears after this list).
arXiv Detail & Related papers (2024-07-17T23:23:09Z)
- Neurovascular Segmentation in sOCT with Deep Learning and Synthetic Training Data [4.5276169699857505]
This study demonstrates a synthesis engine for neurovascular segmentation in serial-section optical coherence tomography images.
Our approach comprises two phases: label synthesis and label-to-image transformation.
We demonstrate the efficacy of the former by comparing it to several more realistic sets of training labels, and the latter by an ablation study of synthetic noise and artifact models.
arXiv Detail & Related papers (2024-07-01T16:09:07Z)
- MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce MindFormer, a novel semantic alignment method for multi-subject fMRI signals.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used to condition a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- Two-stage MR Image Segmentation Method for Brain Tumors based on Attention Mechanism [27.08977505280394]
A coordination-spatial attention generative adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN) is proposed.
The performance of the generator is optimized by introducing the Coordinate Attention (CA) module and the Spatial Attention (SA) module.
Extracting the structural and detailed information of the original medical image helps generate the desired image with higher quality.
arXiv Detail & Related papers (2023-04-17T08:34:41Z)
- Hierarchical Graph Convolutional Network Built by Multiscale Atlases for Brain Disorder Diagnosis Using Functional Connectivity [48.75665245214903]
We propose a novel framework to perform multiscale FCN analysis for brain disorder diagnosis.
We first use a set of well-defined multiscale atlases to compute multiscale FCNs.
Then, we utilize biologically meaningful brain hierarchical relationships among the regions in multiscale atlases to perform nodal pooling.
arXiv Detail & Related papers (2022-09-22T04:17:57Z)
- Evaluating U-net Brain Extraction for Multi-site and Longitudinal Preclinical Stroke Imaging [0.4310985013483366]
Convolutional neural networks (CNNs) can improve accuracy and reduce operator time.
We developed a deep-learning mouse brain extraction tool by using a U-net CNN.
We trained, validated, and tested a typical U-net model on 240 multimodal MRI datasets.
arXiv Detail & Related papers (2022-03-11T02:00:27Z)
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
Clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Neural Architecture Search for Gliomas Segmentation on Multimodal Magnetic Resonance Imaging [2.66512000865131]
We propose a neural architecture search (NAS) based solution to brain tumor segmentation tasks on multimodal MRI scans.
The developed solution also integrates normalization and patching strategies tailored for brain MRI processing.
arXiv Detail & Related papers (2020-05-13T14:32:00Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture, which combines object segmentation and convolutional neural networks (CNN).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
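
Below is a generic PyTorch sketch of the kind of hybrid objective mentioned in the Conditional GAN entry above: weighted L1 and L2 reconstruction terms plus an adversarial term from a discriminator. The weighting factors and the discriminator interface are placeholder assumptions, not the cited paper's formulation.

```python
# Sketch of a hybrid L1 + L2 reconstruction loss with an adversarial term.
import torch
import torch.nn as nn


class HybridReconstructionLoss(nn.Module):
    """L_total = w_l1 * L1 + w_l2 * L2 + w_adv * BCE(D(pred), real_label)."""

    def __init__(self, w_l1: float = 1.0, w_l2: float = 1.0, w_adv: float = 0.01):
        super().__init__()
        self.l1 = nn.L1Loss()
        self.l2 = nn.MSELoss()
        self.bce = nn.BCEWithLogitsLoss()
        self.w_l1, self.w_l2, self.w_adv = w_l1, w_l2, w_adv  # placeholder weights

    def forward(self, pred: torch.Tensor, target: torch.Tensor,
                disc_logits_on_pred: torch.Tensor) -> torch.Tensor:
        recon = self.w_l1 * self.l1(pred, target) + self.w_l2 * self.l2(pred, target)
        # Generator term: push the discriminator to label the prediction as real (1).
        adv = self.w_adv * self.bce(disc_logits_on_pred,
                                    torch.ones_like(disc_logits_on_pred))
        return recon + adv


if __name__ == "__main__":
    pred = torch.rand(2, 1, 64, 64)    # e.g. predicted segmentation maps
    target = torch.rand(2, 1, 64, 64)  # ground-truth maps
    disc_logits = torch.randn(2, 1)    # discriminator output for `pred`
    print(float(HybridReconstructionLoss()(pred, target, disc_logits)))
```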
This list is automatically generated from the titles and abstracts of the papers on this site.