Dynamic Image for 3D MRI Image Alzheimer's Disease Classification
- URL: http://arxiv.org/abs/2012.00119v1
- Date: Mon, 30 Nov 2020 21:39:32 GMT
- Title: Dynamic Image for 3D MRI Image Alzheimer's Disease Classification
- Authors: Xin Xing, Gongbo Liang, Hunter Blanton, Muhammad Usman Rafique, Chris
Wang, Ai-Ling Lin, Nathan Jacobs
- Abstract summary: Training a 3D convolutional neural network (CNN) is time-consuming and computationally expensive.
We make use of approximate rank pooling to transform the 3D MRI image volume into a 2D image to use as input to a 2D CNN.
Our proposed CNN model achieves $9.5\%$ better Alzheimer's disease classification accuracy than the baseline 3D models.
- Score: 26.296422774282156
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose to apply a 2D CNN architecture to 3D MRI image Alzheimer's disease
classification. Training a 3D convolutional neural network (CNN) is
time-consuming and computationally expensive. We make use of approximate rank
pooling to transform the 3D MRI image volume into a 2D image to use as input to
a 2D CNN. We show our proposed CNN model achieves $9.5\%$ better Alzheimer's
disease classification accuracy than the baseline 3D models. We also show that
our method allows for efficient training, requiring only 20% of the training
time compared to 3D CNN models. The code is available online:
https://github.com/UkyVision/alzheimer-project.
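The approximate rank pooling used to collapse the 3D volume into a single 2D "dynamic image" can be sketched as follows. This is a minimal NumPy sketch of the standard closed-form coefficients from dynamic-image networks (Bilen et al.); the paper's exact preprocessing and normalization may differ:

```python
import numpy as np

def dynamic_image(volume):
    """Collapse a (T, H, W) volume of T slices into one 2D dynamic image
    via approximate rank pooling: a weighted sum over slices whose
    coefficients encode slice order."""
    T = volume.shape[0]
    # Harmonic numbers H_0..H_T, with H_0 = 0.
    H = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, T + 1))))
    t = np.arange(1, T + 1)
    # Approximate rank pooling coefficients:
    # alpha_t = 2(T - t + 1) - (T + 1)(H_T - H_{t-1})
    alpha = 2.0 * (T - t + 1) - (T + 1) * (H[T] - H[t - 1])
    # Weighted sum over the slice axis yields a single (H, W) image.
    return np.tensordot(alpha, volume, axes=(0, 0))
```

Note that the coefficients sum to zero, so a constant volume maps to a zero image: the dynamic image encodes change across slices rather than mean intensity.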
Related papers
- PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [114.47216525866435]
We introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representations.
For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, implying its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z)
- Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders [52.91248611338202]
We propose an alternative to obtain superior 3D representations from 2D pre-trained models via Image-to-Point Masked Autoencoders, named as I2P-MAE.
By self-supervised pre-training, we leverage the well learned 2D knowledge to guide 3D masked autoencoding.
I2P-MAE attains the state-of-the-art 90.11% accuracy, +3.68% to the second-best, demonstrating superior transferable capacity.
arXiv Detail & Related papers (2022-12-13T17:59:20Z)
- ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding [110.07170245531464]
Current 3D models are limited by datasets with a small number of annotated data and a pre-defined set of categories.
Recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language.
We learn a unified representation of images, texts, and 3D point clouds by pre-training with object triplets from the three modalities.
arXiv Detail & Related papers (2022-12-10T01:34:47Z)
- Decomposing 3D Neuroimaging into 2+1D Processing for Schizophrenia Recognition [25.80846093248797]
We propose to process the 3D data with a 2+1D framework so that powerful deep 2D Convolutional Neural Networks (CNNs) pre-trained on the large-scale ImageNet dataset can be exploited for 3D neuroimaging recognition.
Specifically, 3D volumes of Magnetic Resonance Imaging (MRI) metrics are decomposed to 2D slices according to neighboring voxel positions.
Global pooling is applied to remove redundant information as the activation patterns are sparsely distributed over feature maps.
Channel-wise and slice-wise convolutions are proposed to aggregate the contextual information in the third dimension unprocessed by the 2D CNN model.
arXiv Detail & Related papers (2022-11-21T15:22:59Z)
- Efficient brain age prediction from 3D MRI volumes using 2D projections [0.0]
We show that using 2D CNNs on a few 2D projections leads to reasonable test accuracy when predicting the age from brain volumes.
One training epoch with 20,324 subjects takes 40 - 70 seconds using a single GPU, which is almost 100 times faster compared to a small 3D CNN.
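The 2D projections that make this speedup possible can be sketched as follows. The summary does not specify which projection functions are used, so this hypothetical NumPy helper assumes mean and maximum intensity projections along each of the three volume axes:

```python
import numpy as np

def axis_projections(volume):
    """Collapse a 3D volume into six 2D projections: mean and max
    intensity projections along each axis (an assumed choice of
    projection functions)."""
    return [f(volume, axis=a) for a in range(3) for f in (np.mean, np.max)]
```

Each projection is an ordinary 2D image, so the set can be fed to a standard 2D CNN instead of processing the full volume with a 3D network.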
arXiv Detail & Related papers (2022-11-10T18:50:10Z)
- Introducing Vision Transformer for Alzheimer's Disease classification task with 3D input [1.0152838128195467]
Do Vision Transformer-based models perform better than CNN-based models?
Is it possible to use a shallow 3D CNN-based model to obtain satisfying results?
Our results indicate that the shallow 3D CNN-based models are sufficient to achieve good classification results for Alzheimer's Disease using MRI scans.
arXiv Detail & Related papers (2022-10-03T18:48:22Z)
- Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis [0.0]
We present a simple yet effective 2D method to handle 3D data while efficiently embedding the 3D knowledge during training.
Our method generates a large 2D super image by stitching the slices of the 3D volume side by side.
While attaining results equal, if not superior, to 3D networks using only their 2D counterparts, model complexity is reduced roughly threefold.
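The slice-stitching idea can be sketched as follows. This is a hypothetical NumPy helper; the paper's exact grid layout and padding scheme are assumptions:

```python
import numpy as np

def to_super_image(volume, cols):
    """Stitch the D slices of a (D, H, W) volume side by side into a
    single 2D grid image with `cols` slices per row (zero-padded if
    D is not a multiple of `cols`)."""
    d, h, w = volume.shape
    rows = -(-d // cols)  # ceiling division
    padded = np.zeros((rows * cols, h, w), dtype=volume.dtype)
    padded[:d] = volume
    # (rows, cols, H, W) -> (rows, H, cols, W) -> (rows*H, cols*W)
    grid = padded.reshape(rows, cols, h, w).transpose(0, 2, 1, 3)
    return grid.reshape(rows * h, cols * w)
```

The resulting 2D image preserves all voxel values, so a plain 2D CNN can see the whole volume at once.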
arXiv Detail & Related papers (2022-05-05T09:59:03Z)
- Continual 3D Convolutional Neural Networks for Real-time Processing of Videos [93.73198973454944]
We introduce Continual 3D Convolutional Neural Networks (Co3D CNNs).
Co3D CNNs process videos frame-by-frame rather than clip-by-clip.
We show that Co3D CNNs initialised with weights from pre-existing state-of-the-art video recognition models reduce floating point operations for frame-wise computations by 10.0-12.4x while improving accuracy on Kinetics-400 by 2.3-3.8%.
arXiv Detail & Related papers (2021-05-31T18:30:52Z)
- 3D Convolutional Neural Networks for Stalled Brain Capillary Detection [72.21315180830733]
Brain vasculature dysfunctions such as stalled blood flow in cerebral capillaries are associated with cognitive decline and pathogenesis in Alzheimer's disease.
Here, we describe a deep learning-based approach for automatic detection of stalled capillaries in brain images based on 3D convolutional neural networks.
In this setting, our approach outperformed other methods and demonstrated state-of-the-art results, achieving 0.85 Matthews correlation coefficient, 85% sensitivity, and 99.3% specificity.
arXiv Detail & Related papers (2021-04-04T20:30:14Z)
- 2.75D: Boosting learning by representing 3D Medical imaging to 2D features for small data [54.223614679807994]
3D convolutional neural networks (CNNs) have started to show superior performance to 2D CNNs in numerous deep learning tasks.
Applying transfer learning on 3D CNN is challenging due to a lack of publicly available pre-trained 3D models.
In this work, we propose a novel strategic 2D representation of volumetric data, namely 2.75D.
As a result, 2D CNN networks can also be used to learn volumetric information.
arXiv Detail & Related papers (2020-02-11T08:24:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.