Transferring Models Trained on Natural Images to 3D MRI via Position
Encoded Slice Models
- URL: http://arxiv.org/abs/2303.01491v1
- Date: Thu, 2 Mar 2023 18:52:31 GMT
- Title: Transferring Models Trained on Natural Images to 3D MRI via Position
Encoded Slice Models
- Authors: Umang Gupta, Tamoghna Chattopadhyay, Nikhil Dhinagar, Paul M.
Thompson, Greg Ver Steeg, The Alzheimer's Disease Neuroimaging Initiative
(ADNI)
- Abstract summary: The 2D-Slice-CNN architecture embeds all the MRI slices with 2D encoders that take 2D image input and combines them via permutation-invariant layers.
With the insight that a pretrained model can serve as the 2D encoder, we initialize the 2D encoder with ImageNet pretrained weights, which outperform weights initialized and trained from scratch on two neuroimaging tasks.
- Score: 14.42534860640976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning has remarkably improved computer vision. These advances
also promise improvements in neuroimaging, where training set sizes are often
small. However, various difficulties arise in directly applying models
pretrained on natural images to radiologic images, such as MRIs. In particular,
a mismatch in the input space (2D images vs. 3D MRIs) restricts the direct
transfer of models, often forcing us to consider only a few MRI slices as
input. To this end, we leverage the 2D-Slice-CNN architecture of Gupta et al.
(2021), which embeds all the MRI slices with 2D encoders (neural networks that
take 2D image input) and combines them via permutation-invariant layers. With
the insight that the pretrained model can serve as the 2D encoder, we
initialize the 2D encoder with ImageNet pretrained weights that outperform
those initialized and trained from scratch on two neuroimaging tasks -- brain
age prediction on the UK Biobank dataset and Alzheimer's disease detection on
the ADNI dataset. Further, we improve the modeling capabilities of 2D-Slice
models by incorporating spatial information through position embeddings, which
can improve the performance in some cases.
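The pipeline described above can be sketched in a few lines. This is a minimal illustrative NumPy stand-in, not the paper's implementation: a linear projection takes the place of the ImageNet-pretrained 2D encoder, a learned-looking position embedding is added per slice index, and mean pooling serves as the permutation-invariant aggregation. All names and shapes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: 96 slices of 64x64, embedded into 128 dimensions.
n_slices, h, w, d = 96, 64, 64, 128
W_enc = rng.standard_normal((h * w, d)) * 0.01     # stand-in for the 2D encoder
pos_emb = rng.standard_normal((n_slices, d)) * 0.01  # one embedding per slice index

def encode_volume(volume):
    """volume: (n_slices, h, w) -> pooled representation of shape (d,)."""
    flat = volume.reshape(n_slices, h * w)
    # Encode each slice, inject its position, then apply a nonlinearity so
    # positional information survives the permutation-invariant pooling.
    slice_emb = np.tanh(flat @ W_enc + pos_emb)
    return slice_emb.mean(axis=0)  # permutation-invariant aggregation

mri = rng.standard_normal((n_slices, h, w))
rep = encode_volume(mri)
print(rep.shape)  # (128,)
```

Note that the nonlinearity before pooling matters: with a purely linear encoder and mean pooling, additive position embeddings would collapse to a constant offset and carry no slice-order information.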
Related papers
- Deep Convolutional Neural Networks on Multiclass Classification of Three-Dimensional Brain Images for Parkinson's Disease Stage Prediction [2.931680194227131]
We developed a model capable of accurately predicting Parkinson's disease stages.
We used the entire three-dimensional (3D) brain images as input.
We incorporated an attention mechanism to account for the varying importance of different slices in the prediction process.
arXiv Detail & Related papers (2024-10-31T05:40:08Z)
- Brain3D: Generating 3D Objects from fMRI [76.41771117405973]
We design a novel 3D object representation learning method, Brain3D, that takes as input the fMRI data of a subject.
We show that our model captures the distinct functionalities of each region of the human vision system.
Preliminary evaluations indicate that Brain3D can successfully identify the disordered brain regions in simulated scenarios.
arXiv Detail & Related papers (2024-05-24T06:06:11Z)
- Interpretable 2D Vision Models for 3D Medical Images [47.75089895500738]
This study proposes a simple approach of adapting 2D networks with an intermediate feature representation for processing 3D images.
On all 3D MedMNIST benchmark datasets and on two real-world datasets consisting of several hundred high-resolution CT or MRI scans, we show that our approach performs on par with existing methods.
arXiv Detail & Related papers (2023-07-13T08:27:09Z)
- Video Pretraining Advances 3D Deep Learning on Chest CT Tasks [63.879848037679224]
Pretraining on large natural image classification datasets has aided model development on data-scarce 2D medical tasks.
These 2D models have been surpassed by 3D models on 3D computer vision benchmarks.
We show that video pretraining for 3D models can enable higher performance on smaller datasets for 3D medical tasks.
arXiv Detail & Related papers (2023-04-02T14:46:58Z)
- HoloDiffusion: Training a 3D Diffusion Model using 2D Images [71.1144397510333]
We introduce a new diffusion setup that can be trained, end-to-end, with only posed 2D images for supervision.
We show that our diffusion models are scalable, train robustly, and are competitive in terms of sample quality and fidelity to existing approaches for 3D generative modeling.
arXiv Detail & Related papers (2023-03-29T07:35:56Z)
- Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders [52.91248611338202]
We propose an alternative way to obtain superior 3D representations from 2D pre-trained models via Image-to-Point Masked Autoencoders, named I2P-MAE.
By self-supervised pre-training, we leverage the well learned 2D knowledge to guide 3D masked autoencoding.
I2P-MAE attains a state-of-the-art 90.11% accuracy, +3.68% over the second-best, demonstrating superior transferable capacity.
arXiv Detail & Related papers (2022-12-13T17:59:20Z)
- Decomposing 3D Neuroimaging into 2+1D Processing for Schizophrenia Recognition [25.80846093248797]
We propose to process the 3D data with a 2+1D framework so that we can exploit powerful deep 2D Convolutional Neural Networks (CNNs) pre-trained on the huge ImageNet dataset for 3D neuroimaging recognition.
Specifically, 3D volumes of Magnetic Resonance Imaging (MRI) metrics are decomposed to 2D slices according to neighboring voxel positions.
Global pooling is applied to remove redundant information as the activation patterns are sparsely distributed over feature maps.
Channel-wise and slice-wise convolutions are proposed to aggregate the contextual information in the third dimension unprocessed by the 2D CNN model.
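The 2+1D decomposition described in this entry can be sketched as follows. This is a hedged NumPy toy, not the paper's code: global average pooling stands in for the 2D backbone's per-slice feature maps, and a width-3 slice-wise convolution aggregates context along the third dimension. Shapes and the kernel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative per-slice feature maps, as a 2D backbone might produce them:
# 32 slices, each a 28x28 map with 8 channels.
n_slices, h, w, c = 32, 28, 28, 8
feature_maps = rng.standard_normal((n_slices, h, w, c))

# Global pooling removes spatial redundancy -> one feature vector per slice.
feats = feature_maps.mean(axis=(1, 2))        # shape: (n_slices, c)

# Slice-wise convolution (kernel width 3) aggregates neighboring slices,
# recovering context along the dimension the 2D backbone never saw.
kernel = rng.standard_normal((3, c))
out = np.array([
    (feats[i - 1:i + 2] * kernel).sum()       # weighted sum over 3 slices
    for i in range(1, n_slices - 1)
])
print(out.shape)  # (30,)
```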
arXiv Detail & Related papers (2022-11-21T15:22:59Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis [0.0]
We present a simple yet effective 2D method to handle 3D data while efficiently embedding the 3D knowledge during training.
Our method generates a super image by stitching the slices of the 3D image side by side.
While attaining results equal, if not superior, to those of 3D networks using only 2D counterparts, model complexity is reduced by around threefold.
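The stitching step described here reduces to a reshape. A minimal NumPy sketch, with an assumed 16-slice volume tiled into a 4x4 grid (grid size and shapes are illustrative, not from the paper):

```python
import numpy as np

# A toy 3D volume: 16 slices of 8x8, filled with distinct values.
n_slices, h, w = 16, 8, 8
vol = np.arange(n_slices * h * w).reshape(n_slices, h, w)

# Stitch the slices side by side into a single 2D "super image":
# arrange 16 slices as a 4x4 grid of 8x8 tiles -> one 32x32 image.
grid = 4
super_img = (vol.reshape(grid, grid, h, w)
                .transpose(0, 2, 1, 3)     # interleave grid rows with tile rows
                .reshape(grid * h, grid * w))
print(super_img.shape)  # (32, 32)
```

The resulting 2D image can then be fed to an ordinary 2D network; the transpose ensures each tile of the grid is an intact slice.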
arXiv Detail & Related papers (2022-05-05T09:59:03Z)
- Improved Brain Age Estimation with Slice-based Set Networks [18.272915375351914]
We propose a new architecture for BrainAGE prediction.
The proposed architecture works by encoding each 2D slice in an MRI with a deep 2D-CNN model.
Next, it combines the information from these 2D-slice encodings using set networks or permutation invariant layers.
Experiments on the BrainAGE prediction problem, using the UK Biobank dataset, showed that the model with permutation-invariant layers trains faster and provides better predictions than other state-of-the-art approaches.
arXiv Detail & Related papers (2021-02-08T18:54:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.