MARL: Multimodal Attentional Representation Learning for Disease
Prediction
- URL: http://arxiv.org/abs/2105.00310v1
- Date: Sat, 1 May 2021 17:47:40 GMT
- Title: MARL: Multimodal Attentional Representation Learning for Disease
Prediction
- Authors: Ali Hamdi, Amr Aboeleneen, Khaled Shaban
- Abstract summary: Existing learning models often utilise CT-scan images to predict lung diseases.
These models are prone to high uncertainties that affect lung segmentation and visual feature learning.
We introduce MARL, a novel Multimodal Attentional Representation Learning model architecture.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Existing learning models often utilise CT-scan images to predict lung
diseases. These models are prone to high uncertainties that affect lung
segmentation and visual feature learning. We introduce MARL, a novel Multimodal
Attentional Representation Learning model architecture that learns useful
features from multimodal data under uncertainty. We feed the proposed model
with both the lung CT-scan images and the corresponding patients' historical
biological records collected over time. Such rich data makes it possible to
analyse both spatial and temporal aspects of the disease. MARL employs
fuzzy-based image spatial segmentation to overcome uncertainties in CT-scan
images. We then utilise a pre-trained Convolutional Neural Network (CNN) to
learn visual representation vectors from the images. We augment the patients'
data with statistical features from the segmented images. We develop a Long
Short-Term Memory (LSTM) network to represent the augmented data and learn
sequential patterns of disease progression. Finally, we feed both the CNN and
LSTM feature vectors into an attention layer that helps focus on the most
useful learning features. We evaluated MARL on regression of lung disease
progression and on status classification. MARL outperforms state-of-the-art CNN
architectures, such as EfficientNet and DenseNet, as well as baseline
prediction models. It achieves a 91% R^2 score, 8% to 27% higher than the other
models. MARL also achieves 97% and 92% accuracy for binary and multi-class
classification, respectively, improving on state-of-the-art CNN models by 19%
to 57%. The results show that combining spatial and sequential temporal
features produces better discriminative features.
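The abstract does not specify the exact form of the attention layer, and the paper's code is not reproduced here. The following is a minimal NumPy sketch of the final fusion step only, assuming a simple additive (softmax-weighted) attention over the two modality vectors; the function name `attention_fusion` and the parameters `w` and `v` are illustrative, not from the paper.

```python
import numpy as np

def attention_fusion(cnn_vec, lstm_vec, w, v):
    """Fuse a spatial (CNN) and a temporal (LSTM) feature vector with a
    simple additive attention layer (illustrative sketch, not the paper's
    exact layer)."""
    feats = np.stack([cnn_vec, lstm_vec])            # (2, d): one row per modality
    scores = np.tanh(feats @ w) @ v                  # (2,): unnormalised attention scores
    alphas = np.exp(scores) / np.exp(scores).sum()   # softmax over the two modalities
    return alphas @ feats, alphas                    # (d,) fused vector, (2,) weights

rng = np.random.default_rng(0)
d = 8                                   # feature dimension (arbitrary for the sketch)
cnn_vec = rng.normal(size=d)            # stands in for the pre-trained CNN output
lstm_vec = rng.normal(size=d)           # stands in for the LSTM output
w = rng.normal(size=(d, d))             # hypothetical learned projection
v = rng.normal(size=d)                  # hypothetical learned scoring vector

fused, alphas = attention_fusion(cnn_vec, lstm_vec, w, v)
print(fused.shape, alphas)
```

The attention weights sum to one, so the fused vector is a convex combination of the two modality representations; in the trained model, `w` and `v` would be learned so that the layer emphasises whichever modality is more informative for a given patient.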
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Intelligent Masking: Deep Q-Learning for Context Encoding in Medical
Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z) - A Multi-Task Cross-Task Learning Architecture for Ad-hoc Uncertainty
Estimation in 3D Cardiac MRI Image Segmentation [0.0]
We present a Multi-task Cross-task learning consistency approach to enforce the correlation between the pixel-level (segmentation) and the geometric-level (distance map) tasks.
Our study further showcases the potential of our model to flag low-quality segmentation from a given model.
arXiv Detail & Related papers (2021-09-16T03:53:24Z) - Medulloblastoma Tumor Classification using Deep Transfer Learning with
Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification approach and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - A Multi-Stage Attentive Transfer Learning Framework for Improving
COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z) - Representation Learning of Histopathology Images using Graph Neural
Networks [12.427740549056288]
We propose a two-stage framework for WSI representation learning.
We sample relevant patches using a color-based method and use graph neural networks to learn relations among sampled patches to aggregate the image information into a single vector representation.
We demonstrate the performance of our approach in discriminating two sub-types of lung cancer, Lung Adenocarcinoma (LUAD) and Lung Squamous Cell Carcinoma (LUSC).
arXiv Detail & Related papers (2020-04-16T00:09:20Z) - Improving Calibration and Out-of-Distribution Detection in Medical Image
Segmentation with Convolutional Neural Networks [8.219843232619551]
Convolutional Neural Networks (CNNs) have shown to be powerful medical image segmentation models.
We advocate for multi-task learning, i.e., training a single model on several different datasets.
We show not only that a single CNN learns to automatically recognize the context and accurately segment the organ of interest in each context, but also that such a joint model often has more accurate and better-calibrated predictions.
arXiv Detail & Related papers (2020-04-12T23:42:51Z) - An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
arXiv Detail & Related papers (2020-02-13T15:28:42Z)