Tumor Location-weighted MRI-Report Contrastive Learning: A Framework for Improving the Explainability of Pediatric Brain Tumor Diagnosis
- URL: http://arxiv.org/abs/2411.00609v1
- Date: Fri, 01 Nov 2024 14:14:17 GMT
- Title: Tumor Location-weighted MRI-Report Contrastive Learning: A Framework for Improving the Explainability of Pediatric Brain Tumor Diagnosis
- Authors: Sara Ketabi, Matthias W. Wagner, Cynthia Hawkins, Uri Tabori, Birgit Betina Ertl-Wagner, Farzad Khalvati
- Abstract summary: We train a multimodal CL architecture on 3D brain MRI scans and radiology reports to learn informative MRI representations.
We then apply the learnt image representations to improve the explainability and performance of genetic marker classification for pediatric low-grade glioma.
- Abstract: Despite the promising performance of convolutional neural networks (CNNs) in brain tumor diagnosis from magnetic resonance imaging (MRI), their integration into the clinical workflow has been limited. This is mainly because the features driving a model's prediction are unclear to radiologists and hence clinically irrelevant, i.e., the models lack explainability. Radiology reports, as invaluable sources of radiologists' knowledge and expertise, can be integrated with MRI in a contrastive learning (CL) framework that learns from image-report associations to improve CNN explainability. In this work, we train a multimodal CL architecture on 3D brain MRI scans and radiology reports to learn informative MRI representations. Furthermore, we integrate tumor location, which is salient to several brain tumor analysis tasks, into this framework to improve its generalizability. We then apply the learnt image representations to the downstream task of genetic marker classification for pediatric low-grade glioma, the most prevalent brain tumor in children, improving both explainability and performance. Our results indicate a Dice score of 31.1% between the model's attention maps and manual tumor segmentation (as an explainability measure) with a test classification performance of 87.7%, significantly outperforming the baselines. These enhancements can build trust in our model among radiologists, facilitating its integration into clinical practice for more efficient tumor diagnosis.
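To make the described objective concrete, the following is a minimal sketch, not the authors' released code, of a tumor location-weighted MRI-report contrastive (CLIP-style) loss together with the Dice-based explainability check mentioned in the abstract. Names such as `weighted_contrastive_loss`, `location_weights`, and `attention_map` are illustrative assumptions, and the per-pair weighting scheme is one plausible reading of "tumor location-weighted".

```python
# Hedged sketch of a location-weighted image-report contrastive objective
# and a Dice overlap between an attention map and a manual tumor mask.
# Not the paper's implementation; weighting and thresholding are assumptions.
import torch
import torch.nn.functional as F


def weighted_contrastive_loss(image_emb, text_emb, location_weights, temperature=0.07):
    """InfoNCE over matched MRI/report pairs, with each pair's loss term
    scaled by a tumor location-derived weight (assumed to lie in [0, 1])."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(image_emb.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets, reduction="none")      # image -> report
    loss_t2i = F.cross_entropy(logits.t(), targets, reduction="none")  # report -> image
    per_pair = 0.5 * (loss_i2t + loss_t2i)
    return (location_weights * per_pair).mean()


def dice_score(attention_map, tumor_mask, threshold=0.5):
    """Dice overlap between a binarized attention map and the manual
    tumor segmentation, used here as an explainability measure."""
    pred = (attention_map >= threshold).float()
    intersection = (pred * tumor_mask).sum()
    return (2.0 * intersection / (pred.sum() + tumor_mask.sum() + 1e-8)).item()


# Example usage with random tensors standing in for encoder outputs.
B, D = 8, 256
img = torch.randn(B, D)   # 3D MRI encoder embeddings
txt = torch.randn(B, D)   # radiology report encoder embeddings
w = torch.rand(B)         # tumor location-derived pair weights (assumed)
loss = weighted_contrastive_loss(img, txt, w)
```

Scaling each pair's loss rather than masking it keeps every image-report pair in training while letting location-informative pairs contribute more; other weighting placements (e.g., inside the similarity matrix) are equally plausible given only the abstract.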
Related papers
- An Integrated Deep Learning Framework for Effective Brain Tumor Localization, Segmentation, and Classification from Magnetic Resonance Images [0.0]
Brain tumors result from abnormal cell growth within brain tissue and can arise from various types of brain cells.
Our research proposes DL frameworks for localizing, segmenting, and classifying the grade of gliomas from MR images to address this critical issue.
Our proposed models demonstrated promising results, with the potential to advance medical AI by enabling early diagnosis and providing more accurate treatment options for patients.
arXiv Detail & Related papers (2024-09-25T18:38:57Z)
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- Hybrid Multihead Attentive Unet-3D for Brain Tumor Segmentation [0.0]
Brain tumor segmentation is a critical task in medical image analysis, aiding in the diagnosis and treatment planning of brain tumor patients.
Various deep learning-based techniques have made significant progress in this field; however, they still face accuracy limitations due to the complex and variable nature of brain tumor morphology.
We propose a novel Hybrid Multihead Attentive U-Net architecture to address the challenges of accurate brain tumor segmentation.
arXiv Detail & Related papers (2024-05-22T02:46:26Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are very common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning approach with dual attention to address the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- UniBrain: Universal Brain MRI Diagnosis with Hierarchical Knowledge-enhanced Pre-training [66.16134293168535]
We propose a hierarchical knowledge-enhanced pre-training framework for universal brain MRI diagnosis, termed UniBrain.
Specifically, UniBrain leverages a large-scale dataset of 24,770 imaging-report pairs from routine diagnostics.
arXiv Detail & Related papers (2023-09-13T09:22:49Z)
- Post-Hoc Explainability of BI-RADS Descriptors in a Multi-task Framework for Breast Cancer Detection and Segmentation [48.08423125835335]
MT-BI-RADS is a novel explainable deep learning approach for tumor detection in Breast Ultrasound (BUS) images.
It offers three levels of explanations to enable radiologists to comprehend the decision-making process in predicting tumor malignancy.
arXiv Detail & Related papers (2023-08-27T22:07:42Z)
- Brain tumor multi classification and segmentation in MRI images using deep learning [3.1248717814228923]
The classification model is based on the EfficientNetB1 architecture and is trained to classify images into four classes: meningioma, glioma, pituitary adenoma, and no tumor.
The segmentation model is based on the U-Net architecture and is trained to accurately segment the tumor from the MRI images.
arXiv Detail & Related papers (2023-04-20T01:32:55Z)
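For the four-class classifier described in the entry above, the following is a minimal sketch, assuming torchvision's EfficientNet-B1 as a stand-in for the paper's EfficientNetB1 backbone; it is illustrative only, not that paper's code.

```python
# Hedged sketch: replace the final linear layer of EfficientNet-B1 with a
# 4-class head (meningioma, glioma, pituitary adenoma, no tumor).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b1

model = efficientnet_b1(weights=None)
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, 4)   # 4 tumor classes

logits = model(torch.randn(1, 3, 240, 240))       # one RGB-encoded MRI slice
print(logits.shape)                                # torch.Size([1, 4])
```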
- Video4MRI: An Empirical Study on Brain Magnetic Resonance Image Analytics with CNN-based Video Classification Frameworks [60.42012344842292]
3D CNN-based models dominate the field of magnetic resonance image (MRI) analytics.
In this paper, four datasets for Alzheimer's and Parkinson's disease recognition are used in the experiments.
In terms of efficiency, the video framework outperforms 3D-CNN models by 5%-11% with 50%-66% fewer trainable parameters.
arXiv Detail & Related papers (2023-02-24T15:26:31Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- MAG-Net: Multi-task attention guided network for brain tumor segmentation and classification [0.9176056742068814]
This paper proposes a multi-task attention-guided encoder-decoder network (MAG-Net) to classify and segment brain tumor regions in MRI images.
The model achieved promising results as compared to existing state-of-the-art models.
arXiv Detail & Related papers (2021-07-26T16:51:00Z)
- 3D AGSE-VNet: An Automatic Brain Tumor MRI Data Segmentation Framework [3.0261170901794308]
Glioma is the most common malignant brain tumor, with high morbidity and a mortality rate of more than three percent.
The main clinical method for assessing brain tumors is multi-modal MRI of the tumor region.
We propose an automatic brain tumor MRI segmentation framework called AGSE-VNet.
arXiv Detail & Related papers (2021-07-26T09:04:59Z)