MM-SurvNet: Deep Learning-Based Survival Risk Stratification in Breast
Cancer Through Multimodal Data Fusion
- URL: http://arxiv.org/abs/2402.11788v1
- Date: Mon, 19 Feb 2024 02:31:36 GMT
- Title: MM-SurvNet: Deep Learning-Based Survival Risk Stratification in Breast
Cancer Through Multimodal Data Fusion
- Authors: Raktim Kumar Mondol, Ewan K.A. Millar, Arcot Sowmya, Erik Meijering
- Abstract summary: We propose a novel deep learning approach for breast cancer survival risk stratification.
We employ vision transformers, specifically the MaxViT model, for image feature extraction, and self-attention to capture intricate image relationships at the patient level.
A dual cross-attention mechanism fuses these features with genetic data, while clinical data is incorporated at the final layer to enhance predictive accuracy.
- Score: 18.395418853966266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Survival risk stratification is an important step in clinical decision making
for breast cancer management. We propose a novel deep learning approach for
this purpose by integrating histopathological imaging, genetic and clinical
data. It employs vision transformers, specifically the MaxViT model, for image
feature extraction, and self-attention to capture intricate image relationships
at the patient level. A dual cross-attention mechanism fuses these features
with genetic data, while clinical data is incorporated at the final layer to
enhance predictive accuracy. Experiments on the public TCGA-BRCA dataset show
that our model, trained using the negative log likelihood loss function, can
achieve superior performance with a mean C-index of 0.64, surpassing existing
methods. This advancement facilitates tailored treatment strategies,
potentially leading to improved patient outcomes.
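As a rough illustration of the architecture described in the abstract, the
following PyTorch sketch wires together patient-level self-attention over
MaxViT patch features, dual cross-attention with genetic tokens, and a final
risk layer that also receives the clinical covariates. All dimensions, module
names, and the token layout are assumptions made for illustration, not the
authors' code.

import torch
import torch.nn as nn

class MMSurvNetSketch(nn.Module):
    def __init__(self, d=256, heads=4, n_clinical=8):
        super().__init__()
        # Patient-level self-attention over per-patch image embeddings
        # (assumed to come from a pre-trained MaxViT backbone).
        self.self_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        # Dual cross-attention: image tokens query genetic tokens, and
        # genetic tokens query image tokens.
        self.img2gene = nn.MultiheadAttention(d, heads, batch_first=True)
        self.gene2img = nn.MultiheadAttention(d, heads, batch_first=True)
        # Clinical data is incorporated only at the final (risk) layer.
        self.risk_head = nn.Linear(2 * d + n_clinical, 1)

    def forward(self, img_tokens, gene_tokens, clinical):
        # img_tokens: (B, P, d) patch features; gene_tokens: (B, G, d)
        # embedded genetic features; clinical: (B, n_clinical) covariates.
        img, _ = self.self_attn(img_tokens, img_tokens, img_tokens)
        a, _ = self.img2gene(img, gene_tokens, gene_tokens)
        b, _ = self.gene2img(gene_tokens, img, img)
        fused = torch.cat([a.mean(dim=1), b.mean(dim=1), clinical], dim=-1)
        return self.risk_head(fused).squeeze(-1)  # (B,) patient risk scores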
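The training objective ("negative log likelihood loss") and the evaluation
metric (C-index) named in the abstract are standard in survival analysis.
Below is a generic sketch of both, assuming a Cox-style partial likelihood;
the authors' exact loss may differ.

import torch

def cox_nll(risk, time, event):
    # Negative log partial likelihood (Breslow approximation, no tie
    # handling). risk: (B,) scores; time: (B,) follow-up times; event: (B,)
    # floats, 1.0 if the event was observed and 0.0 if censored.
    order = torch.argsort(time, descending=True)
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)  # patients still at risk
    return -((risk - log_risk_set) * event).sum() / event.sum().clamp(min=1)

def c_index(risk, time, event):
    # Harrell's concordance index: among comparable pairs, the fraction
    # where the patient with shorter observed survival has the higher risk.
    concordant, comparable = 0.0, 0.0
    for i in range(len(time)):
        if not event[i]:
            continue  # patient i needs an observed event to anchor a pair
        for j in range(len(time)):
            if time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / max(comparable, 1)

A C-index of 0.5 corresponds to random ranking and 1.0 to a perfect ranking,
so the reported mean C-index of 0.64 indicates a moderate but non-trivial
ordering of patient risks.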
Related papers
- Integrating Deep Learning with Fundus and Optical Coherence Tomography for Cardiovascular Disease Prediction [47.7045293755736]
Early identification of patients at risk of cardiovascular diseases (CVD) is crucial for effective preventive care, reducing healthcare burden, and improving patients' quality of life.
This study demonstrates the potential of retinal optical coherence tomography (OCT) imaging combined with fundus photographs for identifying future adverse cardiac events.
We propose a novel binary classification network based on a Multi-channel Variational Autoencoder (MCVAE), which learns a latent embedding of patients' fundus and OCT images to classify individuals into two groups: those likely to develop CVD in the future and those who are not.
arXiv Detail & Related papers (2024-10-18T12:37:51Z)
- Multi-modal Medical Image Fusion For Non-Small Cell Lung Cancer Classification [7.002657345547741]
Non-small cell lung cancer (NSCLC) is a predominant cause of cancer mortality worldwide.
In this paper, we introduce an innovative integration of multi-modal data, synthesizing fused medical imaging (CT and PET scans) with clinical health records and genomic data.
Our research surpasses existing approaches, as evidenced by a substantial enhancement in NSCLC detection and classification precision.
arXiv Detail & Related papers (2024-09-27T12:59:29Z)
- M2EF-NNs: Multimodal Multi-instance Evidence Fusion Neural Networks for Cancer Survival Prediction [24.323961146023358]
We propose a neural network model called M2EF-NNs for accurate cancer survival prediction.
To capture global information in the images, we use a pre-trained Vision Transformer (ViT) model.
We are the first to apply Dempster-Shafer evidence theory (DST) to cancer survival prediction.
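Dempster-Shafer theory combines per-source belief masses over sets of
hypotheses rather than averaging probabilities. The following toy Python
sketch of Dempster's rule for two sources is a generic illustration of DST,
not the M2EF-NNs implementation.

def dempster_combine(m1, m2):
    # m1, m2: dicts mapping frozenset hypotheses -> mass, each summing to 1.
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical image and genomic "opinions" about high- vs low-risk:
img  = {frozenset({"high"}): 0.7, frozenset({"high", "low"}): 0.3}
gene = {frozenset({"high"}): 0.5, frozenset({"low"}): 0.2,
        frozenset({"high", "low"}): 0.3}
print(dempster_combine(img, gene))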
arXiv Detail & Related papers (2024-08-08T02:31:04Z)
- Lung-CADex: Fully automatic Zero-Shot Detection and Classification of Lung Nodules in Thoracic CT Images [45.29301790646322]
Computer-aided diagnosis can help with early lung nodule detection and facilitate subsequent nodule characterization.
We propose CADe, a method for segmenting lung nodules in a zero-shot manner using MedSAM, a variant of the Segment Anything Model.
We also propose CADx, a method for characterizing nodules as benign or malignant by building a gallery of radiomic features and aligning image-feature pairs through contrastive learning.
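The image-feature alignment step can be pictured as a standard symmetric
contrastive (InfoNCE/CLIP-style) objective. This generic sketch assumes
matched image and radiomic-feature embeddings and is not the CADx code.

import torch
import torch.nn.functional as F

def contrastive_alignment_loss(img_emb, feat_emb, temperature=0.07):
    # Row i of img_emb and feat_emb is assumed to be a matched pair.
    img = F.normalize(img_emb, dim=-1)
    feat = F.normalize(feat_emb, dim=-1)
    logits = img @ feat.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(len(img), device=img.device)
    # Symmetric cross-entropy pulls matched pairs together and pushes
    # mismatched pairs apart in both directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))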
arXiv Detail & Related papers (2024-07-02T19:30:25Z)
- Embedding-based Multimodal Learning on Pan-Squamous Cell Carcinomas for Improved Survival Outcomes [0.0]
PARADIGM is a framework that learns from multimodal, heterogeneous datasets to improve clinical outcome prediction.
We train GNNs on pan-Squamous Cell Carcinomas and validate our approach on Moffitt Cancer Center lung SCC data.
Our solution aims to understand the patient's circumstances comprehensively, offering insights into heterogeneous data integration and the benefits of aggregating as many data views as possible.
arXiv Detail & Related papers (2024-06-11T22:19:14Z)
- SELECTOR: Heterogeneous graph network with convolutional masked autoencoder for multimodal robust prediction of cancer survival [8.403756148610269]
Multimodal prediction of cancer patient survival offers a more comprehensive and precise approach.
This paper introduces SELECTOR, a heterogeneous graph-aware network based on convolutional masked autoencoders.
Our method significantly outperforms state-of-the-art methods in both modality-missing and intra-modality information-confirmed cases.
arXiv Detail & Related papers (2024-03-14T11:23:39Z)
- BioFusionNet: Deep Learning-Based Survival Risk Stratification in ER+ Breast Cancer Through Multifeature and Multimodal Data Fusion [16.83901927767791]
We present BioFusionNet, a deep learning framework that fuses image-derived features with genetic and clinical data to obtain a holistic profile.
Our model achieves a mean concordance index of 0.77 and a time-dependent area under the curve of 0.84, outperforming state-of-the-art methods.
arXiv Detail & Related papers (2024-02-16T14:19:33Z)
- Pathology-and-genomics Multimodal Transformer for Survival Outcome Prediction [43.1748594898772]
We propose a multimodal transformer (PathOmics) integrating pathology and genomics insights into colon-related cancer survival prediction.
We emphasize the unsupervised pretraining to capture the intrinsic interaction between tissue microenvironments in gigapixel whole slide images.
We evaluate our approach on both TCGA colon and rectum cancer cohorts, showing that the proposed approach is competitive and outperforms state-of-the-art studies.
arXiv Detail & Related papers (2023-07-22T00:59:26Z)
- Automatic diagnosis of knee osteoarthritis severity using Swin transformer [55.01037422579516]
Knee osteoarthritis (KOA) is a widespread condition that can cause chronic pain and stiffness in the knee joint.
We propose an automated approach that employs the Swin Transformer to predict the severity of KOA.
arXiv Detail & Related papers (2023-07-10T09:49:30Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance based semi-supervision, mask based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by ranges of $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.