Multimodal Deep Learning for Phyllodes Tumor Classification from Ultrasound and Clinical Data
- URL: http://arxiv.org/abs/2509.00213v2
- Date: Thu, 25 Sep 2025 14:00:16 GMT
- Title: Multimodal Deep Learning for Phyllodes Tumor Classification from Ultrasound and Clinical Data
- Authors: Farhan Fuad Abir, Abigail Elliott Daly, Kyle Anderman, Tolga Ozmen, Laura J. Brattain,
- Abstract summary: Phyllodes tumors (PTs) are difficult to classify preoperatively due to their radiological similarity to benign fibroadenomas. We propose a multimodal deep learning framework that integrates breast ultrasound (BUS) images with structured clinical data to improve diagnostic accuracy.
- Score: 0.29981448312652675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Phyllodes tumors (PTs) are rare fibroepithelial breast lesions that are difficult to classify preoperatively due to their radiological similarity to benign fibroadenomas. This often leads to unnecessary surgical excisions. To address this, we propose a multimodal deep learning framework that integrates breast ultrasound (BUS) images with structured clinical data to improve diagnostic accuracy. We developed a dual-branch neural network that extracts and fuses features from ultrasound images and patient metadata from 81 subjects with confirmed PTs. Class-aware sampling and subject-stratified 5-fold cross-validation were applied to prevent class imbalance and data leakage. The results show that our proposed multimodal method outperforms unimodal baselines in classifying benign versus borderline/malignant PTs. Among six image encoders, ConvNeXt and ResNet18 achieved the best performance in the multimodal setting, with AUC-ROC scores of 0.9427 and 0.9349, and F1-scores of 0.6720 and 0.7294, respectively. This study demonstrates the potential of multimodal AI to serve as a non-invasive diagnostic tool, reducing unnecessary biopsies and improving clinical decision-making in breast tumor management.
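The subject-stratified 5-fold cross-validation described in the abstract keeps every sample from one subject inside a single fold (preventing data leakage) while balancing class counts across folds. A minimal sketch of that splitting logic in plain Python; the function name, greedy strategy, and toy labels below are illustrative assumptions, not the paper's actual code:

```python
from collections import defaultdict

def subject_stratified_folds(subject_labels, n_folds=5):
    """Assign each subject to exactly one fold, greedily balancing
    per-class counts so no subject's samples leak across folds."""
    per_fold_class = [defaultdict(int) for _ in range(n_folds)]
    fold_of = {}
    for subject, label in sorted(subject_labels.items()):
        # Prefer the fold with the fewest subjects of this class,
        # breaking ties by the smallest fold overall.
        best = min(
            range(n_folds),
            key=lambda f: (per_fold_class[f][label],
                           sum(per_fold_class[f].values())),
        )
        fold_of[subject] = best
        per_fold_class[best][label] += 1
    return fold_of

# Toy example: 10 subjects, 0 = benign, 1 = borderline/malignant.
labels = {f"S{i:02d}": int(i % 3 == 0) for i in range(10)}
folds = subject_stratified_folds(labels)
```

In practice, scikit-learn's `StratifiedGroupKFold` implements the same idea when subject IDs are passed as the `groups` argument.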
Related papers
- Efficient endometrial carcinoma screening via cross-modal synthesis and gradient distillation [15.277910275783187]
Early detection of myometrial invasion is critical for the staging and life-saving management of endometrial carcinoma (EC). Here we present an automated, highly efficient two-stage deep learning framework that resolves both data and computational bottlenecks in EC screening. Our model achieves a sensitivity of 99.5%, a specificity of 97.2%, and an area under the curve of 0.987 at a minimal computational cost.
arXiv Detail & Related papers (2026-02-23T13:22:25Z) - Prompt-Free SAM-Based Multi-Task Framework for Breast Ultrasound Lesion Segmentation and Classification [0.4083182125683813]
This study presents a multi-task deep learning framework that jointly performs lesion segmentation and diagnostic classification. Our approach employs a prompt-free, fully supervised adaptation where high-dimensional SAM features are decoded through either a lightweight convolutional head or a UNet-inspired decoder for pixel-wise segmentation. Experiments on the PRECISE 2025 breast ultrasound dataset, split per class into 80 percent training and 20 percent testing, show that the proposed method achieves a Dice Similarity Coefficient (DSC) of 0.887 and an accuracy of 92.3 percent.
arXiv Detail & Related papers (2026-01-09T03:02:41Z) - One-shot synthesis of rare gastrointestinal lesions improves diagnostic accuracy and clinical training [45.49415063761575]
EndoRare is a one-shot, retraining-free generative framework that synthesizes diverse, high-fidelity lesion exemplars from a single reference image. We validated the framework across four rare pathologies. These results establish a practical, data-efficient pathway to bridge the rare-disease gap in both computer-aided diagnostics and clinical education.
arXiv Detail & Related papers (2025-12-30T15:07:09Z) - Neural Discrete Representation Learning for Sparse-View CBCT Reconstruction: From Algorithm Design to Prospective Multicenter Clinical Evaluation [64.42236775544579]
Cone beam computed tomography (CBCT)-guided puncture has become an established approach for diagnosing and treating thoracic tumours. DeepPriorCBCT is a three-stage deep learning framework that achieves diagnostic-grade reconstruction using only one-sixth of the conventional radiation dose.
arXiv Detail & Related papers (2025-11-30T12:45:02Z) - Large-Scale Pre-training Enables Multimodal AI Differentiation of Radiation Necrosis from Brain Metastasis Progression on Routine MRI [3.291383664051985]
Differentiating radiation necrosis from tumor progression after radiosurgery is a critical challenge in brain metastases. Conventional supervised deep learning approaches are constrained by scarce biopsy-confirmed training data. Self-supervised learning overcomes this by leveraging the growing availability of large-scale unlabeled brain metastases imaging datasets.
arXiv Detail & Related papers (2025-11-22T22:44:50Z) - Fusion-Based Brain Tumor Classification Using Deep Learning and Explainable AI, and Rule-Based Reasoning [0.0]
This study presents an ensemble-based deep learning framework that combines MobileNetV2 and DenseNet121 convolutional neural networks (CNNs). The models were trained and evaluated on the Figshare dataset using a stratified 5-fold cross-validation protocol. The ensemble achieved superior performance compared to individual CNNs, with an accuracy of 91.7%, precision of 91.9%, recall of 91.7%, and F1-score of 91.6%.
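Combining two CNNs' predictions as summarized above typically reduces to weighted soft voting over class probabilities. A minimal sketch; the weights, function names, and toy probabilities are illustrative assumptions, not taken from the paper:

```python
def soft_vote(probs_a, probs_b, weight_a=0.5):
    """Weighted average of two models' per-class probabilities."""
    return [weight_a * a + (1.0 - weight_a) * b
            for a, b in zip(probs_a, probs_b)]

def predict(probs_a, probs_b, weight_a=0.5):
    """Index of the highest fused probability, i.e. the predicted class."""
    fused = soft_vote(probs_a, probs_b, weight_a)
    return max(range(len(fused)), key=fused.__getitem__)

# Toy 3-class example: the two models disagree, fusion resolves it.
mobilenet_probs = [0.2, 0.5, 0.3]  # hypothetical MobileNetV2 output
densenet_probs = [0.1, 0.3, 0.6]   # hypothetical DenseNet121 output
fused_class = predict(mobilenet_probs, densenet_probs)
```

Equal weights give a plain average; unequal weights let the stronger model dominate ties.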
arXiv Detail & Related papers (2025-08-09T08:46:36Z) - MSWAL: 3D Multi-class Segmentation of Whole Abdominal Lesions Dataset [41.69818086021188]
We introduce MSWAL, the first 3D multi-class segmentation dataset covering whole abdominal lesions. MSWAL broadens the coverage of common lesion types, such as gallstones, kidney stones, liver tumors, kidney tumors, pancreatic cancer, liver cysts, and kidney cysts. We propose Inception nnU-Net, a novel segmentation framework that effectively integrates an Inception module with the nnU-Net architecture to extract information from different receptive fields.
arXiv Detail & Related papers (2025-03-17T06:31:25Z) - TopoTxR: A topology-guided deep convolutional network for breast parenchyma learning on DCE-MRIs [49.69047720285225]
We propose a novel topological approach that explicitly extracts multi-scale topological structures to better approximate breast parenchymal structures.
We empirically validate TopoTxR using the VICTRE phantom breast dataset.
Our qualitative and quantitative analyses suggest differential topological behavior of breast tissue in treatment-naïve imaging.
arXiv Detail & Related papers (2024-11-05T19:35:10Z) - Analysis of the BraTS 2023 Intracranial Meningioma Segmentation Challenge [44.76736949127792]
We describe the design and results from the BraTS 2023 Intracranial Meningioma Challenge. The BraTS Meningioma Challenge differed from prior BraTS Glioma challenges in that it focused on meningiomas. The top ranked team had a lesion-wise median dice similarity coefficient (DSC) of 0.976, 0.976, and 0.964 for enhancing tumor, tumor core, and whole tumor.
arXiv Detail & Related papers (2024-05-16T03:23:57Z) - Radiomics Boosts Deep Learning Model for IPMN Classification [3.4659499358648675]
Intraductal Papillary Mucinous Neoplasm (IPMN) cysts are pre-malignant pancreas lesions, and they can progress into pancreatic cancer.
In this study, we propose a novel computer-aided diagnosis pipeline for IPMN risk classification from MRI scans.
arXiv Detail & Related papers (2023-09-11T22:41:52Z) - Improved Prognostic Prediction of Pancreatic Cancer Using Multi-Phase CT by Integrating Neural Distance and Texture-Aware Transformer [37.55853672333369]
This paper proposes a novel learnable neural distance that describes the precise relationship between the tumor and vessels in CT images of different patients.
The developed risk marker was the strongest predictor of overall survival among preoperative factors.
arXiv Detail & Related papers (2023-08-01T12:46:02Z) - Domain Transfer Through Image-to-Image Translation for Uncertainty-Aware Prostate Cancer Classification [42.75911994044675]
We present a novel approach for unpaired image-to-image translation of prostate MRIs and an uncertainty-aware training approach for classifying clinically significant PCa.
Our approach involves a novel pipeline for translating unpaired 3.0T multi-parametric prostate MRIs to 1.5T, thereby augmenting the available training data.
Our experiments demonstrate that the proposed method significantly improves the Area Under ROC Curve (AUC) by over 20% compared to the previous work.
arXiv Detail & Related papers (2023-07-02T05:26:54Z) - Translating automated brain tumour phenotyping to clinical neuroimaging [0.4199844472131921]
We use state-of-the-art methods to quantify the comparative fidelity of automated tumour segmentation models.
Deep learning segmentation models characterize tumours well even when data are missing and can detect enhancing tissue without the use of contrast.
arXiv Detail & Related papers (2022-06-13T12:58:54Z) - RadioPathomics: Multimodal Learning in Non-Small Cell Lung Cancer for Adaptive Radiotherapy [1.8161758803237067]
We develop a multimodal late fusion approach to predict radiation therapy outcomes for non-small-cell lung cancer patients.
Experiments show that the proposed multimodal paradigm, with an AUC of 90.9%, outperforms each unimodal approach.
arXiv Detail & Related papers (2022-04-26T16:32:52Z) - Deep Orthogonal Fusion: Multimodal Prognostic Biomarker Discovery Integrating Radiology, Pathology, Genomic, and Clinical Data [0.32622301272834525]
We predict the overall survival (OS) of glioma patients from diverse multimodal data with a Deep Orthogonal Fusion model.
The model learns to combine information from MRI exams, biopsy-based modalities, and clinical variables into a comprehensive multimodal risk score.
It significantly stratifies glioma patients by OS within clinical subsets, adding further granularity to prognostic clinical grading and molecular subtyping.
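The "orthogonal" part of Deep Orthogonal Fusion amounts to penalizing overlap between modality embeddings so each branch contributes complementary information to the fused risk score. A rough sketch of such a penalty in plain Python; the exact loss used in the paper may differ, and the function name and vectors below are illustrative:

```python
def orthogonality_penalty(embeddings):
    """Sum of squared dot products over every pair of modality
    embeddings; zero when all embedding vectors are mutually orthogonal."""
    total = 0.0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            dot = sum(a * b for a, b in zip(embeddings[i], embeddings[j]))
            total += dot * dot
    return total

# Two identical (fully redundant) embeddings are penalized;
# two orthogonal (complementary) embeddings are not.
aligned = orthogonality_penalty([[1.0, 0.0], [1.0, 0.0]])
orthogonal = orthogonality_penalty([[1.0, 0.0], [0.0, 1.0]])
```

Adding such a term to the classification loss pushes the image, pathology, and clinical branches toward encoding distinct information rather than restating one another.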
arXiv Detail & Related papers (2021-07-01T17:59:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.