Reducing Overtreatment of Indeterminate Thyroid Nodules Using a Multimodal Deep Learning Model
- URL: http://arxiv.org/abs/2409.19171v1
- Date: Fri, 27 Sep 2024 22:38:03 GMT
- Title: Reducing Overtreatment of Indeterminate Thyroid Nodules Using a Multimodal Deep Learning Model
- Authors: Shreeram Athreya, Andrew Melehy, Sujit Silas Armstrong Suthahar, Vedrana Ivezić, Ashwath Radhachandran, Vivek Sant, Chace Moleta, Henry Zheng, Maitraya Patel, Rinat Masamed, Corey W. Arnold, William Speier
- Abstract summary: Molecular testing (MT) classifies cytologically indeterminate thyroid nodules as benign or malignant with high sensitivity but low positive predictive value (PPV).
We address this limitation by applying attention multiple instance learning (AMIL) to US images.
A multi-modal deep learning AMIL model was developed, combining US images and MT to classify the nodules as benign or malignant.
- Score: 3.4812887520451117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objective: Molecular testing (MT) classifies cytologically indeterminate thyroid nodules as benign or malignant with high sensitivity but low positive predictive value (PPV), only using molecular profiles, ignoring ultrasound (US) imaging and biopsy. We address this limitation by applying attention multiple instance learning (AMIL) to US images. Methods: We retrospectively reviewed 333 patients with indeterminate thyroid nodules at UCLA medical center (259 benign, 74 malignant). A multi-modal deep learning AMIL model was developed, combining US images and MT to classify the nodules as benign or malignant and enhance the malignancy risk stratification of MT. Results: The final AMIL model matched MT sensitivity (0.946) while significantly improving PPV (0.477 vs 0.448 for MT alone), indicating fewer false positives while maintaining high sensitivity. Conclusion: Our approach reduces false positives compared to MT while maintaining the same ability to identify positive cases, potentially reducing unnecessary benign thyroid resections in patients with indeterminate nodules.
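The attention pooling at the heart of the AMIL model can be sketched as follows. This is a minimal NumPy illustration of standard attention-based MIL pooling (Ilse et al., 2018); the array shapes, parameter names, and the suggested fusion with the MT result are assumptions for illustration, not the authors' implementation.

```python
# Sketch of attention-based multiple instance learning (AMIL) pooling.
# A bag of instance embeddings (e.g. features from each ultrasound frame
# of a nodule) is reduced to one bag embedding via learned attention.
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(instances, V, w):
    """instances: (K, D) instance embeddings; V: (H, D); w: (H,).

    Returns the (D,) attention-weighted bag embedding and the (K,)
    attention weights (a softmax over instances).
    """
    scores = w @ np.tanh(V @ instances.T)        # (K,) raw attention scores
    scores = scores - scores.max()               # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum() # softmax over instances
    return attn @ instances, attn                # (D,) bag embedding

K, D, H = 12, 64, 32                             # frames, feature dim, attn dim
bag = rng.normal(size=(K, D))                    # stand-in image features
V = rng.normal(size=(H, D)) * 0.1
w = rng.normal(size=H)
z, attn = attention_mil_pool(bag, V, w)
# In a multimodal model, z could then be concatenated with the MT output
# before a final benign/malignant classifier head.
```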
Related papers
- Towards Label-Free Brain Tumor Segmentation: Unsupervised Learning with Multimodal MRI [7.144319861722029]
Unsupervised anomaly detection (UAD) presents a complementary alternative to supervised learning for brain tumor segmentation in MRI.
We propose a novel Multimodal Vision Transformer Autoencoder (MViT-AE) trained exclusively on healthy brain MRIs to detect and localize tumors.
Our method achieves clinically meaningful tumor localization, with lesion-wise Dice Similarity Coefficient of 0.437 (Whole Tumor), 0.316 (Tumor Core), and 0.350 (Enhancing Tumor) on the test set, and an anomaly Detection Rate of 89.4% on the validation set.
arXiv Detail & Related papers (2025-10-17T14:26:30Z)
- Multimodal Deep Learning for Phyllodes Tumor Classification from Ultrasound and Clinical Data [0.29981448312652675]
Phyllodes tumors (PTs) are difficult to classify preoperatively due to their radiological similarity to benign fibroadenomas.
We propose a multimodal deep learning framework that integrates breast ultrasound (BUS) images with structured clinical data to improve diagnostic accuracy.
arXiv Detail & Related papers (2025-08-29T19:54:11Z)
- Systematic Review of Pituitary Gland and Pituitary Adenoma Automatic Segmentation Techniques in Magnetic Resonance Imaging [40.16592757754337]
We reviewed 34 studies that employed automatic and semi-automatic segmentation methods.
The majority of reviewed studies utilized deep learning approaches, with U-Net-based models being the most prevalent.
Further improvements are needed to achieve consistently good performance in small structures like the normal pituitary gland.
arXiv Detail & Related papers (2025-06-24T17:05:01Z)
- STACT-Time: Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification [2.510842391292067]
Thyroid cancer is among the most common cancers in the United States.
Recent deep learning approaches have sought to improve risk stratification, but they often fail to utilize the rich temporal and spatial context provided by US cine clips.
We propose the Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification (STACT-Time) model.
arXiv Detail & Related papers (2025-06-22T21:14:04Z)
- An ensemble deep learning approach to detect tumors on Mohs micrographic surgery slides [0.0]
The objective of this study is to develop a deep learning model to detect basal cell carcinoma (BCC) and artifacts on Mohs slides.
We present an AI system that can detect tumors and non-tumors in Mohs slides with high success.
arXiv Detail & Related papers (2025-04-07T16:05:42Z)
- Leveraging Semantic Asymmetry for Precise Gross Tumor Volume Segmentation of Nasopharyngeal Carcinoma in Planning CT [12.199850355388214]
In the radiation therapy of nasopharyngeal carcinoma (NPC), clinicians typically delineate the gross tumor volume (GTV) using non-contrast planning computed tomography.
The low contrast between tumors and adjacent normal tissues necessitates that radiation oncologists manually delineate the tumors.
We propose a novel approach to directly segment NPC gross tumors on non-contrast planning CT images.
arXiv Detail & Related papers (2024-11-27T12:28:46Z)
- Towards Non-invasive and Personalized Management of Breast Cancer Patients from Multiparametric MRI via A Large Mixture-of-Modality-Experts Model [19.252851972152957]
We report a mixture-of-modality-experts model (MOME) that integrates multiparametric MRI information within a unified structure.
MOME demonstrated accurate and robust identification of breast cancer.
It could reduce the need for biopsies in BI-RADS 4 patients with a ratio of 7.3%, classify triple-negative breast cancer with an AUROC of 0.709, and predict pathological complete response to neoadjuvant chemotherapy with an AUROC of 0.694.
arXiv Detail & Related papers (2024-08-08T05:04:13Z)
- Boosting Medical Image-based Cancer Detection via Text-guided Supervision from Reports [68.39938936308023]
We propose a novel text-guided learning method to achieve highly accurate cancer detection results.
Our approach can leverage clinical knowledge by large-scale pre-trained VLM to enhance generalization ability.
arXiv Detail & Related papers (2024-05-23T07:03:38Z)
- Leveraging Transformers to Improve Breast Cancer Classification and Risk Assessment with Multi-modal and Longitudinal Data [3.982926115291704]
Multi-modal Transformer (MMT) is a neural network that utilizes mammography and ultrasound synergistically.
MMT tracks temporal tissue changes by comparing current exams to prior imaging.
For 5-year risk prediction, MMT attains an AUROC of 0.826, outperforming prior mammography-based risk models.
arXiv Detail & Related papers (2023-11-06T16:01:42Z)
- Style transfer between Microscopy and Magnetic Resonance Imaging via Generative Adversarial Network in small sample size settings [49.84018914962972]
Cross-modal augmentation of Magnetic Resonance Imaging (MRI) and microscopic imaging based on the same tissue samples is promising.
We tested a method for generating microscopic histological images from MRI scans of the corpus callosum using conditional generative adversarial network (cGAN) architecture.
arXiv Detail & Related papers (2023-10-16T13:58:53Z)
- Radiomics Boosts Deep Learning Model for IPMN Classification [3.4659499358648675]
Intraductal Papillary Mucinous Neoplasm (IPMN) cysts are pre-malignant pancreas lesions, and they can progress into pancreatic cancer.
In this study, we propose a novel computer-aided diagnosis pipeline for IPMN risk classification from MRI scans.
arXiv Detail & Related papers (2023-09-11T22:41:52Z)
- Multi-view Contrastive Learning with Additive Margin for Adaptive Nasopharyngeal Carcinoma Radiotherapy Prediction [7.303184467211488]
We propose a supervised multi-view contrastive learning method with an additive margin.
For each patient, four medical images are considered to form multi-view positive pairs.
In addition, the embedding space is learned by means of contrastive learning.
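The additive-margin contrastive objective described in the entry above can be sketched as a minimal NumPy illustration; the margin and temperature values, function names, and toy vectors below are assumptions in the style of AM-Softmax-type losses, not the paper's settings. In the paper's setup, a patient's four images would form the positive pairs and other patients' images the negatives.

```python
# Sketch of a supervised contrastive loss with an additive margin:
# the positive-pair similarity is penalized by m before the softmax,
# so positives must beat negatives by at least the margin.
import numpy as np

def am_contrastive_loss(z_anchor, z_pos, z_negs, m=0.2, tau=0.1):
    """Cross-entropy of (cos_pos - m) against negative cosines."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    pos = cos(z_anchor, z_pos) - m                 # additive margin on positive
    negs = [cos(z_anchor, n) for n in z_negs]
    logits = np.array([pos] + negs) / tau          # temperature-scaled logits
    logits = logits - logits.max()                 # numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))

# Toy check: the margin makes the objective strictly harder to satisfy.
a, p = np.array([1.0, 0.0]), np.array([0.9, 0.1])
negatives = [np.array([0.0, 1.0])]
assert am_contrastive_loss(a, p, negatives, m=0.2) > \
       am_contrastive_loss(a, p, negatives, m=0.0)
```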
arXiv Detail & Related papers (2022-10-27T06:21:24Z)
- Towards Confident Detection of Prostate Cancer using High Resolution Micro-ultrasound [7.826781688190151]
Detection of prostate cancer during transrectal ultrasound-guided biopsy is challenging.
Recent advancements in high-frequency ultrasound imaging - micro-ultrasound - have drastically increased the capability of tissue imaging at high resolution.
Our aim is to investigate the development of a robust deep learning model specifically for micro-ultrasound-guided prostate cancer biopsy.
arXiv Detail & Related papers (2022-07-21T14:00:00Z)
- Less is More: Adaptive Curriculum Learning for Thyroid Nodule Diagnosis [50.231954872304314]
We propose an Adaptive Curriculum Learning framework, which adaptively discovers and discards the samples with inconsistent labels.
We also contribute TNCD: a Thyroid Nodule Classification dataset.
arXiv Detail & Related papers (2022-07-02T11:50:02Z)
- Deep learning-based approach to reveal tumor mutational burden status from whole slide images across multiple cancer types [41.61294299606317]
Tumor mutational burden (TMB) is a potential genomic biomarker of immunotherapy.
TMB detected through whole exome sequencing lacks clinical penetration in low-resource settings.
In this study, we proposed a multi-scale deep learning framework to address the detection of TMB status from routinely used whole slide images.
arXiv Detail & Related papers (2022-04-07T07:02:32Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and light-weighted learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Learned super resolution ultrasound for improved breast lesion characterization [52.77024349608834]
Super resolution ultrasound localization microscopy enables imaging of the microvasculature at the capillary level.
In this work we use a deep neural network architecture that makes effective use of signal structure to address these challenges.
By leveraging our trained network, the microvasculature structure is recovered in a short time, without prior PSF knowledge, and without requiring separability of the UCAs.
arXiv Detail & Related papers (2021-07-12T09:04:20Z)
- Melanoma Diagnosis with Spatio-Temporal Feature Learning on Sequential Dermoscopic Images [40.743870665742975]
Existing methods for automated melanoma diagnosis are based on single-time-point images of lesions.
We propose an automated framework for melanoma diagnosis using sequential dermoscopic images.
arXiv Detail & Related papers (2020-06-19T04:08:22Z)
- An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
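The three-stage pipeline described in the entry above (a cheap global pass to find informative regions, a detailed local pass over those regions, then fusion) can be sketched with stand-in components. Everything below (the patch size, the mean-intensity saliency heuristic, the sigmoid scorers, and the equal-weight fusion) is a hypothetical illustration of the control flow, not the paper's networks.

```python
# Sketch of a two-stage global/local classifier with late fusion.
import numpy as np

rng = np.random.default_rng(1)

def global_local_predict(image, k=3, patch=64):
    """Pick the k most salient patches with a cheap global pass,
    score them with a stand-in local model, then fuse."""
    H, W = image.shape
    gh, gw = H // patch, W // patch
    # Stage 1: "low-capacity network" stand-in -- per-patch mean intensity
    # over non-overlapping patches serves as a saliency map.
    grid = image[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch)
    saliency = grid.mean(axis=(1, 3)).ravel()          # (gh*gw,) patch scores
    top = np.argsort(saliency)[-k:]                    # k most informative
    # Stage 2: "high-capacity network" stand-in -- sigmoid of how far each
    # chosen patch sits above the global mean.
    local_scores = 1.0 / (1.0 + np.exp(-(saliency[top] - saliency.mean())))
    global_score = 1.0 / (1.0 + np.exp(-(saliency.mean() - 0.5)))
    # Stage 3: fusion module stand-in -- average global and local evidence.
    return 0.5 * global_score + 0.5 * local_scores.mean()

img = rng.random((512, 512))
prob = global_local_predict(img)                       # value in [0, 1]
```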
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
- Stan: Small tumor-aware network for breast ultrasound image segmentation [68.8204255655161]
We propose a novel deep learning architecture called Small Tumor-Aware Network (STAN) to improve the performance of segmenting tumors with different size.
The proposed approach outperformed the state-of-the-art approaches in segmenting small breast tumors.
arXiv Detail & Related papers (2020-02-03T22:25:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.