A Deep Learning Framework for Thyroid Nodule Segmentation and Malignancy Classification from Ultrasound Images
- URL: http://arxiv.org/abs/2511.11937v1
- Date: Fri, 14 Nov 2025 23:23:24 GMT
- Authors: Omar Abdelrazik, Mohamed Elsayed, Noorul Wahab, Nasir Rajpoot, Adam Shephard
- Abstract summary: We propose a fully automated, two-stage framework for interpretable malignancy prediction. Our method achieves interpretability by forcing the model to focus only on clinically relevant regions. This is the first fully automated end-to-end pipeline for both detecting thyroid nodules on ultrasound images and predicting their malignancy.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Ultrasound-based risk stratification of thyroid nodules is a critical clinical task, but it suffers from high inter-observer variability. While many deep learning (DL) models function as "black boxes," we propose a fully automated, two-stage framework for interpretable malignancy prediction. Our method achieves interpretability by forcing the model to focus only on clinically relevant regions. First, a TransUNet model automatically segments the thyroid nodule. The resulting mask is then used to create a region of interest around the nodule, and this localised image is fed directly into a ResNet-18 classifier. We evaluated our framework using 5-fold cross-validation on a clinical dataset of 349 images, where it achieved a high F1-score of 0.852 for predicting malignancy. To validate its performance, we compared it against a strong baseline using a Random Forest classifier with hand-crafted morphological features, which achieved an F1-score of 0.829. The superior performance of our DL framework suggests that the implicit visual features learned from the localised nodule are more predictive than explicit shape features alone. This is the first fully automated end-to-end pipeline for both detecting thyroid nodules on ultrasound images and predicting their malignancy.
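The abstract describes a two-stage pipeline: a TransUNet segments the nodule, the resulting mask defines a region of interest, and the cropped image is fed to a ResNet-18 classifier. The sketch below illustrates only the intermediate ROI-cropping step in plain NumPy; the `margin` parameter and the bounding-box rule are assumptions for illustration, as the abstract does not specify how the region of interest is constructed.

```python
import numpy as np

def crop_roi(image, mask, margin=0.25):
    """Crop a region of interest around a segmented nodule.

    `image` is a 2-D ultrasound frame, `mask` a binary segmentation of
    the same shape (e.g. from TransUNet). `margin` is the fraction of
    the bounding-box size added on each side; this value and the
    box-with-margin rule are hypothetical, not taken from the paper.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return image  # no nodule detected: fall back to the full image
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    dy = int((y1 - y0) * margin)
    dx = int((x1 - x0) * margin)
    y0, y1 = max(0, y0 - dy), min(image.shape[0], y1 + dy)
    x0, x1 = max(0, x0 - dx), min(image.shape[1], x1 + dx)
    return image[y0:y1, x0:x1]

# Toy example: an 8x8 "ultrasound" image with a 2x2 nodule mask.
img = np.arange(64, dtype=float).reshape(8, 8)
msk = np.zeros((8, 8), dtype=np.uint8)
msk[3:5, 3:5] = 1
roi = crop_roi(img, msk, margin=0.5)
print(roi.shape)  # (4, 4): the 2x2 box expanded by 1 pixel per side
```

In the full pipeline, the returned crop would be resized and passed to the ResNet-18 classifier; restricting the classifier's input to this localised region is what the authors credit for the model's interpretability.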
Related papers
- Improved cystic hygroma detection from prenatal imaging using ultrasound-specific self-supervised representation learning [0.18058404137575482]
Cystic hygroma is a high-risk prenatal ultrasound finding that portends high rates of chromosomal abnormalities, structural malformations, and adverse pregnancy outcomes. This study assesses whether ultrasound-specific self-supervised pretraining can facilitate accurate, robust deep learning detection of cystic hygroma in first-trimester ultrasound images.
arXiv Detail & Related papers (2025-12-28T00:07:26Z) - Self-Supervised Ultrasound Representation Learning for Renal Anomaly Prediction in Prenatal Imaging [0.19544534628180868]
We assessed the performance of a self-supervised ultrasound foundation model for automated fetal renal anomaly classification. Models were compared with a DenseNet-169 convolutional baseline using cross-validation and an independent test set. The largest gains were observed in the multi-class setting, with improvements of 16.28% in AUC and 46.15% in F1-score.
arXiv Detail & Related papers (2025-12-15T15:28:02Z) - Hide-and-Seek Attribution: Weakly Supervised Segmentation of Vertebral Metastases in CT [68.09387763135236]
We introduce a weakly supervised method trained solely on vertebra-level healthy/malignant labels, without any lesion masks. We achieve strong blastic/lytic performance despite no mask supervision.
arXiv Detail & Related papers (2025-12-07T14:03:28Z) - Accurate Thyroid Cancer Classification using a Novel Binary Pattern Driven Local Discrete Cosine Transform Descriptor [3.663197678470621]
We develop a new CAD system for accurate thyroid cancer classification with emphasis on feature extraction. We term our novel descriptor the Binary Pattern Driven Local Discrete Cosine Transform (BPD-LDCT).
arXiv Detail & Related papers (2025-09-19T19:54:04Z) - STACT-Time: Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification [2.510842391292067]
Thyroid cancer is among the most common cancers in the United States. Recent deep learning approaches have sought to improve risk stratification, but they often fail to utilize the rich temporal and spatial context provided by US cine clips. We propose the Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification (STACT-Time) model.
arXiv Detail & Related papers (2025-06-22T21:14:04Z) - Vision-Language Model-Based Semantic-Guided Imaging Biomarker for Lung Nodule Malignancy Prediction [0.38698178563798113]
This research aims to integrate semantic features derived from radiologists' assessments of nodules, guiding the model to learn clinically relevant, robust, and explainable imaging features for predicting lung cancer. We fine-tuned a pretrained Contrastive Language-Image Pretraining model with a parameter-efficient fine-tuning approach to align imaging and semantic text features and predict the one-year lung cancer diagnosis.
arXiv Detail & Related papers (2025-04-30T06:11:34Z) - ThyroidEffi 1.0: A Cost-Effective System for High-Performance Multi-Class Thyroid Carcinoma Classification [0.0]
We develop and validate a deep learning system for multi-class thyroid FNAB image classification. Benign, Indeterminate/Suspicious, and Malignant are three key categories directly guiding post-biopsy treatment. The system processed 1000 cases in 30 seconds, demonstrating feasibility on widely accessible hardware.
arXiv Detail & Related papers (2025-04-19T02:13:07Z) - Certification of Deep Learning Models for Medical Image Segmentation [44.177565298565966]
We present for the first time a certified segmentation baseline for medical imaging based on randomized smoothing and diffusion models.
Our results show that leveraging the power of denoising diffusion probabilistic models helps us overcome the limits of randomized smoothing.
arXiv Detail & Related papers (2023-10-05T16:40:33Z) - Tissue Classification During Needle Insertion Using Self-Supervised Contrastive Learning and Optical Coherence Tomography [53.38589633687604]
We propose a deep neural network that classifies the tissues from the phase and intensity data of complex OCT signals acquired at the needle tip.
We show that with 10% of the training set, our proposed pretraining strategy helps the model achieve an F1 score of 0.84 whereas the model achieves an F1 score of 0.60 without it.
arXiv Detail & Related papers (2023-04-26T14:11:04Z) - Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders [50.689585476660554]
We propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling.
Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models.
arXiv Detail & Related papers (2022-12-14T06:04:18Z) - Less is More: Adaptive Curriculum Learning for Thyroid Nodule Diagnosis [50.231954872304314]
We propose an Adaptive Curriculum Learning framework, which adaptively discovers and discards the samples with inconsistent labels.
We also contribute TNCD: a Thyroid Nodule Classification dataset.
arXiv Detail & Related papers (2022-07-02T11:50:02Z) - A Deep Learning Approach to Predicting Collateral Flow in Stroke Patients Using Radiomic Features from Perfusion Images [58.17507437526425]
Collateral circulation results from specialized anastomotic channels which provide oxygenated blood to regions with compromised blood flow.
The actual grading is mostly done through manual inspection of the acquired images.
We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data.
arXiv Detail & Related papers (2021-10-24T18:58:40Z) - Malignancy Prediction and Lesion Identification from Clinical Dermatological Images [65.1629311281062]
We consider machine-learning-based malignancy prediction and lesion identification from clinical dermatological images.
The system first identifies all lesions present in the image regardless of sub-type or likelihood of malignancy, then estimates their likelihood of malignancy and, through aggregation, generates an image-level likelihood of malignancy.
arXiv Detail & Related papers (2021-04-02T20:52:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.