An Automatic Detection Method for Hematoma Features in Placental Abruption Ultrasound Images Based on Few-Shot Learning
- URL: http://arxiv.org/abs/2510.21495v1
- Date: Fri, 24 Oct 2025 14:20:34 GMT
- Title: An Automatic Detection Method for Hematoma Features in Placental Abruption Ultrasound Images Based on Few-Shot Learning
- Authors: Xiaoqing Liu, Jitai Han, Hua Yan, Peng Li, Sida Tang, Ying Li, Kaiwen Zhang, Min Yu
- Abstract summary: Placental abruption is a severe complication during pregnancy, and its early accurate diagnosis is crucial for ensuring maternal and fetal safety. This paper proposes an improved model, EH-YOLOv11n, based on small-sample learning, aiming to achieve automatic detection of hematoma features in placental ultrasound images. Experimental results demonstrate a detection accuracy of 78%, representing a 2.5% improvement over YOLOv11n and a 13.7% increase over YOLOv8.
- Score: 11.678844582870523
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Placental abruption is a severe complication during pregnancy, and its early accurate diagnosis is crucial for ensuring maternal and fetal safety. Traditional ultrasound diagnostic methods heavily rely on physician experience, leading to issues such as subjective bias and diagnostic inconsistencies. This paper proposes an improved model, EH-YOLOv11n (Enhanced Hemorrhage-YOLOv11n), based on small-sample learning, aiming to achieve automatic detection of hematoma features in placental ultrasound images. The model enhances performance through multidimensional optimization: it integrates wavelet convolution and coordinate convolution to strengthen frequency and spatial feature extraction; incorporates a cascaded group attention mechanism to suppress ultrasound artifacts and occlusion interference, thereby improving bounding box localization accuracy. Experimental results demonstrate a detection accuracy of 78%, representing a 2.5% improvement over YOLOv11n and a 13.7% increase over YOLOv8. The model exhibits significant superiority in precision-recall curves, confidence scores, and occlusion scenarios. Combining high accuracy with real-time processing, this model provides a reliable solution for computer-aided diagnosis of placental abruption, holding significant clinical application value.
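The coordinate convolution mentioned in the abstract works by appending normalized (x, y) position channels to a feature map before a standard convolution, giving the network explicit spatial information. The sketch below is an illustrative NumPy-only reconstruction of that idea, not the paper's implementation; the function name `add_coord_channels` is our own.

```python
import numpy as np

def add_coord_channels(feature_map):
    """Append normalized (y, x) coordinate channels to a feature map.

    feature_map: array of shape (C, H, W). Returns (C + 2, H, W).
    The added channels range from -1 to 1 along each spatial axis,
    so a subsequent convolution can condition on absolute position.
    """
    c, h, w = feature_map.shape
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
    return np.concatenate([feature_map, ys[None], xs[None]], axis=0)

fm = np.random.rand(16, 8, 8)
out = add_coord_channels(fm)
print(out.shape)  # (18, 8, 8)
```

In a detection backbone the enriched map would then be fed to an ordinary convolution layer, which is where the "coordinate convolution" name comes from.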
Related papers
- X-Mark: Saliency-Guided Robust Dataset Ownership Verification for Medical Imaging [67.85884025186755]
High-quality medical imaging datasets are essential for training deep learning models, but their unauthorized use raises serious copyright and ethical concerns. Medical imaging presents a unique challenge for existing dataset ownership verification methods designed for natural images. We propose X-Mark, a sample-specific clean-label watermarking method for chest x-ray copyright protection.
arXiv Detail & Related papers (2026-02-10T00:03:43Z) - Improved cystic hygroma detection from prenatal imaging using ultrasound-specific self-supervised representation learning [0.18058404137575482]
Cystic hygroma is a high-risk prenatal ultrasound finding that portends high rates of chromosomal abnormalities, structural malformations, and adverse pregnancy outcomes. This study assesses whether ultrasound-specific self-supervised pretraining can facilitate accurate, robust deep learning detection of cystic hygroma in first-trimester ultrasound images.
arXiv Detail & Related papers (2025-12-28T00:07:26Z) - Deep Unsupervised Anomaly Detection in Brain Imaging: Large-Scale Benchmarking and Bias Analysis [42.60508892284938]
We present a large-scale, multi-center benchmark of deep unsupervised anomaly detection for brain imaging. We tested 2,221 T1w and 1,262 T2w scans spanning healthy datasets and diverse clinical cohorts. Our benchmark establishes a transparent foundation for future research and highlights priorities for clinical translation.
arXiv Detail & Related papers (2025-12-01T11:03:27Z) - Deep Learning Analysis of Prenatal Ultrasound for Identification of Ventriculomegaly [0.17476892297485447]
Ventriculomegaly is a prenatal condition characterized by dilated cerebral ventricles of the fetal brain. The proposed model incorporates a Vision Transformer encoder pretrained on more than 370,000 ultrasound images from the OpenUS-46 corpus. The model reached an F1-score of 91.76% on the 5-fold cross-validation and 91.78% on the independent test set.
arXiv Detail & Related papers (2025-11-11T04:45:48Z) - Validating Vision Transformers for Otoscopy: Performance and Data-Leakage Effects [42.465094107111646]
This study evaluates the efficacy of vision transformer models, specifically Swin transformers, in enhancing the diagnostic accuracy of ear diseases.<n>The research utilised a real-world dataset from the Department of Otolaryngology at the Clinical Hospital of the Universidad de Chile.
arXiv Detail & Related papers (2025-11-06T23:20:37Z) - Epistemic-aware Vision-Language Foundation Model for Fetal Ultrasound Interpretation [83.02147613524032]
We introduce FetalMind, a medical AI system tailored to fetal ultrasound for both report generation and diagnosis. We propose Salient Epistemic Disentanglement (SED), which injects an expert-curated bipartite graph into the model to decouple view-disease associations. FetalMind outperforms open- and closed-source baselines across all gestational stages, achieving +14% average gains and +61.2% higher accuracy on critical conditions.
arXiv Detail & Related papers (2025-10-14T19:57:03Z) - 3D Convolutional Neural Networks for Improved Detection of Intracranial bleeding in CT Imaging [0.0]
Intracranial bleeding (IB) is a life-threatening condition caused by traumatic brain injuries. Traditional imaging can be slow and prone to variability, especially in high-pressure scenarios. This article explores AI's role in transforming IB detection in emergency settings.
arXiv Detail & Related papers (2025-03-26T08:10:29Z) - Efficient Precision Control in Object Detection Models for Enhanced and Reliable Ovarian Follicle Counting [37.9434503914985]
A major challenge for machine learning is to control the precision of predictions while enabling a high recall. We use a multiple testing procedure that offers a better-performing way to resolve the standard precision-recall trade-off. As it is model-agnostic, this contextual selection procedure paves the way to the development of a strategy that can improve the performance of any model without the need of retraining it.
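As a simplified illustration of the precision-control idea (not the paper's multiple-testing procedure), one can pick, on a calibration set, the loosest confidence threshold whose empirical precision still meets a target, thereby maximizing recall under that constraint. The helper name `threshold_for_precision` and the toy data are our own assumptions.

```python
import numpy as np

def threshold_for_precision(scores, labels, target_precision):
    """Return the lowest confidence threshold whose calibration-set
    precision meets the target, or None if no threshold qualifies."""
    order = np.argsort(-scores)          # sort detections by confidence
    scores, labels = scores[order], labels[order]
    tp = np.cumsum(labels)               # true positives at each cutoff
    precision = tp / np.arange(1, len(labels) + 1)
    valid = np.where(precision >= target_precision)[0]
    if len(valid) == 0:
        return None
    k = valid[-1]                        # deepest cut still meeting target
    return scores[k]

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5])
labels = np.array([1, 1, 0, 1, 0])
print(threshold_for_precision(scores, labels, 0.75))  # 0.6
```

Keeping detections with confidence at or above the returned threshold guarantees the target precision on the calibration data; how well that transfers to new data is exactly what the paper's statistical procedure is designed to control.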
arXiv Detail & Related papers (2025-01-23T19:04:47Z) - Multi-Center Study on Deep Learning-Assisted Detection and Classification of Fetal Central Nervous System Anomalies Using Ultrasound Imaging [11.261565838608488]
Prenatal ultrasound evaluates fetal growth and detects congenital abnormalities during pregnancy. We construct a deep learning model to improve the overall accuracy of the diagnosis of fetal cranial anomalies.
arXiv Detail & Related papers (2025-01-01T07:56:26Z) - Privacy-Preserving Federated Foundation Model for Generalist Ultrasound Artificial Intelligence [83.02106623401885]
We present UltraFedFM, an innovative privacy-preserving ultrasound foundation model.
UltraFedFM is collaboratively pre-trained using federated learning across 16 distributed medical institutions in 9 countries.
It achieves an average area under the receiver operating characteristic curve of 0.927 for disease diagnosis and a dice similarity coefficient of 0.878 for lesion segmentation.
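The dice similarity coefficient reported above is a standard overlap metric for comparing a predicted lesion mask against the ground truth. A minimal sketch (the function name `dice_coefficient` and the toy masks are our own, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |target|), in [0, 1]."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 0.667
```

The small `eps` term keeps the ratio defined when both masks are empty, a common convention in segmentation evaluation.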
arXiv Detail & Related papers (2024-11-25T13:40:11Z) - Capsule Endoscopy Multi-classification via Gated Attention and Wavelet Transformations [1.5146068448101746]
Abnormalities in the gastrointestinal tract significantly influence the patient's health and require a timely diagnosis. The work presents the process of developing and evaluating a novel model designed to classify gastrointestinal anomalies from a video frame. The integration of the Omni Dimensional Gated Attention (OGA) mechanism and wavelet transformation techniques into the model's architecture allowed the model to focus on the most critical areas. The model's performance is benchmarked against two base models, VGG16 and ResNet50, demonstrating its enhanced ability to identify and classify a range of gastrointestinal abnormalities accurately.
arXiv Detail & Related papers (2024-10-25T08:01:35Z) - Efficient Feature Extraction Using Light-Weight CNN Attention-Based Deep Learning Architectures for Ultrasound Fetal Plane Classification [3.998431476275487]
We propose a lightweight artificial intelligence architecture to classify the largest benchmark ultrasound dataset.
The approach fine-tunes from lightweight EfficientNet feature extraction backbones pre-trained on the ImageNet1k.
Our methodology incorporates the attention mechanism to refine features and 3-layer perceptrons for classification, achieving superior performance with the highest Top-1 accuracy of 96.25%, Top-2 accuracy of 99.80% and F1-Score of 0.9576.
arXiv Detail & Related papers (2024-10-22T20:02:38Z) - Enhancing Diagnostic Reliability of Foundation Model with Uncertainty Estimation in OCT Images [41.002573031087856]
We developed a foundation model with uncertainty estimation (FMUE) to detect 11 retinal conditions on optical coherence tomography (OCT).
FMUE achieved a higher F1 score of 96.76% than two state-of-the-art algorithms, RETFound and UIOS, and improved further to 98.44% with a thresholding strategy.
Our model is superior to two ophthalmologists with a higher F1 score (95.17% vs. 61.93% & 71.72%).
arXiv Detail & Related papers (2024-06-18T03:04:52Z) - Interpretable cancer cell detection with phonon microscopy using multi-task conditional neural networks for inter-batch calibration [39.759100498329275]
We present a conditional neural network framework to simultaneously achieve inter-batch calibration.
We validate our approach by training and validating on different experimental batches.
We extend our model to reconstruct denoised signals, enabling physical interpretation of salient features indicating disease state.
arXiv Detail & Related papers (2024-03-26T12:20:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.