Efficient Lung Ultrasound Severity Scoring Using Dedicated Feature Extractor
- URL: http://arxiv.org/abs/2501.12524v1
- Date: Tue, 21 Jan 2025 22:28:22 GMT
- Title: Efficient Lung Ultrasound Severity Scoring Using Dedicated Feature Extractor
- Authors: Jiaqi Guo, Yunnan Wu, Evangelos Kaimakamis, Georgios Petmezas, Vasileios E. Papageorgiou, Nicos Maglaveras, Aggelos K. Katsaggelos
- Abstract summary: MeDiVLAD is a novel pipeline for multi-level lung-ultrasound severity scoring.
We leverage self-knowledge distillation to pretrain a vision transformer (ViT) without labels and aggregate frame-level features.
We show that with minimal finetuning, MeDiVLAD outperforms conventional fully-supervised methods in both frame- and video-level scoring.
- Score: 11.77784387664675
- License:
- Abstract: With the advent of the COVID-19 pandemic, ultrasound imaging has emerged as a promising technique for COVID-19 detection, due to its non-invasive nature, affordability, and portability. In response, researchers have focused on developing AI-based scoring systems to provide real-time diagnostic support. However, the limited size and lack of proper annotation in publicly available ultrasound datasets pose significant challenges for training a robust AI model. This paper proposes MeDiVLAD, a novel pipeline to address the above issue for multi-level lung-ultrasound (LUS) severity scoring. In particular, we leverage self-knowledge distillation to pretrain a vision transformer (ViT) without labels and aggregate frame-level features via dual-level VLAD aggregation. We show that with minimal finetuning, MeDiVLAD outperforms conventional fully-supervised methods in both frame- and video-level scoring, while offering classification reasoning with exceptional quality. This superior performance enables key applications such as the automatic identification of critical lung pathology areas and provides a robust solution for broader medical video classification tasks.
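The VLAD aggregation at the heart of MeDiVLAD can be illustrated with a single-level sketch: each frame-level feature is assigned to its nearest cluster centroid, the residuals to that centroid are accumulated per cluster, and the flattened result is L2-normalized. The cluster count, feature dimension, and single-level simplification below are illustrative assumptions, not the paper's exact dual-level configuration.

```python
import numpy as np

def vlad_aggregate(features, centroids):
    """VLAD: accumulate each feature's residual to its nearest cluster
    centroid, then flatten and L2-normalize the result.

    features:  (N, D) array of frame-level feature vectors
    centroids: (K, D) array of cluster centers (e.g. from k-means)
    returns:   (K * D,) unit-norm VLAD descriptor
    """
    K, D = centroids.shape
    # hard-assign each feature to its nearest centroid
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    vlad = np.zeros((K, D))
    for k in range(K):
        members = features[assign == k]
        if len(members):
            # sum of residuals for cluster k
            vlad[k] = (members - centroids[k]).sum(axis=0)
    vlad = vlad.reshape(-1)
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad
```

A second aggregation pass over clip-level descriptors would extend this single-level sketch toward the paper's dual-level scheme.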
Related papers
- Multi-Scale Feature Fusion with Image-Driven Spatial Integration for Left Atrium Segmentation from Cardiac MRI Images [0.0]
We propose a framework that integrates DINOv2 as an encoder with a UNet-style decoder, incorporating multi-scale feature fusion and input image integration.
We validate our approach on the LAScarQS 2022 dataset and demonstrate improved performance with a 92.3% Dice and 84.1% IoU score for giant architecture.
arXiv Detail & Related papers (2025-02-10T16:12:46Z) - Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collect and annotate the first benchmark dataset that covers diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR)
arXiv Detail & Related papers (2024-08-19T15:04:42Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - Deep denoising autoencoder-based non-invasive blood flow detection for arteriovenous fistula [10.030431512848239]
We propose an approach based on deep denoising autoencoders (DAEs) that perform dimensionality reduction and reconstruction tasks.
Our results demonstrate that the latent representation generated by the DAE surpasses expectations with an accuracy of 0.93.
The incorporation of noise-mixing and the utilization of a noise-to-clean scheme effectively enhance the discriminative capabilities of the latent representation.
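The noise-to-clean scheme can be sketched with a tiny linear denoising autoencoder: inputs are corrupted with Gaussian noise while the reconstruction target stays clean, so the latent representation learns to discard the noise. The architecture, noise level, and plain gradient-descent optimizer below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def train_dae(X, hidden=4, noise_std=0.1, lr=0.02, epochs=800, seed=0):
    """Tiny linear denoising autoencoder trained noise-to-clean:
    corrupt the input, reconstruct the clean signal.

    X: (n, d) data matrix; returns encoder/decoder weights (W1, W2).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden))   # encoder
    W2 = rng.normal(0, 0.1, (hidden, d))   # decoder
    for _ in range(epochs):
        Xn = X + rng.normal(0, noise_std, X.shape)  # noisy input
        H = Xn @ W1                                  # latent representation
        R = H @ W2                                   # reconstruction
        err = R - X                                  # compare to CLEAN target
        gW2 = H.T @ err / n
        gW1 = Xn.T @ (err @ W2.T) / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2
```

The latent code `H` is what a downstream classifier would consume; training against the clean target is what makes it noise-robust.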
arXiv Detail & Related papers (2023-06-12T04:46:01Z) - An Evaluation of Lightweight Deep Learning Techniques in Medical Imaging for High Precision COVID-19 Diagnostics [0.0]
Decision support systems ease the challenges inherent in the visual examination of images.
Most approaches utilised by deep learning algorithms are not amenable to implementation on resource-constrained devices.
This paper presents the development and performance evaluation of a lightweight deep learning technique for the detection of COVID-19 using the MobileNetV2 model.
arXiv Detail & Related papers (2023-05-30T13:14:03Z) - Toward Robust Diagnosis: A Contour Attention Preserving Adversarial Defense for COVID-19 Detection [10.953610196636784]
We propose a Contour Attention Preserving (CAP) method based on lung cavity edge extraction.
Experimental results indicate that the proposed method achieves state-of-the-art performance in multiple adversarial defense and generalization tasks.
arXiv Detail & Related papers (2022-11-30T08:01:23Z) - Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
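A one-level 2-D Haar transform, the simplest DWT, illustrates the sub-band decomposition such methods build on: the LL band keeps the low-frequency approximation, while the LH/HL/HH bands retain the high-frequency content the paper aims to preserve. This is a generic sketch, not the paper's specific encoding.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT for an even-sized grayscale image.

    Returns the approximation band LL and the detail bands LH, HL, HH,
    each half the input size in both dimensions.
    """
    # transform along rows: pairwise averages and differences
    a = (img[0::2, :] + img[1::2, :]) / 2.0
    d = (img[0::2, :] - img[1::2, :]) / 2.0
    # transform along columns of each intermediate band
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0   # low-low: smooth approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return LL, LH, HL, HH
```

Feeding the detail bands to a classifier alongside (or instead of) a downsampled image is one way to keep high-frequency information that plain resizing would discard.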
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - Factored Attention and Embedding for Unstructured-view Topic-related Ultrasound Report Generation [70.7778938191405]
We propose a novel factored attention and embedding model (termed FAE-Gen) for the unstructured-view topic-related ultrasound report generation.
The proposed FAE-Gen mainly consists of two modules, i.e., view-guided factored attention and topic-oriented factored embedding, which capture the homogeneous and heterogeneous morphological characteristics across different views.
arXiv Detail & Related papers (2022-03-12T15:24:03Z) - Weakly Supervised Contrastive Learning for Better Severity Scoring of
Lung Ultrasound [0.044364554283083675]
Several AI-based patient severity scoring models have been proposed that rely on scoring the appearance of the ultrasound scans.
We address the challenge of labeling every ultrasound frame in the video clips.
Our contrastive learning method treats the video clip severity labels as noisy weak severity labels for individual frames.
We show that it performs better than the conventional cross-entropy loss based training.
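The idea of treating clip-level severity labels as noisy weak labels for individual frames can be sketched with a simplified supervised-contrastive loss: every frame inherits its clip's label, and frames sharing a label are treated as positives. The loss form and temperature below are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def weak_supcon_loss(embeddings, clip_labels, temperature=0.1):
    """Simplified supervised-contrastive loss over frame embeddings.

    embeddings:  (n, d) frame embeddings
    clip_labels: length-n labels, each frame inheriting its clip's
                 severity score (a noisy weak label)
    """
    # cosine-similarity logits between L2-normalized embeddings
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(clip_labels)
    loss, count = 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and clip_labels[j] == clip_labels[i]]
        if not pos:
            continue
        logits = np.delete(sim[i], i)          # drop self-similarity
        log_denom = np.log(np.exp(logits).sum())
        for j in pos:
            jj = j if j < i else j - 1         # index shift after deletion
            loss += -(logits[jj] - log_denom)  # negative log-softmax of positive
            count += 1
    return loss / count
```

Minimizing this pulls frames with the same weak severity label together in embedding space, which is more tolerant of per-frame label noise than a per-frame cross-entropy target.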
arXiv Detail & Related papers (2022-01-18T23:45:18Z) - Towards Unbiased COVID-19 Lesion Localisation and Segmentation via
Weakly Supervised Learning [66.36706284671291]
We propose a data-driven framework supervised by only image-level labels to support unbiased lesion localisation.
The framework can explicitly separate potential lesions from original images, with the help of a generative adversarial network and a lesion-specific decoder.
arXiv Detail & Related papers (2021-03-01T06:05:49Z) - Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.