Weakly Supervised Contrastive Learning for Better Severity Scoring of
Lung Ultrasound
- URL: http://arxiv.org/abs/2201.07357v1
- Date: Tue, 18 Jan 2022 23:45:18 GMT
- Title: Weakly Supervised Contrastive Learning for Better Severity Scoring of
Lung Ultrasound
- Authors: Gautam Rajendrakumar Gare, Hai V. Tran, Bennett P deBoisblanc, Ricardo
Luis Rodriguez, John Michael Galeotti
- Abstract summary: Several AI-based patient severity scoring models have been proposed that rely on scoring the appearance of the ultrasound scans.
We address the challenge of labeling every ultrasound frame in the video clips.
Our contrastive learning method treats the video clip severity labels as noisy weak severity labels for individual frames.
We show that it performs better than conventional cross-entropy-loss-based training.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the onset of the COVID-19 pandemic, ultrasound has emerged as an
effective tool for bedside monitoring of patients. As a result, a large number
of lung ultrasound scans have become available for AI-based diagnosis and
analysis. Several AI-based patient severity scoring models have been proposed
that rely on scoring the appearance of the ultrasound scans. These models are
trained using ultrasound-appearance severity scores that are manually labeled
based on standardized visual features. We address the challenge of labeling
every ultrasound frame in the video clips. Our contrastive learning method
treats the video clip severity labels as noisy weak severity labels for
individual frames, thus requiring only video-level labels. We show that it
outperforms conventional training with a cross-entropy loss. We combine frame
severity predictions into video severity predictions and show that the
frame-based model achieves performance comparable to a video-based TSM model,
on a large dataset combining public and private sources.
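The approach described in the abstract, reusing each clip's severity label as a noisy weak label for every one of its frames in a contrastive objective and then pooling frame predictions into a clip score, might be sketched roughly as below. This is an illustrative NumPy sketch, not the authors' implementation: the SupCon-style loss form, the function names, and the mean-pooling aggregation are all assumptions.

```python
import numpy as np

def weak_supcon_loss(embeddings, video_labels, temperature=0.1):
    """Supervised-contrastive loss that treats each frame's video-level
    severity label as a (noisy) frame label: frames from clips with the
    same severity are positives, all other frames are negatives.

    embeddings: (N, D) array of frame embeddings
    video_labels: length-N severity label of the clip each frame came from
    Assumes every anchor has at least one positive in the batch.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature          # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)       # exclude self-pairs
    # row-wise log-softmax over the remaining pairs
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    labels = np.asarray(video_labels)
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    pos_logp = np.where(pos, logp, 0.0)
    n_pos = pos.sum(axis=1)
    per_anchor = pos_logp.sum(axis=1) / np.maximum(n_pos, 1)
    return float(-per_anchor[n_pos > 0].mean())

def video_severity(frame_scores):
    """Aggregate per-frame severity predictions into one clip score
    (mean pooling here; the paper's exact combination rule may differ)."""
    return float(np.mean(frame_scores))
```

With this loss, a batch in which frames cluster by clip severity scores lower than one in which embeddings ignore the labels, which is the training signal the weak labels provide.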
Related papers
- Intra-video Positive Pairs in Self-Supervised Learning for Ultrasound [65.23740556896654]
Self-supervised learning (SSL) is one strategy for addressing the paucity of labelled data in medical imaging.
In this study, we investigated the effect of utilizing proximal, distinct images from the same B-mode ultrasound video as pairs for SSL.
Named Intra-Video Positive Pairs (IVPP), the method surpassed previous ultrasound-specific contrastive learning methods' average test accuracy on COVID-19 classification.
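The intra-video pairing idea summarized above (proximal, distinct frames of one B-mode clip used as an SSL positive pair) could be sketched as follows; the `max_offset` window and the function name are illustrative assumptions, not details taken from the paper.

```python
import random

def sample_intra_video_pair(num_frames, max_offset=5, rng=random):
    """Pick two distinct frame indices from the same ultrasound clip,
    at most `max_offset` frames apart, to serve as an SSL positive pair.
    Requires num_frames >= 2."""
    i = rng.randrange(num_frames)
    lo = max(0, i - max_offset)
    hi = min(num_frames - 1, i + max_offset)
    j = i
    while j == i:                      # resample until the frames differ
        j = rng.randrange(lo, hi + 1)
    return i, j
```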
arXiv Detail & Related papers (2024-03-12T14:57:57Z)
- Feature-Conditioned Cascaded Video Diffusion Models for Precise Echocardiogram Synthesis [5.102090025931326]
We extend elucidated diffusion models for video modelling to generate plausible video sequences from single images.
Our image-to-sequence approach achieves an $R^2$ score of 93%, 38 points higher than recently proposed sequence-to-sequence generation methods.
arXiv Detail & Related papers (2023-03-22T15:26:22Z)
- Adnexal Mass Segmentation with Ultrasound Data Synthesis [3.614586930645965]
Using supervised learning, we have demonstrated that segmentation of adnexal masses is possible.
We apply a novel pathology-specific data synthesiser to create synthetic medical images with their corresponding ground truth segmentations.
Our approach achieves the best performance across all classes, including an improvement of up to 8% when compared with nnU-Net baseline approaches.
arXiv Detail & Related papers (2022-09-25T19:24:02Z)
- Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrast learning method to boost scene segmentation.
We extensively evaluate our method on a public robotic surgery dataset EndoVis18 and a public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- SVTS: Scalable Video-to-Speech Synthesis [105.29009019733803]
We introduce a scalable video-to-speech framework consisting of two components: a video-to-spectrogram predictor and a pre-trained neural vocoder.
We are the first to show intelligible results on the challenging LRS3 dataset.
arXiv Detail & Related papers (2022-05-04T13:34:07Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Unsupervised multi-latent space reinforcement learning framework for video summarization in ultrasound imaging [0.0]
The COVID-19 pandemic has highlighted the need for a tool to speed up triage in ultrasound scans.
The proposed video-summarization technique is a step in this direction.
We propose a new unsupervised reinforcement learning framework with novel rewards.
arXiv Detail & Related papers (2021-09-03T04:50:35Z)
- Self-supervised Contrastive Video-Speech Representation Learning for Ultrasound [15.517484333872277]
In medical imaging, manual annotations can be expensive to acquire and sometimes infeasible to access.
We propose to address the problem of self-supervised representation learning with multi-modal ultrasound video-speech raw data.
arXiv Detail & Related papers (2020-08-14T23:58:23Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.