Contrastive Learning for View Classification of Echocardiograms
- URL: http://arxiv.org/abs/2108.03124v1
- Date: Fri, 6 Aug 2021 13:48:06 GMT
- Title: Contrastive Learning for View Classification of Echocardiograms
- Authors: Agisilaos Chartsias, Shan Gao, Angela Mumith, Jorge Oliveira, Kanwal
Bhatia, Bernhard Kainz, Arian Beqiri
- Abstract summary: We train view classification models for imbalanced cardiac ultrasound datasets and show improved performance for views/classes for which minimal labelled data is available.
Compared to a naive baseline model, we achieve an improvement in F1 score of up to 26% in those views while maintaining state-of-the-art performance for the views with sufficiently many labelled training observations.
- Score: 5.60187022176608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analysis of cardiac ultrasound images is commonly performed in routine
clinical practice for quantification of cardiac function. Its increasing
automation frequently employs deep learning networks that are trained to
predict disease or detect image features. However, such models are extremely
data-hungry and training requires labelling of many thousands of images by
experienced clinicians. Here we propose the use of contrastive learning to
mitigate the labelling bottleneck. We train view classification models for
imbalanced cardiac ultrasound datasets and show improved performance for
views/classes for which minimal labelled data is available. Compared to a naive
baseline model, we achieve an improvement in F1 score of up to 26% in those
views while maintaining state-of-the-art performance for the views with
sufficiently many labelled training observations.
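The abstract does not spell out which contrastive objective the authors use. As a hedged illustration only, the sketch below shows a SimCLR-style NT-Xent loss applied to two augmented views of unlabelled echo frames, one common way to pretrain an encoder before fine-tuning a view classifier on a small labelled subset; the backbone, batch size, temperature, and omitted augmentations are assumptions, not the paper's configuration.

```python
# Hedged sketch: SimCLR-style NT-Xent contrastive pretraining of an
# echocardiogram encoder. Backbone, temperature and shapes are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

encoder = models.resnet18(weights=None)      # illustrative backbone
encoder.fc = torch.nn.Linear(512, 128)       # projection head output

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent loss over paired embeddings z1, z2 of shape (B, D)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D)
    sim = z @ z.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    n = z.size(0)
    # the positive of sample i is the other augmented view, i.e. index i +/- B
    targets = torch.arange(n, device=z.device).roll(n // 2)
    return F.cross_entropy(sim, targets)

# one pretraining step on unlabelled frames (augmentations omitted for brevity)
view_a = torch.randn(32, 3, 224, 224)   # first augmented view of a frame batch
view_b = torch.randn(32, 3, 224, 224)   # second augmented view of the same frames
loss = nt_xent_loss(encoder(view_a), encoder(view_b))
loss.backward()
```

After pretraining, the projection head would typically be replaced by a view-classification head and fine-tuned on the small labelled subset, which is where the reported F1 gains for minority views would be measured.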
Related papers
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- RadTex: Learning Efficient Radiograph Representations from Text Reports [7.090896766922791]
We build a data-efficient learning framework that utilizes radiology reports to improve medical image classification performance with limited labeled data.
Our model achieves higher classification performance than ImageNet-supervised pretraining when labeled training data is limited.
arXiv Detail & Related papers (2022-08-05T15:06:26Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
Efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
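The summary above names the Discrete Wavelet Transform but not the exact encoding. As a minimal, hedged sketch, a single-level 2-D DWT (here via PyWavelets with a Haar wavelet, both assumptions) splits a radiograph into a low-frequency approximation and high-frequency detail subbands that a classifier can then consume explicitly.

```python
# Hedged sketch: single-level 2-D DWT of a radiograph into subbands.
# The wavelet choice ('haar') and downstream use are assumptions.
import numpy as np
import pywt

radiograph = np.random.rand(512, 512)             # placeholder grayscale image
cA, (cH, cV, cD) = pywt.dwt2(radiograph, 'haar')  # approximation + 3 detail subbands
# stack subbands as channels so high-frequency content is preserved for a CNN
encoded = np.stack([cA, cH, cV, cD], axis=0)      # shape: (4, 256, 256)
```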
- Encoding Cardiopulmonary Exercise Testing Time Series as Images for Classification using Convolutional Neural Network [9.227037203895533]
Exercise testing has been available for more than half a century and is a versatile tool for obtaining diagnostic and prognostic information on patients across a range of diseases.
In this work, we encode the time series as images using the Gramian Angular Field and Markov Transition Field.
We use it with a convolutional neural network and attention pooling approach for the classification of heart failure and metabolic syndrome patients.
arXiv Detail & Related papers (2022-04-26T16:49:06Z)
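The Gramian Angular Field named above has a standard formulation; the sketch below computes the summation variant (GASF) for a 1-D exercise-test signal, which could then be classified by a CNN as the summary describes. The signal, its length, and the omission of the Markov Transition Field are assumptions for illustration.

```python
# Hedged sketch: Gramian Angular Summation Field (GASF) encoding of a
# 1-D time series as an image. The input signal is a placeholder.
import numpy as np

def gasf(series: np.ndarray) -> np.ndarray:
    """Encode a 1-D series as a GASF image: G[i, j] = cos(phi_i + phi_j)."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # angular representation
    return np.cos(phi[:, None] + phi[None, :])        # (N, N) image

signal = np.sin(np.linspace(0, 8 * np.pi, 128))  # placeholder CPET-like signal
image = gasf(signal)                              # 128 x 128 input for a CNN
```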
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- A New Semi-supervised Learning Benchmark for Classifying View and Diagnosing Aortic Stenosis from Echocardiograms [4.956777496509955]
We develop a benchmark dataset to assess semi-supervised approaches to two tasks relevant to cardiac ultrasound (echocardiogram) interpretation.
We find that a state-of-the-art method called MixMatch achieves promising gains in heldout accuracy on both tasks.
We pursue patient-level diagnosis prediction, which requires aggregating across hundreds of images of diverse view types.
arXiv Detail & Related papers (2021-07-30T21:08:12Z)
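The benchmark above requires aggregating hundreds of image-level predictions into one patient-level diagnosis; the summary does not say how this is done, so the averaging rule below is only one simple, hedged possibility.

```python
# Hedged sketch: patient-level diagnosis from per-image class probabilities
# by simple averaging. The aggregation rule is an assumption.
import torch

def patient_level_prediction(image_probs: torch.Tensor) -> int:
    """image_probs: (num_images, num_classes) softmax outputs for one patient."""
    patient_probs = image_probs.mean(dim=0)    # average over all images/views
    return int(patient_probs.argmax())         # predicted diagnosis class

probs = torch.softmax(torch.randn(300, 3), dim=1)  # placeholder: 300 images, 3 classes
diagnosis = patient_level_prediction(probs)
```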
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Big Self-Supervised Models Advance Medical Image Classification [36.23989703428874]
We study the effectiveness of self-supervised learning as a pretraining strategy for medical image classification.
We use a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple images of the underlying pathology per patient case.
We show that big self-supervised models are robust to distribution shift and can learn efficiently with a small number of labeled medical images.
arXiv Detail & Related papers (2021-01-13T17:36:31Z)
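Per the summary above, MICLe's key idea is drawing positive pairs from multiple images of the same patient case rather than only from augmentations of a single image. The pair-sampling sketch below illustrates that idea; the data layout is an assumption, and a contrastive loss such as the NT-Xent sketch earlier could be applied to the resulting pairs.

```python
# Hedged sketch: MICLe-style positive-pair sampling, where two *different*
# images of the same patient form a positive pair for contrastive learning.
# The data structure is an assumption.
import random

def sample_positive_pairs(images_by_patient, num_pairs):
    """images_by_patient: {patient_id: [image_0, image_1, ...]}."""
    eligible = [imgs for imgs in images_by_patient.values() if len(imgs) >= 2]
    pairs = []
    for _ in range(num_pairs):
        imgs = random.choice(eligible)
        pairs.append(tuple(random.sample(imgs, 2)))  # two distinct images
    return pairs
```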
- Improving Medical Annotation Quality to Decrease Labeling Burden Using Stratified Noisy Cross-Validation [3.690031561736533]
Variability in diagnosis of medical images is well established; variability in training and attention to task among medical labelers may exacerbate this issue.
Noisy Cross-Validation splits the training data into halves, and has been shown to identify low-quality labels in computer vision tasks.
In this work we introduce Stratified Noisy Cross-Validation (SNCV), an extension of noisy cross-validation.
arXiv Detail & Related papers (2020-09-22T23:32:59Z)
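The summary above describes noisy cross-validation as splitting the training data into halves to surface low-quality labels. The sketch below shows that core loop, with a model trained on one half scoring the labels of the other half; the classifier, threshold, and the omission of SNCV's stratification are assumptions.

```python
# Hedged sketch: noisy cross-validation for flagging likely low-quality labels.
# A model trained on one half scores the other half; samples whose given label
# receives low predicted probability are flagged. Classifier and threshold are
# assumptions; SNCV's stratification of the split is omitted here.
import numpy as np
from sklearn.linear_model import LogisticRegression

def flag_suspect_labels(X, y, threshold=0.2, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    halves = (idx[: len(y) // 2], idx[len(y) // 2 :])
    suspect = np.zeros(len(y), dtype=bool)
    for train_idx, eval_idx in (halves, halves[::-1]):
        model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        probs = model.predict_proba(X[eval_idx])              # (n, n_classes)
        col = {c: i for i, c in enumerate(model.classes_)}
        given = np.array([col[label] for label in y[eval_idx]])
        suspect[eval_idx] = probs[np.arange(len(eval_idx)), given] < threshold
    return suspect  # True where the given label looks unreliable
```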
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to aid the cross-attention process and to overcome both the imbalance between classes and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
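The exact loss used by the CAN paper is not given in the summary above. As one common, hedged example of a loss that goes beyond cross-entropy to down-weight easy samples and counter class imbalance, the sketch below implements a class-weighted multi-label focal loss; the weighting scheme and gamma are assumptions, not that paper's formulation.

```python
# Hedged sketch: class-weighted focal loss for multi-label chest X-ray
# classification -- a common imbalance-aware alternative to plain
# cross-entropy, not necessarily the loss used by the CAN paper.
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, pos_weight, gamma=2.0):
    """logits, targets: (B, num_classes); targets are 0/1 multi-label vectors."""
    p = torch.sigmoid(logits)
    pt = torch.where(targets == 1, p, 1 - p)          # prob of the given label
    bce = F.binary_cross_entropy_with_logits(
        logits, targets, pos_weight=pos_weight, reduction="none")
    return ((1 - pt) ** gamma * bce).mean()           # down-weight easy samples

logits = torch.randn(8, 14)                  # e.g. 14 thoracic disease labels
targets = torch.randint(0, 2, (8, 14)).float()
pos_weight = torch.full((14,), 5.0)          # illustrative positive-class weights
loss = weighted_focal_loss(logits, targets, pos_weight)
```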
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.