Self-Supervised Curricular Deep Learning for Chest X-Ray Image
Classification
- URL: http://arxiv.org/abs/2301.10687v1
- Date: Wed, 25 Jan 2023 16:45:13 GMT
- Title: Self-Supervised Curricular Deep Learning for Chest X-Ray Image
Classification
- Authors: Iván de Andrés Tamé, Kirill Sirotkin, Pablo Carballeira, Marcos
Escudero-Viñolo
- Abstract summary: Self-Supervised Learning pretraining outperforms models trained from scratch or pretrained on ImageNet.
Top-performing SSL-pretrained models show a higher degree of attention in the lung regions.
- Score: 1.6631602844999727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning technologies have already demonstrated a high potential to
build diagnosis support systems from medical imaging data, such as Chest X-Ray
images. However, the shortage of labeled data in the medical field represents
one key obstacle to narrowing the performance gap with respect to
applications in other image domains. In this work, we investigate the benefits
of a curricular Self-Supervised Learning (SSL) pretraining scheme with respect
to fully-supervised training regimes for pneumonia recognition on Chest X-Ray
images of Covid-19 patients. We show that curricular SSL pretraining, which
leverages unlabeled data, outperforms models trained from scratch, or
pretrained on ImageNet, indicating the potential of performance gains by SSL
pretraining on massive unlabeled datasets. Finally, we demonstrate that
top-performing SSL-pretrained models show a higher degree of attention in the
lung regions, suggesting that they may be more robust to the external
confounding factors in the training datasets identified by previous works.
Related papers
- Self-supervised learning for skin cancer diagnosis with limited training data [0.196629787330046]
Self-supervised learning (SSL) is an alternative to the standard supervised pre-training on ImageNet data for scenarios with limited training data.
We find that minimal further SSL pre-training on task-specific data can be as effective as large-scale SSL pre-training on ImageNet for medical image classification tasks with limited labelled data.
arXiv Detail & Related papers (2024-01-01T08:11:38Z) - CXR-CLIP: Toward Large Scale Chest X-ray Language-Image Pre-training [6.292642131180376]
In this paper, we tackle the lack of image-text data in chest X-ray by expanding image-label pair as image-text pair via general prompt.
We also design two contrastive losses, named ICL and TCL, for learning study-level characteristics of medical images and reports.
Our model outperforms the state-of-the-art models trained under the same conditions.
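The CXR-CLIP entry mentions two contrastive losses (ICL and TCL) without defining them. As a generic reference point only, the standard CLIP-style symmetric InfoNCE objective over paired image/text embeddings looks like the following; this is not the paper's exact formulation, just the family of loss it builds on:

```python
import numpy as np

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over N paired (image, text) embeddings.

    Matched pairs sit on the diagonal of the similarity matrix and are
    pulled together; all off-diagonal pairs are pushed apart.
    """
    # L2-normalise so the dot product is cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (N, N) similarity matrix

    def xent_diag(l):
        # cross-entropy with the diagonal (matched pair) as the target
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))

    # average the image-to-text and text-to-image directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

With perfectly matched embeddings the loss is near zero; shuffling the text side against the image side drives it up, which is the signal that trains the joint space.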
arXiv Detail & Related papers (2023-10-20T05:44:55Z) - MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep
Models for X-ray Images of Multiple Body Parts [63.30352394004674]
Multi-task Self-supervised Continual Learning (MUSCLE) is a novel self-supervised pre-training pipeline for medical imaging tasks.
MUSCLE aggregates X-rays collected from multiple body parts for representation learning, and adopts a well-designed continual learning procedure.
We evaluate MUSCLE using 9 real-world X-ray datasets with various tasks, including pneumonia classification, skeletal abnormality classification, lung segmentation, and tuberculosis (TB) detection.
arXiv Detail & Related papers (2023-10-03T12:19:19Z) - Enhancing Network Initialization for Medical AI Models Using
Large-Scale, Unlabeled Natural Images [1.883452979588382]
Self-supervised learning (SSL) can be applied to chest radiographs to learn robust features.
We tested our approach on over 800,000 chest radiographs from six large global datasets.
arXiv Detail & Related papers (2023-08-15T10:37:13Z) - Forward-Forward Contrastive Learning [4.465144120325802]
We propose Forward Forward Contrastive Learning (FFCL) as a novel pretraining approach for medical image classification.
FFCL outperforms existing pretraining models on the pneumonia classification task (3.69% higher accuracy than an ImageNet-pretrained ResNet-18).
arXiv Detail & Related papers (2023-05-04T15:29:06Z) - Vision-Language Modelling For Radiological Imaging and Reports In The
Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z) - Few-Shot Transfer Learning to improve Chest X-Ray pathology detection
using limited triplets [0.0]
Deep learning approaches have reached near-human or better-than-human performance on many diagnostic tasks.
We introduce a practical approach to improve the predictions of a pre-trained model through Few-Shot Learning.
arXiv Detail & Related papers (2022-04-16T15:44:56Z) - Variational Knowledge Distillation for Disease Classification in Chest
X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
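The summary names K-nearest neighbor smoothing (KNNS) but does not spell it out. One plausible minimal reading, sketched here for illustration only (this is not the paper's MODL/KNNS formulation), is to blend each sample's target distribution with the mean target of its k nearest neighbours in feature space, encouraging locally consistent labels:

```python
import numpy as np

def knn_smooth(features, targets, k=3, alpha=0.5):
    """Blend each target distribution with its k-NN mean.

    features: (N, D) feature vectors; targets: (N, C) per-class
    distributions; alpha controls how much neighbour information
    is mixed in.
    """
    # pairwise Euclidean distances, self-distance excluded
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]           # indices of k nearest
    neighbour_mean = targets[nn].mean(axis=1)   # (N, C)
    return (1 - alpha) * targets + alpha * neighbour_mean
```

Since the output is a convex combination of valid distributions, each smoothed row still sums to one; samples surrounded by same-label neighbours keep their labels unchanged.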
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - Multi-label Thoracic Disease Image Classification with Cross-Attention
Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy to aid the cross-attention process and overcome the imbalance between classes and the easy-dominated samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z) - Self-Training with Improved Regularization for Sample-Efficient Chest
X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.