Few-Shot Transfer Learning to improve Chest X-Ray pathology detection using limited triplets
- URL: http://arxiv.org/abs/2204.07824v1
- Date: Sat, 16 Apr 2022 15:44:56 GMT
- Title: Few-Shot Transfer Learning to improve Chest X-Ray pathology detection using limited triplets
- Authors: Ananth Reddy Bhimireddy, John Lee Burns, Saptarshi Purkayastha, Judy Wawira Gichoya
- Abstract summary: Deep learning approaches have reached near-human or better-than-human performance on many diagnostic tasks.
We introduce a practical approach to improve the predictions of a pre-trained model through Few-Shot Learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning approaches applied to medical imaging have reached near-human
or better-than-human performance on many diagnostic tasks. For instance, the
CheXpert competition on detecting pathologies in chest x-rays has shown
excellent multi-class classification performance. However, training and
validating deep learning models require extensive collections of images and
still produce false inferences, as identified by a human-in-the-loop. In this
paper, we introduce a practical approach to improve the predictions of a
pre-trained model through Few-Shot Learning (FSL). After training and
validating a model, a small number of false inference images are collected to
retrain the model using *Image Triplets*: a false positive or
false negative, a true positive, and a true negative. The retrained FSL model
produces considerable gains in performance with only a few epochs and few
images. In addition, FSL opens rapid retraining opportunities for
human-in-the-loop systems, where a radiologist can relabel false inferences,
and the model can be quickly retrained. We compare the retrained model's
performance with existing FSL approaches in medical imaging that train and
evaluate models in a single pass.
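The abstract gives no implementation, so the retraining step is sketched below in minimal PyTorch: fine-tuning on a single Image Triplet with a triplet margin loss over backbone embeddings. The DenseNet-121 backbone (a common CheXpert choice), the loss, and the hyperparameters are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Feature extractor: DenseNet-121 with its classifier head removed
# (an assumption; the abstract does not name the backbone).
backbone = models.densenet121(weights="DEFAULT")
backbone.classifier = nn.Identity()  # forward() now yields 1024-d features

triplet_loss = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-5)

def finetune_on_triplet(false_inference, true_positive, true_negative):
    """One retraining step on a single Image Triplet (batches of shape
    (B, 3, H, W)). The false inference is the anchor: it is pulled toward
    the correctly classified example of its true label and pushed away
    from the example of the opposite label."""
    optimizer.zero_grad()
    anchor = backbone(false_inference)
    positive = backbone(true_positive)
    negative = backbone(true_negative)
    loss = triplet_loss(anchor, positive, negative)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a human-in-the-loop workflow, each false inference relabeled by a radiologist would yield one such triplet, and a few epochs of this loop constitute the rapid retraining described above.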
Related papers
- FSL-Rectifier: Rectify Outliers in Few-Shot Learning via Test-Time Augmentation [7.477118370563593]
Few-shot learning (FSL) commonly requires a model to identify images (queries) that belong to classes unseen during training.
We generate additional test-class samples by combining original samples with suitable train-class samples via a generative image combiner.
We obtain averaged features via an augmentor; averaging yields more typical representations.
arXiv Detail & Related papers (2024-02-28T12:37:30Z)
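A minimal sketch of the feature-averaging step described above, assuming a generic encoder network; the generative image combiner that produces the augmented queries is treated as given.

```python
import torch

def rectified_features(encoder, query, augmented_queries):
    """Average the embedding of a query with embeddings of its generated
    augmentations to obtain a more typical representation."""
    batch = torch.stack([query, *augmented_queries])  # (K+1, C, H, W)
    with torch.no_grad():
        feats = encoder(batch)                        # (K+1, D)
    return feats.mean(dim=0)                          # (D,) averaged feature
```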
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
arXiv Detail & Related papers (2024-02-16T16:47:21Z)
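One plausible reading of the overlap-minimization objective, sketched under the assumption that "overlap" is reduced by pushing apart per-class mean embeddings of synthetic images; the paper's actual contrastive loss may differ.

```python
import torch
import torch.nn.functional as F

def class_separation_loss(features, labels):
    """Penalize cosine similarity between class-mean embeddings of
    synthetic samples, pushing the per-class distributions apart."""
    classes = labels.unique()
    means = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    sims = F.cosine_similarity(means.unsqueeze(0), means.unsqueeze(1), dim=-1)
    off_diag = sims - torch.eye(len(classes), device=sims.device)  # zero the diagonal
    return off_diag.clamp(min=0).mean()  # similar class means incur a penalty
```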
- MUSCLE: Multi-task Self-supervised Continual Learning to Pre-train Deep Models for X-ray Images of Multiple Body Parts [63.30352394004674]
Multi-task Self-supervised Continual Learning (MUSCLE) is a novel self-supervised pre-training pipeline for medical imaging tasks.
MUSCLE aggregates X-rays collected from multiple body parts for representation learning, and adopts a well-designed continual learning procedure.
We evaluate MUSCLE using 9 real-world X-ray datasets with various tasks, including pneumonia classification, skeletal abnormality classification, lung segmentation, and tuberculosis (TB) detection.
arXiv Detail & Related papers (2023-10-03T12:19:19Z)
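The continual pre-training loop can be outlined schematically; the task heads, loaders, and losses below are placeholders, and MUSCLE's scheduling and anti-forgetting machinery are omitted.

```python
def continual_pretrain(backbone, heads, loaders, losses, optimizer):
    """Visit body-part datasets one after another with a shared backbone.
    Each dataset keeps its own task head and loss (e.g. classification
    for pneumonia, a segmentation loss for lung masks)."""
    for head, loader, loss_fn in zip(heads, loaders, losses):
        for images, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(head(backbone(images)), targets)
            loss.backward()
            optimizer.step()
```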
- BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping [64.54271680071373]
Diffusion models have demonstrated excellent potential for generating diverse images.
Knowledge distillation has been recently proposed as a remedy that can reduce the number of inference steps to one or a few.
We present a novel technique called BOOT, which overcomes these limitations with an efficient data-free distillation algorithm.
arXiv Detail & Related papers (2023-06-08T20:30:55Z)
- FoPro-KD: Fourier Prompted Effective Knowledge Distillation for Long-Tailed Medical Image Recognition [5.64283273944314]
We propose FoPro-KD, a framework that leverages the power of frequency patterns learned from frozen pre-trained models to enhance their transferability and compression.
We demonstrate that leveraging representations from publicly available pre-trained models can substantially improve performance, specifically for rare classes.
Our framework outperforms existing methods, enabling more accessible medical models for rare disease classification.
arXiv Detail & Related papers (2023-05-27T09:01:21Z)
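A hedged sketch of what a "frequency prompt" could look like: a learnable mask over the Fourier spectrum of the input image. FoPro-KD's actual prompting and distillation mechanics may differ.

```python
import torch
import torch.nn as nn

class FourierPrompt(nn.Module):
    """Learnable mask over the Fourier spectrum of an input image."""
    def __init__(self, height, width):
        super().__init__()
        # rfft2 keeps width // 2 + 1 frequency bins along the last axis.
        self.mask = nn.Parameter(torch.ones(height, width // 2 + 1))

    def forward(self, x):                 # x: (B, C, H, W), real-valued
        spectrum = torch.fft.rfft2(x)     # complex frequency representation
        spectrum = spectrum * self.mask   # reweight frequency components
        return torch.fft.irfft2(spectrum, s=x.shape[-2:])
```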
- Self-Supervised Curricular Deep Learning for Chest X-Ray Image Classification [1.6631602844999727]
Self-Supervised Learning (SSL) pretraining outperforms models trained from scratch or pretrained on ImageNet.
Top-performing SSL-pretrained models show a higher degree of attention in the lung regions.
arXiv Detail & Related papers (2023-01-25T16:45:13Z)
- Generative Transfer Learning: Covid-19 Classification with a few Chest X-ray Images [0.0]
Deep learning models can expedite interpretation and alleviate the work of human experts.
Deep Transfer Learning addresses this problem by using a pretrained model in the public domain.
We show that a simpler generative source model, pretrained on a single but related concept, can perform as effectively as existing larger pretrained models.
arXiv Detail & Related papers (2022-08-10T12:37:52Z)
- Multi-task UNet: Jointly Boosting Saliency Prediction and Disease Classification on Chest X-ray Images [3.8637285238278434]
This paper describes a novel deep learning model for visual saliency prediction on chest X-ray (CXR) images.
To cope with data deficiency, we exploit multi-task learning and tackle disease classification on CXR simultaneously.
Experiments show that our model, trained with the new learning scheme, can outperform existing methods dedicated to either saliency prediction or image classification.
arXiv Detail & Related papers (2022-02-15T01:12:42Z)
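The joint objective implied by this entry can be sketched as a weighted sum of a saliency loss and a classification loss over shared features; the specific losses and weighting below are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn as nn

class MultiTaskLoss(nn.Module):
    """Weighted sum of a saliency-map loss and a classification loss."""
    def __init__(self, alpha=0.5):
        super().__init__()
        self.alpha = alpha
        # KLDivLoss expects log-probabilities as the prediction.
        self.saliency_loss = nn.KLDivLoss(reduction="batchmean")
        self.cls_loss = nn.CrossEntropyLoss()

    def forward(self, pred_log_sal, true_sal, pred_logits, true_labels):
        return (self.alpha * self.saliency_loss(pred_log_sal, true_sal)
                + (1 - self.alpha) * self.cls_loss(pred_logits, true_labels))
```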
- Performance or Trust? Why Not Both. Deep AUC Maximization with Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
arXiv Detail & Related papers (2021-12-14T21:16:52Z)
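A common pairwise surrogate for AUC maximization, shown for illustration; the paper integrates a specific deep-AUC surrogate loss with self-supervised learning, which this simplified version does not reproduce.

```python
import torch

def auc_surrogate(scores, labels, margin=1.0):
    """Squared hinge over every positive-negative score pair: the loss is
    zero only when each positive outscores each negative by the margin."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos.unsqueeze(1) - neg.unsqueeze(0)  # (P, N) pairwise gaps
    return torch.clamp(margin - diffs, min=0).pow(2).mean()
```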
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
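The K-nearest neighbor smoothing step can be sketched as averaging a test prediction with those of its nearest neighbors in feature space; K and the distance metric below are illustrative choices.

```python
import torch

def knn_smooth(query_feat, query_pred, bank_feats, bank_preds, k=5):
    """Average a query's prediction with those of its k nearest
    neighbors (Euclidean distance in feature space)."""
    dists = torch.cdist(query_feat.unsqueeze(0), bank_feats)  # (1, N)
    idx = dists.squeeze(0).topk(k, largest=False).indices     # k closest
    return (query_pred + bank_preds[idx].sum(dim=0)) / (k + 1)
```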
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)