A Real Use Case of Semi-Supervised Learning for Mammogram Classification
in a Local Clinic of Costa Rica
- URL: http://arxiv.org/abs/2107.11696v1
- Date: Sat, 24 Jul 2021 22:26:50 GMT
- Title: A Real Use Case of Semi-Supervised Learning for Mammogram Classification
in a Local Clinic of Costa Rica
- Authors: Saul Calderon-Ramirez, Diego Murillo-Hernandez, Kevin Rojas-Salazar,
David Elizondo, Shengxiang Yang, Miguel Molina-Cabello
- Abstract summary: Training a deep learning model requires a considerable amount of labeled images.
A number of publicly available datasets have been built with data from different hospitals and clinics.
The use of the semi-supervised deep learning approach known as MixMatch to leverage unlabeled data is proposed and evaluated.
- Score: 0.5541644538483946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The implementation of deep learning based computer aided diagnosis systems
for the classification of mammogram images can help in improving the accuracy,
reliability, and cost of diagnosing patients. However, training a deep learning
model requires a considerable amount of labeled images, which can be expensive
to obtain as time and effort from clinical practitioners is required. A number
of publicly available datasets have been built with data from different
hospitals and clinics. However, using models trained on these datasets for
later work on images sampled from a different hospital or clinic might result
in lower performance. This is due to the distribution mismatch of the datasets,
which include different patient populations and image acquisition protocols.
The scarcity of labeled data also poses a challenge for the application of
transfer learning with models trained on these source datasets. In this
work, a real world scenario is evaluated where a novel target dataset sampled
from a private Costa Rican clinic is used, with few labels and heavily
imbalanced data. The use of two popular and publicly available datasets
(INbreast and CBIS-DDSM) as source data, to train and test the models on the
novel target dataset, is evaluated. The use of the semi-supervised deep
learning approach known as MixMatch, to leverage unlabeled data from the
target dataset, is proposed and evaluated. In the tests, model performance
is extensively measured, using different metrics to assess a classifier
under heavy data imbalance conditions.
It is shown that the use of semi-supervised deep learning combined with
fine-tuning can provide a meaningful advantage when using scarce labeled
observations. We make available the novel dataset for the benefit of the
community.
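
The core of the proposed approach combines fine-tuning with MixMatch, which guesses labels for the unlabeled target-clinic images and regularizes training with MixUp. The sketch below illustrates a single MixMatch training step in PyTorch; it is an illustration only, with typical MixMatch hyperparameters (T=0.5, alpha=0.75, lambda_u=75) rather than the configuration used in this work, and the backbone model and augmentation pipeline are assumed to be supplied by the caller.

```python
# Minimal MixMatch-style training step (Berthelot et al., 2019), sketched for
# illustration; hyperparameters and the backbone are assumptions, not the
# paper's exact setup.
import torch
import torch.nn.functional as F

def sharpen(p, T=0.5):
    # Temperature sharpening of the averaged pseudo-label distribution.
    p = p ** (1.0 / T)
    return p / p.sum(dim=1, keepdim=True)

def mixup(x1, y1, x2, y2, alpha=0.75):
    # MixUp with lam >= 0.5 so each mixed example stays closer to its own batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1.0 - lam)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def mixmatch_step(model, x_l, y_l, x_u_views, num_classes=2, T=0.5, lambda_u=75.0):
    # x_l: labeled image batch [B, C, H, W]; y_l: integer labels [B]
    # x_u_views: list of K differently augmented views of the unlabeled batch
    # 1) Guess labels: average predictions over the K views, then sharpen.
    with torch.no_grad():
        avg = torch.stack([F.softmax(model(v), dim=1) for v in x_u_views]).mean(dim=0)
        q_u = sharpen(avg, T)
    y_l_onehot = F.one_hot(y_l, num_classes).float()

    # 2) MixUp labeled and unlabeled examples against a shuffled union of both sets.
    x_u = torch.cat(x_u_views)
    q_u = q_u.repeat(len(x_u_views), 1)      # guessed label for every augmented view
    all_x = torch.cat([x_l, x_u])
    all_y = torch.cat([y_l_onehot, q_u])
    perm = torch.randperm(all_x.size(0))
    mixed_x, mixed_y = mixup(all_x, all_y, all_x[perm], all_y[perm])

    # 3) Cross-entropy on the labeled part, L2 consistency on the unlabeled part.
    logits = model(mixed_x)
    n_l = x_l.size(0)
    loss_sup = -(mixed_y[:n_l] * F.log_softmax(logits[:n_l], dim=1)).sum(dim=1).mean()
    loss_unsup = F.mse_loss(F.softmax(logits[n_l:], dim=1), mixed_y[n_l:])
    return loss_sup + lambda_u * loss_unsup
```

Any torch.nn.Module classifier with a num_classes-way output can be plugged in; in the fine-tuning setting described above, that model would first be pre-trained on a source dataset such as INbreast or CBIS-DDSM.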
Related papers
- Refining Tuberculosis Detection in CXR Imaging: Addressing Bias in Deep Neural Networks via Interpretability [1.9936075659851882]
We argue that the reliability of deep learning models is limited, even if they can be shown to obtain perfect classification accuracy on the test data.
We show that pre-training a deep neural network on a large-scale proxy task, as well as using mixed objective optimization network (MOON), can improve the alignment of decision foundations between models and experts.
arXiv Detail & Related papers (2024-07-19T06:41:31Z)
- Exploring Data Redundancy in Real-world Image Classification through Data Selection [20.389636181891515]
Deep learning models often require large amounts of data for training, leading to increased costs.
We present two data valuation metrics based on Synaptic Intelligence and gradient norms, respectively, to study redundancy in real-world image data.
Online and offline data selection algorithms are then proposed via clustering and grouping based on the examined data values.
arXiv Detail & Related papers (2023-06-25T03:31:05Z)
- MedFMC: A Real-world Dataset and Benchmark For Foundation Model Adaptation in Medical Image Classification [41.16626194300303]
Foundation models, often pre-trained with large-scale data, have achieved paramount success in jump-starting various vision and language applications.
Recent advances further enable adapting foundation models in downstream tasks efficiently using only a few training samples.
Yet, the application of such learning paradigms in medical image analysis remains scarce due to the shortage of publicly accessible data and benchmarks.
arXiv Detail & Related papers (2023-06-16T01:46:07Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Unsupervised pre-training of graph transformers on patient population graphs [48.02011627390706]
We propose a graph-transformer-based network to handle heterogeneous clinical data.
We show the benefit of our pre-training method in a self-supervised and a transfer learning setting.
arXiv Detail & Related papers (2022-07-21T16:59:09Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Deep learning-based COVID-19 pneumonia classification using chest CT images: model generalizability [54.86482395312936]
Deep learning (DL) classification models were trained to identify COVID-19-positive patients on 3D computed tomography (CT) datasets from different countries.
We trained nine identical DL-based classification models by using combinations of the datasets with a 72% train, 8% validation, and 20% test data split.
The models trained on multiple datasets and evaluated on a test set from one of the datasets used for training performed better.
arXiv Detail & Related papers (2021-02-18T21:14:52Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta-learning techniques to develop a new model, which can extract common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, Prototypical Network, a simple yet effective meta-learning method for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- On the Composition and Limitations of Publicly Available COVID-19 X-Ray Imaging Datasets [0.0]
Data scarcity, mismatch between training and target population, group imbalance, and lack of documentation are important sources of bias.
This paper presents an overview of the currently public available COVID-19 chest X-ray datasets.
arXiv Detail & Related papers (2020-08-26T14:16:01Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% lesser labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.