One-Vote Veto: Semi-Supervised Learning for Low-Shot Glaucoma Diagnosis
- URL: http://arxiv.org/abs/2012.04841v4
- Date: Mon, 21 Aug 2023 15:23:10 GMT
- Title: One-Vote Veto: Semi-Supervised Learning for Low-Shot Glaucoma Diagnosis
- Authors: Rui Fan, Christopher Bowd, Nicole Brye, Mark Christopher, Robert N.
Weinreb, David Kriegman, Linda M. Zangwill
- Abstract summary: Convolutional neural networks (CNNs) are a promising technique for automated glaucoma diagnosis from images of the fundus.
CNNs typically require a large amount of well-labeled data for training, which may not be available in many biomedical image classification applications.
This article makes two contributions to address this issue: (1) It extends the conventional Siamese network and introduces a training method for low-shot learning when labeled data are limited and imbalanced, and (2) it introduces a novel semi-supervised learning strategy that uses additional unlabeled training data to achieve greater accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) are a promising technique for automated
glaucoma diagnosis from images of the fundus, and these images are routinely
acquired as part of an ophthalmic exam. Nevertheless, CNNs typically require a
large amount of well-labeled data for training, which may not be available in
many biomedical image classification applications, especially when diseases are
rare and where labeling by experts is costly. This article makes two
contributions to address this issue: (1) It extends the conventional Siamese
network and introduces a training method for low-shot learning when labeled
data are limited and imbalanced, and (2) it introduces a novel semi-supervised
learning strategy that uses additional unlabeled training data to achieve
greater accuracy. Our proposed multi-task Siamese network (MTSN) can employ any
backbone CNN, and we demonstrate with four backbone CNNs that its accuracy with
limited training data approaches the accuracy of backbone CNNs trained with a
dataset that is 50 times larger. We also introduce One-Vote Veto (OVV)
self-training, a semi-supervised learning strategy that is designed
specifically for MTSNs. By taking both self-predictions and contrastive
predictions of the unlabeled training data into account, OVV self-training
provides additional pseudo labels for fine-tuning a pre-trained MTSN. Using a
large (imbalanced) dataset with 66,715 fundus photographs acquired over 15
years, extensive experimental results demonstrate the effectiveness of low-shot
learning with MTSN and semi-supervised learning with OVV self-training. Three
additional, smaller clinical datasets of fundus images acquired under different
conditions (cameras, instruments, locations, populations) are used to
demonstrate the generalizability of the proposed methods.
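The abstract's description of OVV self-training — accepting a pseudo label only when the network's self-prediction is confident and no contrastive prediction disagrees — can be sketched as below. This is a hypothetical simplification inferred from the name and abstract, not the paper's implementation; the `threshold` value, the array shapes, and the exact veto rule are all assumptions.

```python
import numpy as np

def ovv_pseudo_labels(self_probs, contrastive_probs, threshold=0.9):
    """Sketch of One-Vote Veto pseudo-label selection (simplified).

    self_probs: (N, C) softmax predictions of the network on N unlabeled images.
    contrastive_probs: (N, K, C) predictions from comparing each unlabeled
        image against K labeled reference images.
    Returns a list of (image_index, class_label) pairs accepted as pseudo labels.
    """
    accepted = []
    for i in range(len(self_probs)):
        label = int(np.argmax(self_probs[i]))
        if self_probs[i, label] < threshold:
            continue  # self-prediction is not confident enough
        # One-vote veto: a single confident contrastive prediction that
        # disagrees with the self-prediction rejects the pseudo label.
        vetoed = any(
            int(np.argmax(p)) != label and p.max() >= threshold
            for p in contrastive_probs[i]
        )
        if not vetoed:
            accepted.append((i, label))
    return accepted
```

In this sketch, image 0 below is accepted (confident and unanimously supported), image 1 is dropped for low self-confidence, and image 2 is vetoed by a confident disagreeing contrastive prediction.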
Related papers
- A BERT-Style Self-Supervised Learning CNN for Disease Identification from Retinal Images [5.0086124858415335]
In medical imaging research, the acquisition of high-quality labels is both expensive and difficult.
In this study, we employ nn-MobileNet, a lightweight CNN framework, to implement a BERT-style self-supervised learning approach.
We validate the pre-trained model on Alzheimer's disease (AD), Parkinson's disease (PD), and the identification of various retinal diseases.
arXiv Detail & Related papers (2025-04-25T03:38:55Z)
- Self-Supervised Learning for Pre-training Capsule Networks: Overcoming Medical Imaging Dataset Challenges [2.9248916859490173]
This study investigates self-supervised learning methods for pre-training capsule networks in polyp diagnostics for colon cancer.
We used the PICCOLO dataset, comprising 3,433 samples, which exemplifies typical challenges in medical datasets.
Our findings suggest contrastive learning and in-painting techniques are suitable auxiliary tasks for self-supervised learning in the medical domain.
arXiv Detail & Related papers (2025-02-07T08:32:26Z)
- Applications of Sequential Learning for Medical Image Classification [0.13191970195165517]
We develop a neural network training framework for continual training of small amounts of medical imaging data.
We address problems that impede sequential learning such as overfitting, catastrophic forgetting, and concept drift.
arXiv Detail & Related papers (2023-09-26T00:46:25Z)
- Self-Supervised Pre-Training with Contrastive and Masked Autoencoder Methods for Dealing with Small Datasets in Deep Learning for Medical Imaging [8.34398674359296]
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis.
Training such deep learning models requires large and accurate datasets, with annotations for all training samples.
To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning.
arXiv Detail & Related papers (2023-08-12T11:31:01Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning Framework in Classification of Medical Images on Limited Data: A COVID-19 Case Study [77.34726150561087]
The COVID-19 pandemic spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and verified in analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- One Representative-Shot Learning Using a Population-Driven Template with Application to Brain Connectivity Classification and Evolution Prediction [0.0]
Graph neural networks (GNNs) have been introduced to the field of network neuroscience.
We take a very different approach to training GNNs, aiming to learn from a single sample while achieving the best possible performance.
We present the first one-shot paradigm where a GNN is trained on a single population-driven template.
arXiv Detail & Related papers (2021-10-06T08:36:00Z)
- About Explicit Variance Minimization: Training Neural Networks for Medical Imaging With Limited Data Annotations [2.3204178451683264]
The Variance Aware Training (VAT) method exploits this property by introducing the variance error into the model loss function.
We validate VAT on three medical imaging datasets from diverse domains and various learning objectives.
arXiv Detail & Related papers (2021-05-28T21:34:04Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)
- Self-Loop Uncertainty: A Novel Pseudo-Label for Semi-Supervised Medical Image Segmentation [30.644905857223474]
We propose a semi-supervised approach to train neural networks with limited labeled data and a large quantity of unlabeled images for medical image segmentation.
A novel pseudo-label (namely self-loop uncertainty) is adopted as the ground-truth for the unlabeled images to augment the training set and boost the segmentation accuracy.
arXiv Detail & Related papers (2020-07-20T02:52:07Z)
- 3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.