Self-supervised Mean Teacher for Semi-supervised Chest X-ray
Classification
- URL: http://arxiv.org/abs/2103.03629v1
- Date: Fri, 5 Mar 2021 12:25:36 GMT
- Title: Self-supervised Mean Teacher for Semi-supervised Chest X-ray
Classification
- Authors: Fengbei Liu, Yu Tian, Filipe R. Cordeiro, Vasileios Belagiannis, Ian
Reid, Gustavo Carneiro
- Abstract summary: We propose the Self-supervised Mean Teacher for Semi-supervised learning (S$^2$MTS$^2$).
It combines self-supervised mean-teacher pre-training with semi-supervised fine-tuning.
We show that it outperforms the previous SOTA semi-supervised learning methods by a large margin.
- Score: 37.79118840129632
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The training of deep learning models generally requires a large amount of
annotated data for effective convergence and generalisation. However, obtaining
high-quality annotations is a laborious and expensive process because the
labelling task requires expert radiologists. The study of semi-supervised
learning in medical image analysis is then of crucial importance given that it
is much less expensive to obtain unlabelled images than to acquire images
labelled by expert radiologists. Essentially, semi-supervised methods leverage
large sets of unlabelled data to enable better training convergence and
generalisation than using only the small set of labelled images. In this
paper, we propose the Self-supervised Mean Teacher for Semi-supervised
(S$^2$MTS$^2$) learning that combines self-supervised mean-teacher pre-training
with semi-supervised fine-tuning. The main innovation of S$^2$MTS$^2$ is the
self-supervised mean-teacher pre-training based on the joint contrastive
learning, which uses an infinite number of pairs of positive query and key
features to improve the mean-teacher representation. The model is then
fine-tuned using the exponential moving average teacher framework trained with
semi-supervised learning. We validate S$^2$MTS$^2$ on the thorax disease
multi-label classification problem from the Chest X-ray14 dataset, where we
show that it outperforms the previous SOTA semi-supervised learning methods by
a large margin.
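The two mechanisms the abstract names, an exponential-moving-average (mean) teacher and contrastive learning between query and key features, can be illustrated with a short sketch. This follows the standard mean-teacher and MoCo-style InfoNCE formulations rather than the paper's exact code; the momentum and temperature values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Mean-teacher update: teacher <- m * teacher + (1 - m) * student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def contrastive_loss(query, key, queue, temperature=0.07):
    """InfoNCE loss: the (query, key) pair is the positive; the queue is
    assumed to hold previously stored, already-normalised teacher keys
    as negatives."""
    query = F.normalize(query, dim=1)                 # (B, D) student features
    key = F.normalize(key, dim=1)                     # (B, D) teacher features
    l_pos = (query * key).sum(dim=1, keepdim=True)    # (B, 1) positive logits
    l_neg = query @ queue.t()                         # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(query.size(0), dtype=torch.long)  # positive at index 0
    return F.cross_entropy(logits, labels)
```

In pre-training, `ema_update` would run after each optimiser step and the queue would be refreshed with the teacher's keys; fine-tuning keeps the same EMA teacher but swaps the contrastive objective for the semi-supervised classification loss.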
Related papers
- One-bit Supervision for Image Classification: Problem, Solution, and
Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit, semi-supervised supervision.
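The "one bit" is the annotator's yes/no answer to a guessed label; a "no" turns the guess into a confirmed negative. A minimal sketch of negative label suppression, assuming the common implementation of masking confirmed-wrong classes out of the softmax (the paper's exact mechanism may differ):

```python
import torch
import torch.nn.functional as F

def suppress_negative_labels(logits, negative_mask):
    """Mask out classes an annotator has answered 'no' to.

    logits: (B, C) raw scores; negative_mask: (B, C) bool, True where a
    class is a confirmed negative for that sample."""
    # A -inf logit drives the class probability to exactly zero.
    return logits.masked_fill(negative_mask, float("-inf"))

# Toy example: the annotator said sample 0 is NOT class 2.
logits = torch.randn(1, 4)
neg = torch.zeros(1, 4, dtype=torch.bool)
neg[0, 2] = True
probs = F.softmax(suppress_negative_labels(logits, neg), dim=1)
assert probs[0, 2] == 0.0
```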
arXiv Detail & Related papers (2023-11-26T07:39:00Z)
- An End-to-End Framework For Universal Lesion Detection With Missing Annotations [24.902835211573628]
We present a novel end-to-end framework for mining unlabeled lesions while simultaneously training the detector.
Our framework follows the teacher-student paradigm. High-confidence predictions are combined with partially-labeled ground truth for training the student model.
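The mining step reduces to a confidence filter on the teacher's predictions, merged with whatever ground truth exists. A schematic sketch; the threshold value and the simple concatenation are assumptions, and the paper's merging logic for detection is more involved:

```python
import torch

def mine_high_confidence(boxes, scores, threshold=0.9):
    """Keep only teacher detections above a confidence threshold.
    The 0.9 threshold is an illustrative assumption."""
    keep = scores >= threshold
    return boxes[keep], scores[keep]

def student_targets(partial_gt_boxes, teacher_boxes, teacher_scores):
    """Combine the partially-labeled ground truth with mined
    high-confidence boxes to form the student's training targets."""
    mined, _ = mine_high_confidence(teacher_boxes, teacher_scores)
    return torch.cat([partial_gt_boxes, mined], dim=0)
```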
arXiv Detail & Related papers (2023-03-27T09:16:10Z)
- Consistency-Based Semi-supervised Evidential Active Learning for Diagnostic Radiograph Classification [2.3545156585418328]
We introduce a novel Consistency-based Semi-supervised Evidential Active Learning framework (CSEAL).
We leverage predictive uncertainty based on theories of evidence and subjective logic to develop an end-to-end integrated approach.
Our approach can substantially improve accuracy on rarer abnormalities with fewer labelled samples.
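In subjective logic, per-class evidence parameterises a Dirichlet distribution whose total strength yields an uncertainty (vacuity) mass. A minimal sketch of that computation; using softplus as the evidence function is a common choice and an assumption here, not necessarily CSEAL's:

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits):
    """Dirichlet-based uncertainty in the subjective-logic formulation:
    evidence e = softplus(logits), alpha = e + 1, and vacuity u = K / S
    with S = sum(alpha). u approaches 1 when the model has gathered
    little evidence for any class."""
    evidence = F.softplus(logits)               # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet concentration
    strength = alpha.sum(dim=1, keepdim=True)   # S
    belief = evidence / strength                # per-class belief mass
    uncertainty = logits.size(1) / strength     # vacuity u = K / S
    return belief, uncertainty
```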
arXiv Detail & Related papers (2022-09-05T09:28:31Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Semi-supervised classification of radiology images with NoTeacher: A Teacher that is not Mean [10.880392855729552]
We introduce NoTeacher, a novel consistency-based semi-supervised learning framework.
NoTeacher employs two independent networks, eliminating the need for a teacher network.
We show that NoTeacher achieves over 90-95% of the fully supervised AUROC with less than 5-15% of the labeling budget.
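The key design point is symmetry: two independently initialised networks regularise each other, so no EMA teacher is needed. A simplified sketch of such a loss; NoTeacher actually couples the networks through a probabilistic graphical model, so the MSE consistency term here is a stand-in:

```python
import torch
import torch.nn.functional as F

def two_network_consistency_loss(net1, net2, x_labeled, y, x_unlabeled,
                                 weight=1.0):
    """Supervised loss on labeled data for both networks, plus a
    consistency term tying their predictions on unlabeled data."""
    sup = (F.cross_entropy(net1(x_labeled), y)
           + F.cross_entropy(net2(x_labeled), y))
    p1 = F.softmax(net1(x_unlabeled), dim=1)
    p2 = F.softmax(net2(x_unlabeled), dim=1)
    consistency = F.mse_loss(p1, p2)  # agreement between the two networks
    return sup + weight * consistency
```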
arXiv Detail & Related papers (2021-08-10T03:08:35Z)
- Self-supervised driven consistency training for annotation efficient histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
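The supervisory signal here is free: the slide pyramid already records which resolution each patch came from. A hedged sketch of a pretext head built on that signal; the encoder, feature size, and the single-patch simplification are assumptions, since the paper's actual task predicts the ordering of a resolution sequence:

```python
import torch
import torch.nn as nn

class ResolutionPretextHead(nn.Module):
    """Illustrative pretext head: classify which magnification level a
    histology patch was sampled at, so the encoder must learn
    multi-resolution contextual cues. The paper's task orders
    resolution *sequences*; this single-patch variant is simplified."""
    def __init__(self, encoder, feat_dim, num_levels=3):
        super().__init__()
        self.encoder = encoder                    # any backbone -> (B, feat_dim)
        self.classify = nn.Linear(feat_dim, num_levels)

    def forward(self, patch):
        # Target labels come for free from the slide's resolution pyramid.
        return self.classify(self.encoder(patch))
```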
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
- Neural Semi-supervised Learning for Text Classification Under Large-Scale Pretraining [51.19885385587916]
We conduct studies on semi-supervised learning in the task of text classification under the context of large-scale LM pretraining.
Our work marks an initial step in understanding the behavior of semi-supervised learning models under the context of large-scale pretraining.
arXiv Detail & Related papers (2020-11-17T13:39:05Z)
- Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns the knowledge of unlabeled target data and labeled source data by two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
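The student receives two signals at once: one from an intra-domain teacher on unlabeled target data and one from an inter-domain teacher carrying source (cross-modality) knowledge. A simplified sketch; the paper targets segmentation, so the loss shapes and weighting here are illustrative:

```python
import torch
import torch.nn.functional as F

def dual_teacher_loss(student, intra_teacher, inter_teacher,
                      x_target_unlabeled, x_source, y_source, weight=1.0):
    """Student learns from labeled source data directly, plus
    consistency with an intra-domain teacher (unlabeled target data)
    and an inter-domain teacher (source/cross-modality knowledge)."""
    sup = F.cross_entropy(student(x_source), y_source)
    with torch.no_grad():  # teachers provide targets, not gradients
        intra = F.softmax(intra_teacher(x_target_unlabeled), dim=1)
        inter = F.softmax(inter_teacher(x_source), dim=1)
    cons = (F.mse_loss(F.softmax(student(x_target_unlabeled), dim=1), intra)
            + F.mse_loss(F.softmax(student(x_source), dim=1), inter))
    return sup + weight * cons
```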
arXiv Detail & Related papers (2020-07-13T10:00:44Z)
- ATSO: Asynchronous Teacher-Student Optimization for Semi-Supervised Medical Image Segmentation [99.90263375737362]
We propose ATSO, an asynchronous version of teacher-student optimization.
ATSO partitions the unlabeled data into two subsets and alternately uses one subset to fine-tune the model and updates the label on the other subset.
We evaluate ATSO on two popular medical image segmentation datasets and show its superior performance in various semi-supervised settings.
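The asynchrony is the central idea: a subset's labels are only refreshed while the model is training on the other subset, so the model never immediately consumes pseudo labels it just produced. A structural sketch with placeholder `fine_tune` / `pseudo_label` functions, which are assumptions standing in for a full training loop and an inference pass:

```python
def atso(model, fine_tune, pseudo_label, subset_a, subset_b, rounds=4):
    """Alternate: train on one unlabeled subset's current pseudo labels
    while refreshing the other subset's labels, then swap roles."""
    labels_a = pseudo_label(model, subset_a)          # initial labels for A
    for _ in range(rounds):
        model = fine_tune(model, subset_a, labels_a)  # train on A
        labels_b = pseudo_label(model, subset_b)      # relabel B with new model
        subset_a, subset_b = subset_b, subset_a       # swap roles for next round
        labels_a = labels_b
    return model
```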
arXiv Detail & Related papers (2020-06-24T04:05:12Z)
- Partly Supervised Multitask Learning [19.64371980996412]
Experimental results on chest and spine X-ray datasets suggest that our S$^4$MTL model significantly outperforms semi-supervised single-task, semi/fully-supervised multitask, and fully-supervised single-task models.
We hypothesize that our proposed model can be effective in tackling limited annotation problems for joint training, not only in medical imaging domains, but also for general-purpose vision tasks.
arXiv Detail & Related papers (2020-05-05T22:42:12Z)