AI can evolve without labels: self-evolving vision transformer for chest
X-ray diagnosis through knowledge distillation
- URL: http://arxiv.org/abs/2202.06431v1
- Date: Sun, 13 Feb 2022 22:40:46 GMT
- Title: AI can evolve without labels: self-evolving vision transformer for chest
X-ray diagnosis through knowledge distillation
- Authors: Sangjoon Park, Gwanghyun Kim, Yujin Oh, Joon Beom Seo, Sang Min Lee,
Jin Hwan Kim, Sungjun Moon, Jae-Kwang Lim, Chang Min Park, and Jong Chul Ye
- Abstract summary: We present a novel deep learning framework that uses knowledge distillation through self-supervised learning and self-training.
Experimental results show that the proposed framework remains robust in real-world environments.
The proposed framework has great potential for medical imaging, where large amounts of data accumulate every year.
- Score: 30.075714642990768
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Although deep learning-based computer-aided diagnosis systems have recently
achieved expert-level performance, developing a robust deep learning model
requires large, high-quality data with manual annotation, which is expensive to
obtain. As a result, the chest X-rays collected annually in hospitals often
cannot be used because they lack expert annotations, especially in deprived
areas. To address this, here we present a novel deep learning framework that
uses knowledge distillation through self-supervised learning and self-training,
which shows that the performance of the original model trained with a small
number of labels can be gradually improved with more unlabeled data.
Experimental results show that the proposed framework remains robust in
real-world environments and has general applicability to several diagnostic
tasks such as tuberculosis, pneumothorax, and COVID-19. Notably, we demonstrate
that our model performs even better than models trained with the same amount of
labeled data. The proposed framework has great potential for medical imaging,
where large amounts of data accumulate every year but ground-truth annotations
are expensive to obtain.
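The core loop the abstract describes — train a teacher on the few available labels, let it pseudo-label the unlabeled pool, then distill a student from labels and pseudo-labels together — can be sketched with a toy stand-in. The sketch below is illustrative only and is not the authors' code: a logistic regression on synthetic 2-D features takes the place of the vision transformer, and every name, dataset, and hyperparameter is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, epochs=300, lr=0.5):
    # Gradient descent on binary cross-entropy; y may contain soft labels in [0, 1],
    # which is what lets the student learn from the teacher's pseudo-labels.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

# Toy 2-D features standing in for chest X-ray representations: two Gaussian blobs.
X_lab = np.vstack([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y_lab = np.array([0.0] * 10 + [1.0] * 10)
X_unlab = np.vstack([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

# Step 1 -- teacher: trained on the small labeled set only.
w_t, b_t = train_logreg(X_lab, y_lab)

# Step 2 -- self-training: the teacher produces soft pseudo-labels on unlabeled data.
pseudo = sigmoid(X_unlab @ w_t + b_t)

# Step 3 -- distillation: the student learns from labels and pseudo-labels together.
X_all = np.vstack([X_lab, X_unlab])
y_all = np.concatenate([y_lab, pseudo])
w_s, b_s = train_logreg(X_all, y_all)
```

In the paper's setting, the teacher and student would be vision transformers initialized with self-supervised pre-training, and the loop would repeat as new unlabeled chest X-rays accumulate each year.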
Related papers
- Automated Labeling of German Chest X-Ray Radiology Reports using Deep
Learning [50.591267188664666]
We propose a deep learning-based CheXpert label prediction model, pre-trained on reports labeled by a rule-based German CheXpert model.
Our results demonstrate the effectiveness of our approach, which significantly outperformed the rule-based model on all three tasks.
arXiv Detail & Related papers (2023-06-09T16:08:35Z)
- Deep Reinforcement Learning Framework for Thoracic Diseases Classification via Prior Knowledge Guidance [49.87607548975686]
The scarcity of labeled data for related diseases poses a huge challenge to an accurate diagnosis.
We propose a novel deep reinforcement learning framework, which introduces prior knowledge to direct the learning of diagnostic agents.
We demonstrate our approach's performance on the well-known NIH ChestX-ray14 and CheXpert datasets.
arXiv Detail & Related papers (2023-06-02T01:46:31Z)
- Time-based Self-supervised Learning for Wireless Capsule Endoscopy [1.3514953384460016]
This work proposes using self-supervised learning for wireless endoscopy videos by introducing a custom-tailored method.
We show that the inherent temporal structure inferred by our method improves the detection rate in several domain-specific applications, even under severe class imbalance.
arXiv Detail & Related papers (2022-04-20T20:31:06Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Towards Trainable Saliency Maps in Medical Imaging [4.438919530397659]
We show how introducing a model design element agnostic to both architecture complexity and model task gives an inherently self-explanatory model.
We compare our results with state-of-the-art non-trainable saliency maps on the RSNA Pneumonia dataset and demonstrate much higher localization efficacy with our technique.
arXiv Detail & Related papers (2020-11-15T09:01:55Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from its training distribution starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.