Severity Quantification and Lesion Localization of COVID-19 on CXR using
Vision Transformer
- URL: http://arxiv.org/abs/2103.07062v1
- Date: Fri, 12 Mar 2021 03:17:19 GMT
- Title: Severity Quantification and Lesion Localization of COVID-19 on CXR using
Vision Transformer
- Authors: Gwanghyun Kim, Sangjoon Park, Yujin Oh, Joon Beom Seo, Sang Min Lee,
Jin Hwan Kim, Sungjun Moon, Jae-Kwang Lim, Jong Chul Ye
- Abstract summary: Amid the global COVID-19 pandemic, building an automated framework that quantifies the severity of COVID-19 has become increasingly important.
We propose a novel Vision Transformer tailored for both severity quantification and clinically applicable localization of COVID-19-related lesions.
Our model is trained in a weakly supervised manner to generate full probability maps from weak array-based labels.
- Score: 25.144248675578286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Amid the global COVID-19 pandemic, building an automated framework
that quantifies the severity of COVID-19 and localizes the relevant lesions on
chest X-ray images has become increasingly important. Although pixel-level
severity labels, e.g. lesion segmentation masks, are the ideal supervision for
building a robust model, collecting enough data with such labels is difficult
because the annotation is time- and labor-intensive. Instead, array-based
severity labeling, which assigns integer scores to six subdivisions of the
lungs, is an alternative that enables quick labeling. Several groups have
proposed deep learning algorithms that quantify the severity of COVID-19 from
such array-based labels and localize the lesions with explainability maps. To
further improve accuracy and interpretability, we propose a novel Vision
Transformer tailored both for severity quantification and for clinically
applicable localization of COVID-19-related lesions. Our model is trained in a
weakly supervised manner to generate full probability maps from the weak
array-based labels. Furthermore, a novel progressive self-training method
enables us to build a model with a small labeled dataset. Quantitative and
qualitative analyses on an external test set demonstrate that our method
performs comparably to radiologists on both tasks and remains stable in a
real-world application.
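The array-based supervision described above assigns an integer severity score
to each of six lung subdivisions, and the model learns to produce a dense
probability map from only those zone-level scores. The sketch below is a
minimal illustration (not the authors' method) of how such weak supervision can
be wired up: a predicted lesion probability map is pooled into six zone scores
and regressed against the integer labels. The grid-based zone definition, the
0-3 score range, and the MSE objective are assumptions made for illustration.

# Minimal sketch, assuming a (B, 1, H, W) lesion probability map and 6-zone
# integer labels in [0, 3]. Zone boundaries are a simple 3x2 grid over the
# image; a real pipeline would define the six zones from lung segmentation.
import torch
import torch.nn.functional as F


def zone_scores_from_prob_map(prob_map: torch.Tensor, max_score: int = 3) -> torch.Tensor:
    """Pool a (B, 1, H, W) probability map into (B, 6) zone severity scores."""
    b = prob_map.shape[0]
    # Average lesion probability inside each cell of a 3 (rows) x 2 (columns) grid.
    pooled = F.adaptive_avg_pool2d(prob_map, (3, 2))   # (B, 1, 3, 2)
    zone_mean = pooled.view(b, 6)                      # (B, 6)
    # Map the mean probability onto the integer score range [0, max_score].
    return zone_mean * max_score


def weak_array_loss(prob_map: torch.Tensor, array_labels: torch.Tensor) -> torch.Tensor:
    """Weakly supervised regression loss between predicted and labeled zone scores."""
    pred = zone_scores_from_prob_map(prob_map)
    return F.mse_loss(pred, array_labels.float())


if __name__ == "__main__":
    # Toy example: a batch of 2 probability maps and their 6-zone integer labels.
    prob_map = torch.rand(2, 1, 256, 256)
    labels = torch.randint(0, 4, (2, 6))
    print(weak_array_loss(prob_map, labels).item())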
Related papers
- Domain Adaptation Using Pseudo Labels for COVID-19 Detection [19.844531606142496]
We present a two-stage framework that leverages pseudo labels for domain adaptation to enhance the detection of COVID-19 from CT scans.
By utilizing annotated data from one domain and non-annotated data from another, the model overcomes the challenge of data scarcity and variability.
Experimental results on the COV19-CT-DB database showcase the model's potential to achieve high diagnostic precision.
arXiv Detail & Related papers (2024-03-18T06:07:45Z)
- Transfer learning with weak labels from radiology reports: application to glioma change detection [0.2010294990327175]
We propose a combined use of weak labels (imprecise, but fast-to-create annotations) and Transfer Learning (TL).
Specifically, we explore inductive TL, where source and target domains are identical, but tasks are different due to a label shift.
We investigate the relationship between model size and TL, comparing a low-capacity VGG with a higher-capacity SEResNeXt.
arXiv Detail & Related papers (2022-10-18T09:15:27Z)
- Optimising Chest X-Rays for Image Analysis by Identifying and Removing Confounding Factors [49.005337470305584]
During the COVID-19 pandemic, the sheer volume of imaging performed in an emergency setting for COVID-19 diagnosis has resulted in a wide variability of clinical CXR acquisitions.
The variable quality of clinically-acquired CXRs within publicly available datasets could have a profound effect on algorithm performance.
We propose a simple and effective step-wise approach to pre-processing a COVID-19 chest X-ray dataset to remove undesired biases.
arXiv Detail & Related papers (2022-08-22T13:57:04Z)
- Vision Transformer using Low-level Chest X-ray Feature Corpus for COVID-19 Diagnosis and Severity Quantification [25.144248675578286]
We propose a novel Vision Transformer that utilizes low-level CXR feature corpus obtained from a backbone network.
The backbone network is first trained with large public datasets to detect common abnormal findings.
Then, the embedded features from the backbone network are used as a corpus for a Transformer model for the diagnosis and severity quantification of COVID-19 (a rough sketch of this backbone-plus-Transformer pipeline appears after this list).
arXiv Detail & Related papers (2021-04-15T04:54:48Z)
- Dual-Consistency Semi-Supervised Learning with Uncertainty Quantification for COVID-19 Lesion Segmentation from CT Images [49.1861463923357]
We propose an uncertainty-guided dual-consistency learning network (UDC-Net) for semi-supervised COVID-19 lesion segmentation from CT images.
Our proposed UDC-Net improves the fully supervised method by 6.3% in Dice and outperforms other competitive semi-supervised approaches by significant margins.
arXiv Detail & Related papers (2021-04-07T16:23:35Z)
- Towards Unbiased COVID-19 Lesion Localisation and Segmentation via Weakly Supervised Learning [66.36706284671291]
We propose a data-driven framework supervised by only image-level labels to support unbiased lesion localisation.
The framework can explicitly separate potential lesions from original images, with the help of a generative adversarial network and a lesion-specific decoder.
arXiv Detail & Related papers (2021-03-01T06:05:49Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Weakly-Supervised Cross-Domain Adaptation for Endoscopic Lesions Segmentation [79.58311369297635]
We propose a new weakly-supervised lesions transfer framework, which can explore transferable domain-invariant knowledge across different datasets.
A Wasserstein-quantified transferability framework is developed to highlight wide-range transferable contextual dependencies.
A novel self-supervised pseudo label generator is designed to equally provide confident pseudo pixel labels for both hard-to-transfer and easy-to-transfer target samples.
arXiv Detail & Related papers (2020-12-08T02:26:03Z)
- GraphXCOVID: Explainable Deep Graph Diffusion Pseudo-Labelling for Identifying COVID-19 on Chest X-rays [4.566180616886624]
We introduce a graph-based deep semi-supervised framework for classifying COVID-19 from chest X-rays.
Our framework introduces an optimisation model for graph diffusion that reinforces the natural relation between the tiny labelled set and the vast unlabelled data.
We demonstrate, through our experiments, that our model is able to outperform the current leading supervised model with a tiny fraction of the labelled examples.
arXiv Detail & Related papers (2020-09-30T15:38:24Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
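The "Vision Transformer using Low-level Chest X-ray Feature Corpus" entry above
(the companion work this paper builds on) describes a pipeline in which a
backbone network, pretrained on large public CXR datasets to detect common
abnormal findings, supplies low-level feature maps that serve as the token
corpus for a Transformer. The sketch below renders that general idea in PyTorch
under loose assumptions: the untrained ResNet-18 backbone, the 1x1 token
projection, and the head dimensions are placeholders, not the published
architecture.

# Minimal sketch, assuming a CNN backbone whose spatial feature map is
# flattened into tokens and consumed by a small Transformer encoder.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class FeatureCorpusTransformer(nn.Module):
    def __init__(self, num_classes: int = 3, embed_dim: int = 256):
        super().__init__()
        # Stand-in backbone; in the described pipeline it would be pretrained on
        # large public CXR datasets to detect common abnormal findings.
        backbone = resnet18()
        self.stem = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, H', W')
        self.proj = nn.Conv2d(512, embed_dim, kernel_size=1)        # feature "corpus" tokens
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.proj(self.stem(x))                   # (B, D, H', W')
        tokens = feats.flatten(2).transpose(1, 2)         # (B, H'*W', D)
        cls = self.cls_token.expand(x.size(0), -1, -1)    # (B, 1, D)
        encoded = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(encoded[:, 0])                   # prediction from the CLS token


if __name__ == "__main__":
    model = FeatureCorpusTransformer()
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 3])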