Explainability Guided Multi-Site COVID-19 CT Classification
- URL: http://arxiv.org/abs/2103.13677v1
- Date: Thu, 25 Mar 2021 08:56:08 GMT
- Title: Explainability Guided Multi-Site COVID-19 CT Classification
- Authors: Ameen Ali, Tal Shaharabany, Lior Wolf
- Abstract summary: We address three challenges in automating chest CT screening for COVID-19: the limited number of supervised positive cases, the lack of region-based supervision, and the variability across acquisition sites.
Compared to the current state of the art, we obtain an increase of five percent in the F1 score on a site with a relatively high number of cases, and a gap twice as large for a site with much fewer training images.
- Score: 79.4957965474334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radiologist examination of chest CT is an effective way for screening
COVID-19 cases. In this work, we overcome three challenges in the automation of
this process: (i) the limited number of supervised positive cases, (ii) the
lack of region-based supervision, and (iii) the variability across acquisition
sites. These challenges are met by incorporating a recent augmentation solution
called SnapMix, by a new patch embedding technique, and by performing a
test-time stability analysis. The three techniques are complementary and are
all based on utilizing the heatmaps produced by the Class Activation Mapping
(CAM) explainability method. Compared to the current state of the art, we
obtain an increase of five percent in the F1 score on a site with a relatively
high number of cases, and a gap twice as large for a site with much fewer
training images.
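All three techniques rely on Class Activation Mapping (CAM) heatmaps. As a minimal sketch of how such a heatmap is obtained (assuming a standard global-average-pooled classifier; the ResNet-18 backbone and binary head below are illustrative stand-ins, not the paper's code):

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative stand-in: a ResNet-18 binary COVID / non-COVID CT classifier.
# The paper's actual backbone and training pipeline are not reproduced here.
model = models.resnet18(num_classes=2)
model.eval()

@torch.no_grad()
def compute_cam(model, image, target_class):
    """Return a CAM heatmap of shape (H, W), scaled to [0, 1], for target_class."""
    # Feature maps from the last convolutional block (everything before avgpool/fc).
    feats = torch.nn.Sequential(*list(model.children())[:-2])(image)   # (1, C, h, w)
    # The final fully connected weights act as class-specific channel weights.
    fc_weights = model.fc.weight[target_class]                          # (C,)
    cam = torch.einsum("c,chw->hw", fc_weights, feats[0])               # (h, w)
    cam = F.relu(cam)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    # Upsample so the heatmap overlays the input CT slice.
    return F.interpolate(cam[None, None], size=image.shape[-2:],
                         mode="bilinear", align_corners=False)[0, 0]

heatmap = compute_cam(model, torch.randn(1, 3, 224, 224), target_class=1)
print(heatmap.shape)  # torch.Size([224, 224])
```

SnapMix uses such heatmaps to assign semantically proportional labels to mixed training patches; how the paper's patch embedding and test-time stability analysis reuse the same maps is described in the full text.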
Related papers
- Classification of Breast Cancer Histopathology Images using a Modified Supervised Contrastive Learning Method [4.303291247305105]
We improve the supervised contrastive learning method by leveraging both image-level labels and domain-specific augmentations to enhance model robustness.
We evaluate our method on the BreakHis dataset, which consists of breast cancer histopathology images.
This corresponds to 93.63% absolute accuracy, highlighting the effectiveness of our approach in leveraging properties of the data to learn a more appropriate representation space.
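For context, a minimal sketch of the standard supervised contrastive (SupCon) loss that this entry builds on is given below; the batch layout and temperature are illustrative, and the paper's modified version is not reproduced here.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    """Standard supervised contrastive loss over one batch of embeddings.

    features: (N, D) projections of augmented views.
    labels:   (N,) image-level class labels; same-class views are positives.
    """
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature                  # (N, N)
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)
    sim.masked_fill_(self_mask, float("-inf"))                   # drop self-pairs
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask # same-label pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts
    # Average only over anchors that have at least one positive in the batch.
    return loss[pos_mask.any(dim=1)].mean()
```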
arXiv Detail & Related papers (2024-05-06T17:06:11Z)
- Bag of Tricks for Long-Tailed Multi-Label Classification on Chest X-Rays [40.11576642444264]
This report presents a brief description of our solution in the ICCV CVAMD 2023 CXR-LT Competition.
We empirically explored the effectiveness for CXR diagnosis with the integration of several advanced designs.
Our framework finally achieves 0.349 mAP on the competition test set, ranking in the top five.
arXiv Detail & Related papers (2023-08-17T08:25:55Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
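To illustrate what stain invariance typically involves, one common way to build stain-perturbed views for self-supervised pretraining is to jitter the image in the hematoxylin-eosin-DAB (HED) color space; this is a generic augmentation sketch, not necessarily the mechanism used by the cited paper.

```python
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def hed_jitter(rgb, sigma=0.05, bias=0.02, rng=None):
    """Randomly perturb stain channels of an H&E image (float RGB in [0, 1])."""
    rng = rng or np.random.default_rng()
    hed = rgb2hed(rgb)
    # Multiplicative and additive perturbation of the stain concentrations
    # (hematoxylin, eosin, DAB), applied per channel.
    alpha = rng.uniform(1 - sigma, 1 + sigma, size=3)
    beta = rng.uniform(-bias, bias, size=3)
    return np.clip(hed2rgb(hed * alpha + beta), 0.0, 1.0)

# Two differently "stained" views of the same tile, e.g. for a contrastive objective.
tile = np.random.rand(224, 224, 3)
view_a, view_b = hed_jitter(tile), hed_jitter(tile)
```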
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Placenta Segmentation in Ultrasound Imaging: Addressing Sources of Uncertainty and Limited Field-of-View [12.271784950642344]
We propose a multi-task learning approach that combines the classification of placental location and semantic placenta segmentation in a single convolutional neural network.
Our approach can deliver whole placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation.
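A minimal sketch of the multi-task idea (a shared encoder with one head classifying placental location and one head producing a dense segmentation) follows; the tiny architecture is illustrative and not the cited network.

```python
import torch
import torch.nn as nn

class MultiTaskPlacentaNet(nn.Module):
    """Illustrative shared-encoder network: location classification + segmentation."""

    def __init__(self, num_locations=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Head 1: classify placental location (e.g. anterior / posterior / other).
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_locations)
        )
        # Head 2: binary placenta segmentation at input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.classifier(feats), self.decoder(feats)

net = MultiTaskPlacentaNet()
loc_logits, seg_logits = net(torch.randn(2, 1, 128, 128))
print(loc_logits.shape, seg_logits.shape)  # (2, 3) and (2, 1, 128, 128)
```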
arXiv Detail & Related papers (2022-06-29T16:18:55Z)
- DLTTA: Dynamic Learning Rate for Test-time Adaptation on Cross-domain Medical Images [56.72015587067494]
We propose a novel dynamic learning rate adjustment method for test-time adaptation, called DLTTA.
Our method achieves effective and fast test-time adaptation with consistent performance improvement over current state-of-the-art test-time adaptation methods.
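The sketch below only illustrates the general shape of test-time adaptation with a per-sample learning rate; the entropy-based scaling rule is an assumption for illustration and is not the discrepancy measure proposed by DLTTA.

```python
import torch

def adapt_on_test_batch(model, x, base_lr=1e-3):
    """One illustrative test-time adaptation step with a dynamic learning rate.

    Only BatchNorm affine parameters are updated, by entropy minimization, and the
    step size is scaled by prediction uncertainty. NOTE: this scaling rule is a
    stand-in for illustration, not the criterion used by DLTTA. Assumes a
    BatchNorm-based backbone.
    """
    bn_params = [p for m in model.modules()
                 if isinstance(m, torch.nn.BatchNorm2d)
                 for p in m.parameters()]
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    # Larger steps on more uncertain batches, capped at base_lr.
    max_entropy = torch.log(torch.tensor(float(logits.size(1))))
    lr = base_lr * float(entropy.detach() / max_entropy)
    optimizer = torch.optim.SGD(bn_params, lr=lr)
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    with torch.no_grad():
        return model(x).argmax(dim=1)
```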
arXiv Detail & Related papers (2022-05-27T02:34:32Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images greatly helps estimate intensive care unit (ICU) events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- A novel multiple instance learning framework for COVID-19 severity assessment via data augmentation and self-supervised learning [64.90342559393275]
Quickly and accurately assessing the severity of COVID-19 is an essential problem while millions of people around the world are suffering from the pandemic.
We observe two issues -- weak annotation and insufficient data -- that may obstruct automatic COVID-19 severity assessment with CT images.
Our method could obtain an average accuracy of 95.8%, with 93.6% sensitivity and 96.4% specificity, which outperformed previous works.
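To illustrate the multiple-instance framing (a CT volume as a bag of slice-level instances with only a volume-level label), the sketch below shows generic attention-based MIL pooling; it is not the cited framework, which additionally uses data augmentation and self-supervised learning.

```python
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Generic attention-based MIL pooling over per-slice feature vectors."""

    def __init__(self, feat_dim=512, hidden_dim=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, slice_feats):
        # slice_feats: (num_slices, feat_dim) for one CT volume (the "bag").
        weights = torch.softmax(self.attention(slice_feats), dim=0)  # (num_slices, 1)
        bag_feat = (weights * slice_feats).sum(dim=0)                # (feat_dim,)
        return self.classifier(bag_feat), weights.squeeze(1)

head = AttentionMILHead()
bag_logits, slice_weights = head(torch.randn(40, 512))  # 40 slices from one scan
```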
arXiv Detail & Related papers (2021-02-07T16:30:18Z)