Performance or Trust? Why Not Both. Deep AUC Maximization with
Self-Supervised Learning for COVID-19 Chest X-ray Classifications
- URL: http://arxiv.org/abs/2112.08363v1
- Date: Tue, 14 Dec 2021 21:16:52 GMT
- Title: Performance or Trust? Why Not Both. Deep AUC Maximization with
Self-Supervised Learning for COVID-19 Chest X-ray Classifications
- Authors: Siyuan He, Pengcheng Xi, Ashkan Ebadi, Stephane Tremblay, Alexander
Wong
- Abstract summary: In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
- Score: 72.52228843498193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Effective representation learning is key to improving model performance
for medical image analysis. In training deep learning models, a compromise
often must be made between performance and trust, both of which are essential
for medical applications. Moreover, models optimized with cross-entropy loss
tend to suffer from unwarranted overconfidence in the majority class and
over-cautiousness in the minority class. In this work, we integrate a new
surrogate loss with self-supervised learning for computer-aided screening of
COVID-19 patients using radiography images. In addition, we adopt a new
quantification score to measure a model's trustworthiness. An ablation study on
feature learning methods and loss functions is conducted for both performance
and trust. Comparisons show that leveraging the new surrogate loss on
self-supervised models can produce label-efficient networks that are both
high-performing and trustworthy.
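The surrogate loss referenced here comes from the Deep AUC Maximization line of work, which optimizes the area under the ROC curve directly instead of cross-entropy, addressing the majority-class overconfidence noted in the abstract. A minimal sketch of one standard pairwise AUC surrogate (a squared hinge on positive-negative score gaps); this illustrates the idea only and is not the paper's exact margin-based min-max formulation:

```python
import numpy as np

def pairwise_auc_surrogate(scores, labels, margin=1.0):
    """Pairwise squared-hinge surrogate for the AUC.

    Penalizes every positive/negative pair whose score gap falls below
    `margin`; minimizing it pushes positive scores above negative ones,
    which is exactly what AUC measures.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]  # all P x N positive-negative gaps
    return np.mean(np.maximum(0.0, margin - diffs) ** 2)

# Perfectly separated scores incur zero loss at margin 0
scores = np.array([0.9, 0.8, 0.2, 0.1])
labels = np.array([1, 1, 0, 0])
print(pairwise_auc_surrogate(scores, labels, margin=0.0))  # 0.0
```

Unlike per-sample cross-entropy, the loss is computed over pairs, so a rare positive class still contributes to every pair it appears in.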
Related papers
- Less is More: Selective Reduction of CT Data for Self-Supervised Pre-Training of Deep Learning Models with Contrastive Learning Improves Downstream Classification Performance [7.945551345449388]
Current findings indicate a strong potential for contrastive pre-training on medical images.
We hypothesize that the similarity of medical images hinders the success of contrastive learning in the medical imaging domain.
We investigate different strategies based on deep embedding, information theory, and hashing in order to identify and reduce redundancy in medical pre-training datasets.
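One of the strategies named above (deep embedding) can be sketched as flagging near-duplicate images whose embedding vectors are almost parallel; the function name and threshold here are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def redundant_pairs(embeddings, threshold=0.95):
    """Flag near-duplicate samples whose deep-embedding cosine
    similarity exceeds `threshold` (a hypothetical cutoff)."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T                       # pairwise cosine similarity
    i, j = np.triu_indices(len(embeddings), k=1)  # each unordered pair once
    return [(int(a), int(b)) for a, b in zip(i, j) if sim[a, b] > threshold]

# Two almost-parallel embeddings are flagged; the orthogonal one is kept
emb = np.array([[1.0, 0.0], [1.0, 0.01], [0.0, 1.0]])
print(redundant_pairs(emb))  # [(0, 1)]
```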
arXiv Detail & Related papers (2024-10-18T15:08:05Z)
- Improved Unet brain tumor image segmentation based on GSConv module and ECA attention mechanism [0.0]
An improved deep learning model for brain tumor segmentation, based on the U-Net architecture, is discussed.
Based on the traditional U-Net, we introduce GSConv module and ECA attention mechanism to improve the performance of the model in medical image segmentation tasks.
arXiv Detail & Related papers (2024-09-20T16:35:19Z)
- AI in the Loop -- Functionalizing Fold Performance Disagreement to Monitor Automated Medical Image Segmentation Pipelines [0.0]
Methods for automatically flagging poor-performing predictions are essential for safely implementing machine learning in clinical practice.
We present a readily adoptable method using sub-models trained on different dataset folds, where their disagreement serves as a surrogate for model confidence.
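The fold-disagreement idea above can be sketched in a few lines: train one sub-model per dataset fold, then treat the spread of their predictions on a sample as an inverse confidence signal. The fold models here are stand-in arrays, not the paper's segmentation networks:

```python
import numpy as np

def fold_disagreement(fold_predictions):
    """Per-sample standard deviation of predictions across sub-models
    trained on different dataset folds; high values flag samples the
    ensemble is uncertain about."""
    preds = np.stack(fold_predictions)  # shape (n_folds, n_samples)
    return preds.std(axis=0)

# Three hypothetical fold models agree on sample 0 but not on sample 1
folds = [np.array([0.9, 0.2]), np.array([0.9, 0.8]), np.array([0.9, 0.5])]
disagreement = fold_disagreement(folds)
print(disagreement)  # sample 1 gets the larger (less trusted) score
```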
arXiv Detail & Related papers (2023-05-15T21:35:23Z)
- Towards Trustworthy Healthcare AI: Attention-Based Feature Learning for COVID-19 Screening With Chest Radiography [70.37371604119826]
Building AI models with trustworthiness is important especially in regulated areas such as healthcare.
Previous work uses convolutional neural networks as the backbone architecture, which have been shown to be prone to over-caution and overconfidence in making decisions.
We propose a feature learning approach using Vision Transformers, which use an attention-based mechanism.
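The attention-based mechanism those Vision Transformers rely on reduces, at its core, to scaled dot-product attention, softmax(QKᵀ/√d)V. A self-contained sketch of that core operation (not the paper's full architecture):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core operation of Transformer attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

# With sharply peaked queries, each token attends almost only to itself,
# so the output approximately reproduces V
Q = K = np.eye(2) * 10.0
V = np.array([[1.0, 2.0], [3.0, 4.0]])
out = scaled_dot_product_attention(Q, K, V)
```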
arXiv Detail & Related papers (2022-07-19T14:55:42Z)
- On visual self-supervision and its effect on model robustness [9.313899406300644]
Self-supervision can indeed improve model robustness; however, the devil is in the details.
Although self-supervised pre-training yields benefits in improving adversarial training, we observe no benefit in model robustness or accuracy if self-supervision is incorporated into adversarial training.
arXiv Detail & Related papers (2021-12-08T16:22:02Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Evaluating the Robustness of Self-Supervised Learning in Medical Imaging [57.20012795524752]
Self-supervision has been demonstrated to be an effective learning strategy when training target tasks on small annotated datasets.
We show that networks trained via self-supervised learning have superior robustness and generalizability compared to fully-supervised learning in the context of medical imaging.
arXiv Detail & Related papers (2021-05-14T17:49:52Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.