An Adversarial Approach for the Robust Classification of Pneumonia from
Chest Radiographs
- URL: http://arxiv.org/abs/2001.04051v1
- Date: Mon, 13 Jan 2020 03:49:05 GMT
- Title: An Adversarial Approach for the Robust Classification of Pneumonia from
Chest Radiographs
- Authors: Joseph D. Janizek, Gabriel Erion, Alex J. DeGrave, Su-In Lee
- Abstract summary: Deep learning models often exhibit performance loss due to dataset shift.
Models trained using data from one hospital system achieve high predictive performance when tested on data from the same hospital, but perform significantly worse when tested in different hospital systems.
We propose an approach based on adversarial optimization, which allows us to learn more robust models that do not depend on confounders.
- Score: 9.462808515258464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep learning has shown promise in the domain of disease classification
from medical images, models based on state-of-the-art convolutional neural
network architectures often exhibit performance loss due to dataset shift.
Models trained using data from one hospital system achieve high predictive
performance when tested on data from the same hospital, but perform
significantly worse when they are tested in different hospital systems.
Furthermore, even within a given hospital system, deep learning models have
been shown to depend on hospital- and patient-level confounders rather than
meaningful pathology to make classifications. In order for these models to be
safely deployed, we would like to ensure that they do not use confounding
variables to make their classification, and that they will work well even when
tested on images from hospitals that were not included in the training data. We
attempt to address this problem in the context of pneumonia classification from
chest radiographs. We propose an approach based on adversarial optimization,
which allows us to learn more robust models that do not depend on confounders.
Specifically, we demonstrate improved out-of-hospital generalization
performance of a pneumonia classifier by training a model that is invariant to
the view position of chest radiographs (anterior-posterior vs.
posterior-anterior). Our approach leads to better predictive performance on
external hospital data than both a standard baseline and previously proposed
methods to handle confounding, and also suggests a method for identifying
models that may rely on confounders. Code available at
https://github.com/suinleelab/cxr_adv.
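For readers who want a concrete picture of the adversarial optimization described above, the following is a minimal PyTorch sketch of one common instantiation: a gradient reversal layer trains an auxiliary head to predict the confounding view position (AP vs. PA) while pushing the shared features to become uninformative about it. The module names, toy backbone, and lambda_adv weight are illustrative assumptions, not the released cxr_adv implementation; consult the linked repository for the authors' actual code.

```python
# Minimal sketch of adversarial confounder removal via a gradient reversal
# layer (GRL). The encoder + pneumonia head are trained normally, while an
# adversarial view head (AP vs. PA) sends a *negated* gradient back into the
# encoder, encouraging view-invariant features. Names and hyperparameters
# here are assumptions for illustration only.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Pass the gradient through with a flipped sign; no gradient for lam.
        return -ctx.lam * grad_output, None


class AdversarialCXRModel(nn.Module):
    def __init__(self, feat_dim=512, lambda_adv=1.0):
        super().__init__()
        self.lambda_adv = lambda_adv
        # Any CNN backbone producing a feature vector would do here.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        self.pneumonia_head = nn.Linear(feat_dim, 1)  # pneumonia vs. normal
        self.view_head = nn.Linear(feat_dim, 1)       # AP vs. PA (confounder)

    def forward(self, x):
        z = self.encoder(x)
        y_logit = self.pneumonia_head(z)
        # The view head trains normally, but the encoder sees its gradient
        # reversed, so z is pushed to carry no view information.
        v_logit = self.view_head(GradReverse.apply(z, self.lambda_adv))
        return y_logit, v_logit


def training_step(model, optimizer, images, pneumonia_labels, view_labels):
    # Labels are float tensors in {0, 1} of shape (batch,).
    bce = nn.BCEWithLogitsLoss()
    y_logit, v_logit = model(images)
    loss = bce(y_logit.squeeze(1), pneumonia_labels) + \
           bce(v_logit.squeeze(1), view_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time only the pneumonia head is used; lambda_adv controls how strongly view information is suppressed relative to classification accuracy, and the original paper's alternating min-max optimization is an alternative to the gradient-reversal shortcut shown here.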
Related papers
- CROCODILE: Causality aids RObustness via COntrastive DIsentangled LEarning [8.975676404678374]
We introduce our CROCODILE framework, showing how tools from causality can foster a model's robustness to domain shift.
We apply our method to multi-label lung disease classification from CXRs, utilizing over 750,000 images.
arXiv Detail & Related papers (2024-08-09T09:08:06Z) - Refining Tuberculosis Detection in CXR Imaging: Addressing Bias in Deep Neural Networks via Interpretability [1.9936075659851882]
We argue that the reliability of deep learning models is limited, even if they can be shown to obtain perfect classification accuracy on the test data.
We show that pre-training a deep neural network on a large-scale proxy task, as well as using a mixed objective optimization network (MOON), can improve the alignment of decision foundations between models and experts.
arXiv Detail & Related papers (2024-07-19T06:41:31Z) - Vision-Language Modelling For Radiological Imaging and Reports In The
Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable-sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z) - Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders [50.689585476660554]
We propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling.
Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models.
arXiv Detail & Related papers (2022-12-14T06:04:18Z) - The pitfalls of using open data to develop deep learning solutions for
COVID-19 detection in chest X-rays [64.02097860085202]
Deep learning models have been developed to identify COVID-19 from chest X-rays.
Results have been exceptional when training and testing on open-source data.
Data analysis and model evaluations show that the popular open-source dataset COVIDx is not representative of the real clinical problem.
arXiv Detail & Related papers (2021-09-14T10:59:11Z) - Detecting when pre-trained nnU-Net models fail silently for Covid-19
lung lesion segmentation [0.34940201626430645]
We propose a lightweight OOD detection method that exploits the Mahalanobis distance in the feature space (a generic sketch of this technique appears after this list).
We validate our method with a patch-based nnU-Net architecture trained with a multi-institutional dataset.
arXiv Detail & Related papers (2021-07-13T10:48:08Z) - Rethinking annotation granularity for overcoming deep shortcut learning:
A retrospective study on chest radiographs [43.43732218093039]
We compare a popular thoracic disease classification model, CheXNet, and a thoracic lesion detection model, CheXDet.
We found that incorporating external training data even led to performance degradation for CheXNet.
By visualizing the models' decision-making regions, we revealed that CheXNet learned patterns other than the target lesions.
arXiv Detail & Related papers (2021-04-21T14:21:37Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - A Deep Learning Study on Osteosarcoma Detection from Histological Images [6.341765152919201]
The most common type of primary malignant bone tumor is osteosarcoma.
CNNs can significantly decrease surgeons' workload and provide a better prognosis of patient conditions.
CNNs need to be trained on a large amount of data in order to achieve more trustworthy performance.
arXiv Detail & Related papers (2020-11-02T18:16:17Z) - Learning Invariant Feature Representation to Improve Generalization
across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from the same source as its training data starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z) - Self-Training with Improved Regularization for Sample-Efficient Chest
X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
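One of the related papers above ("Detecting when pre-trained nnU-Net models fail silently for Covid-19 lung lesion segmentation") relies on Mahalanobis-distance out-of-distribution scoring in a network's feature space. The sketch below shows the generic recipe, assuming a hypothetical extract_features() helper and an externally chosen threshold; it is not that paper's patch-based nnU-Net implementation.

```python
# Generic Mahalanobis-distance OOD scoring in feature space: fit a Gaussian
# to in-distribution training features, then flag test samples whose features
# lie far from it. The feature extractor and threshold are assumptions.
import numpy as np


def fit_gaussian(train_features: np.ndarray):
    """train_features: (n_samples, feat_dim) array of in-distribution features."""
    mean = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    # Regularize and invert the covariance once; reuse it for every test sample.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return mean, cov_inv


def mahalanobis_score(x: np.ndarray, mean: np.ndarray, cov_inv: np.ndarray) -> float:
    """Squared Mahalanobis distance of a single feature vector x to the fit."""
    d = x - mean
    return float(d @ cov_inv @ d)


# Usage (illustrative): features would come from an intermediate layer of the
# trained network via a hypothetical extract_features() helper.
# train_feats = extract_features(training_images)
# mean, cov_inv = fit_gaussian(train_feats)
# score = mahalanobis_score(extract_features(test_image), mean, cov_inv)
# is_ood = score > threshold  # threshold tuned on held-out in-distribution data
```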
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.