Learning Invariant Feature Representation to Improve Generalization
across Chest X-ray Datasets
- URL: http://arxiv.org/abs/2008.04152v1
- Date: Tue, 4 Aug 2020 07:41:15 GMT
- Title: Learning Invariant Feature Representation to Improve Generalization
across Chest X-ray Datasets
- Authors: Sandesh Ghimire, Satyananda Kashyap, Joy T. Wu, Alexandros Karargyris,
Mehdi Moradi
- Abstract summary: We show that a deep learning model that performs well when tested on the same dataset it was trained on starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
- Score: 55.06983249986729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chest radiography is the most common medical image examination for screening
and diagnosis in hospitals. Automatic interpretation of chest X-rays at the
level of an entry-level radiologist can greatly benefit work prioritization and
assist in analyzing a larger population. Subsequently, several datasets and
deep learning-based solutions have been proposed to identify diseases based on
chest X-ray images. However, these methods have been shown to be vulnerable to a
shift in the source of data: a deep learning model that performs well when tested
on the same dataset it was trained on starts to perform poorly when tested on a
dataset from a different source. In this work, we address this challenge of
generalization to a new source by forcing the network to learn a
source-invariant representation. By employing an adversarial training strategy,
we show that a network can be forced to learn a source-invariant
representation. Through pneumonia-classification experiments on multi-source
chest X-ray datasets, we show that this algorithm improves classification
accuracy on X-ray data from a new source.
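The adversarial strategy described above can be pictured as a shared feature extractor trained jointly with a disease classifier and against a source (dataset) discriminator. Below is a minimal PyTorch sketch assuming a gradient-reversal implementation of that objective; the backbone, head sizes, number of sources, and the weighting `lambda_adv` are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of domain-adversarial training for a source-invariant
# chest X-ray classifier. Sizes and the gradient-reversal formulation are
# illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradReverse(Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class InvariantClassifier(nn.Module):
    def __init__(self, num_sources=3, lambda_adv=1.0):
        super().__init__()
        self.lambda_adv = lambda_adv
        # Shared feature extractor (stand-in for the real backbone CNN).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, 2)                # pneumonia vs. normal
        self.discriminator = nn.Linear(32, num_sources)   # which dataset did the image come from?

    def forward(self, x):
        z = self.features(x)
        disease_logits = self.classifier(z)
        # Reversed gradients push the shared features toward fooling the
        # source discriminator, i.e. toward a source-invariant representation.
        source_logits = self.discriminator(GradReverse.apply(z, self.lambda_adv))
        return disease_logits, source_logits


# One illustrative training step on dummy multi-source data.
model = InvariantClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(8, 1, 224, 224)        # grayscale chest X-rays
labels = torch.randint(0, 2, (8,))          # pneumonia labels
sources = torch.randint(0, 3, (8,))         # dataset-of-origin labels

disease_logits, source_logits = model(images)
loss = (nn.functional.cross_entropy(disease_logits, labels)
        + nn.functional.cross_entropy(source_logits, sources))
opt.zero_grad()
loss.backward()
opt.step()
```

In this formulation the discriminator learns to predict which dataset an image came from, while the reversed gradients push the shared features toward being uninformative about the source, which is one common way to impose a source-invariance constraint.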
Related papers
- Position-Guided Prompt Learning for Anomaly Detection in Chest X-Rays [46.78926066405227]
Anomaly detection in chest X-rays is a critical task.
Recently, CLIP-based methods, pre-trained on a large number of medical images, have shown impressive performance on zero/few-shot downstream tasks.
We propose a position-guided prompt learning method to adapt the task data to the frozen CLIP-based model.
arXiv Detail & Related papers (2024-05-20T12:11:41Z)
- MLVICX: Multi-Level Variance-Covariance Exploration for Chest X-ray Self-Supervised Representation Learning [6.4136876268620115]
MLVICX is an approach to capture rich representations in the form of embeddings from chest X-ray images.
We demonstrate the performance of MLVICX in advancing self-supervised chest X-ray representation learning.
arXiv Detail & Related papers (2024-03-18T06:19:37Z)
- Deep Reinforcement Learning Framework for Thoracic Diseases Classification via Prior Knowledge Guidance [49.87607548975686]
The scarcity of labeled data for related diseases poses a huge challenge to an accurate diagnosis.
We propose a novel deep reinforcement learning framework, which introduces prior knowledge to direct the learning of diagnostic agents.
Our approach's performance was demonstrated using the well-known NIH ChestX-ray14 and CheXpert datasets.
arXiv Detail & Related papers (2023-06-02T01:46:31Z)
- Improving Chest X-Ray Classification by RNN-based Patient Monitoring [0.34998703934432673]
We analyze how information about diagnosis can improve CNN-based image classification models.
We show that a model trained on additional patient history information outperforms a model trained without the information by a significant margin.
arXiv Detail & Related papers (2022-10-28T11:47:15Z)
- Long-Tailed Classification of Thorax Diseases on Chest X-Ray: A New Benchmark Study [75.05049024176584]
We present a benchmark study of the long-tailed learning problem in the specific domain of thorax diseases on chest X-rays.
We focus on learning from naturally distributed chest X-ray data, optimizing classification accuracy over not only the common "head" classes, but also the rare yet critical "tail" classes.
The benchmark consists of two chest X-ray datasets for 19- and 20-way thorax disease classification, containing classes with as many as 53,000 and as few as 7 labeled training images.
arXiv Detail & Related papers (2022-08-29T04:34:15Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- The pitfalls of using open data to develop deep learning solutions for COVID-19 detection in chest X-rays [64.02097860085202]
Deep learning models have been developed to identify COVID-19 from chest X-rays.
Results have been exceptional when training and testing on open-source data.
Data analysis and model evaluations show that the popular open-source dataset COVIDx is not representative of the real clinical problem.
arXiv Detail & Related papers (2021-09-14T10:59:11Z)
- Covid-19 Detection from Chest X-ray and Patient Metadata using Graph Convolutional Neural Networks [6.420262246029286]
We propose a novel Graph Convolution Neural Network (GCN) that is capable of identifying bio-markers of Covid-19 pneumonia.
The proposed method exploits relational knowledge between data instances and their features using a graph representation and applies convolution to learn from the graph data (a generic graph-convolution sketch follows this list).
arXiv Detail & Related papers (2021-05-20T13:13:29Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
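As referenced in the graph-convolutional entry above (Covid-19 Detection from Chest X-ray and Patient Metadata using Graph Convolutional Neural Networks), the following is a generic sketch of graph convolution over an instance-similarity graph. The graph construction (thresholded cosine similarity over image features), layer sizes, and inputs are assumptions for illustration, not that paper's exact architecture.

```python
# Generic sketch of graph convolution over an instance-similarity graph.
# Graph construction and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        return torch.relu(a_hat @ self.linear(h))


def normalized_adjacency(a):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 used by standard GCNs."""
    a = a + torch.eye(a.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


# Dummy per-patient features (e.g. pooled CNN embeddings of each X-ray,
# optionally concatenated with metadata) and a similarity graph over them.
n, feat_dim = 32, 128
features = torch.randn(n, feat_dim)
sim = torch.nn.functional.cosine_similarity(
    features.unsqueeze(1), features.unsqueeze(0), dim=-1
)
a_hat = normalized_adjacency((sim > 0.5).float())  # threshold similarity into edges

layer1, layer2 = GCNLayer(feat_dim, 64), GCNLayer(64, 32)
classifier = nn.Linear(32, 2)                      # Covid-19 pneumonia vs. not

h = layer2(layer1(features, a_hat), a_hat)
logits = classifier(h)                             # one prediction per patient node
print(logits.shape)                                # torch.Size([32, 2])
```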
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.