Localization of Critical Findings in Chest X-Ray without Local
Annotations Using Multi-Instance Learning
- URL: http://arxiv.org/abs/2001.08817v1
- Date: Thu, 23 Jan 2020 21:29:14 GMT
- Title: Localization of Critical Findings in Chest X-Ray without Local
Annotations Using Multi-Instance Learning
- Authors: Evan Schwab, André Gooßen, Hrishikesh Deshpande, Axel Saalbach
- Abstract summary: Deep learning models commonly suffer from a lack of explainability.
Deep learning models require locally annotated training data in the form of pixel-level labels or bounding box coordinates.
In this work, we address these shortcomings with an interpretable DL algorithm based on multi-instance learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The automatic detection of critical findings in chest X-rays (CXR), such as
pneumothorax, is important for assisting radiologists in their clinical
workflow like triaging time-sensitive cases and screening for incidental
findings. While deep learning (DL) models have become a promising predictive
technology with near-human accuracy, they commonly suffer from a lack of
explainability, which is an important aspect for clinical deployment of DL
models in the highly regulated healthcare industry. For example, localizing
critical findings in an image is useful for explaining the predictions of DL
classification algorithms. While there have been a host of joint classification
and localization methods for computer vision, the state-of-the-art DL models
require locally annotated training data in the form of pixel level labels or
bounding box coordinates. In the medical domain, this requires an expensive
amount of manual annotation by medical experts for each critical finding. This
requirement becomes a major barrier for training models that can rapidly scale
to various findings. In this work, we address these shortcomings with an
interpretable DL algorithm based on multi-instance learning that jointly
classifies and localizes critical findings in CXR without the need for local
annotations. We show competitive classification results on three different
critical findings (pneumothorax, pneumonia, and pulmonary edema) from three
different CXR datasets.
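The joint classification-and-localization idea can be illustrated with a small patch-based MIL model: a fully convolutional network scores each spatial cell (instance), a smooth pooling such as log-sum-exp aggregates the patch scores into a single image-level logit trained with image-level labels only, and the patch score map doubles as a coarse localization output. The sketch below is a minimal PyTorch illustration under these assumptions; the backbone, patch grid, and pooling choice are placeholders, not the paper's exact architecture.

```python
# Minimal sketch of multi-instance learning (MIL) for joint classification and
# localization from image-level labels only. The backbone, grid size, and
# log-sum-exp pooling are illustrative assumptions, not the authors' exact design.
import torch
import torch.nn as nn


class MILChestXrayNet(nn.Module):
    def __init__(self, num_findings: int = 1, lse_r: float = 5.0):
        super().__init__()
        # Small fully convolutional backbone: each spatial cell of the output
        # acts as one "instance" (image patch) in the MIL bag.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 convolution produces one logit per patch and per finding.
        self.patch_scores = nn.Conv2d(128, num_findings, kernel_size=1)
        self.lse_r = lse_r  # sharpness of the log-sum-exp pooling

    def forward(self, x):
        patch_logits = self.patch_scores(self.features(x))   # (B, C, H', W')
        flat = patch_logits.flatten(2)                        # (B, C, H'*W')
        # Log-sum-exp pooling: a smooth approximation of max over patches,
        # so a single strongly positive patch can drive the image-level label.
        image_logits = torch.logsumexp(self.lse_r * flat, dim=-1) / self.lse_r
        # patch_logits serves as a coarse localization map (upsample to inspect).
        return image_logits, patch_logits


if __name__ == "__main__":
    model = MILChestXrayNet(num_findings=1)
    cxr = torch.randn(2, 1, 256, 256)             # batch of grayscale CXRs
    image_logits, heatmap = model(cxr)
    loss = nn.BCEWithLogitsLoss()(image_logits, torch.tensor([[1.0], [0.0]]))
    loss.backward()                               # trains with image labels only
    print(image_logits.shape, heatmap.shape)      # (2, 1) and (2, 1, 32, 32)
```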
Related papers
- MLVICX: Multi-Level Variance-Covariance Exploration for Chest X-ray Self-Supervised Representation Learning [6.4136876268620115]
MLVICX is an approach to capture rich representations in the form of embeddings from chest X-ray images.
We demonstrate the performance of MLVICX in advancing self-supervised chest X-ray representation learning.
arXiv Detail & Related papers (2024-03-18T06:19:37Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- MDF-Net for abnormality detection by fusing X-rays with clinical data [14.347359031598813]
This study investigates the effects of including patients' clinical information on the performance of deep learning (DL) classifiers for disease location in chest X-rays.
We propose a novel architecture consisting of two fusion methods that enable the model to simultaneously process patients' clinical data and chest X-rays.
Results show that incorporating patients' clinical data in a DL model together with the proposed fusion methods improves the disease localization in chest X-rays by 12% in terms of Average Precision.
arXiv Detail & Related papers (2023-02-26T19:16:57Z)
- DrasCLR: A Self-supervised Framework of Learning Disease-related and Anatomy-specific Representation for 3D Medical Images [23.354686734545176]
We present a novel SSL framework, named DrasCLR, for 3D medical imaging.
We propose two domain-specific contrastive learning strategies: one aims to capture subtle disease patterns inside a local anatomical region, and the other aims to represent severe disease patterns that span larger regions.
arXiv Detail & Related papers (2023-02-21T01:32:27Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using a conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Deep Mining External Imperfect Data for Chest X-ray Disease Screening [57.40329813850719]
We argue that incorporating an external CXR dataset leads to imperfect training data, which raises new challenges.
We formulate the multi-label disease classification problem as weighted independent binary tasks according to the categories.
Our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability.
arXiv Detail & Related papers (2020-06-06T06:48:40Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting (a generic self-training sketch follows this list).
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
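For the sample-efficient self-training entry above, the following is a generic pseudo-labeling loop sketched in PyTorch. It assumes a trained teacher model, an unlabeled CXR pool, and a fixed confidence threshold; the function name, hyperparameters, and loaders are illustrative placeholders and do not reproduce that paper's improved regularization scheme.

```python
# Generic self-training (pseudo-labeling) loop for label-efficient CXR
# classification. Hedged sketch of the general technique only; threshold,
# optimizer, and data loaders are placeholders, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F


def self_train(teacher: nn.Module, student: nn.Module,
               labeled_loader, unlabeled_loader,
               threshold: float = 0.9, epochs: int = 5, device: str = "cpu"):
    """Pseudo-label unlabeled CXRs with a trained teacher, then train a
    student on the labeled set plus the confidently pseudo-labeled images."""
    teacher.eval()
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

    # 1) Harvest confident pseudo-labels from the unlabeled pool.
    pseudo_batches = []
    with torch.no_grad():
        for images in unlabeled_loader:          # loader yields image tensors only
            probs = torch.sigmoid(teacher(images.to(device)))
            confident = (probs > threshold) | (probs < 1.0 - threshold)
            if confident.any():
                pseudo_batches.append(
                    (images, (probs > 0.5).float().cpu(), confident.float().cpu()))

    # 2) Train the student on labeled data and masked pseudo-labeled data.
    student.train()
    for _ in range(epochs):
        for images, labels in labeled_loader:
            loss = F.binary_cross_entropy_with_logits(
                student(images.to(device)), labels.to(device))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        for images, labels, mask in pseudo_batches:
            per_elem = F.binary_cross_entropy_with_logits(
                student(images.to(device)), labels.to(device), reduction="none")
            # Only confident pseudo-labels contribute; uncertain ones are masked out.
            loss = (per_elem * mask.to(device)).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```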