OXnet: Omni-supervised Thoracic Disease Detection from Chest X-rays
- URL: http://arxiv.org/abs/2104.03218v1
- Date: Wed, 7 Apr 2021 16:12:31 GMT
- Title: OXnet: Omni-supervised Thoracic Disease Detection from Chest X-rays
- Authors: Luyang Luo, Hao Chen, Yanning Zhou, Huangjing Lin, Pheng-Ann Heng
- Abstract summary: OXnet is the first deep omni-supervised thoracic disease detection network.
It uses as much available supervision as possible for CXR diagnosis.
It outperforms competitive methods by significant margins.
- Score: 7.810011959069686
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Chest X-ray (CXR) is the most common radiological examination worldwide for
assessing various thoracic diseases. Automatically localizing lesions from CXR is a
promising way to alleviate radiologists' daily reading burden. However, CXR
datasets often contain numerous image-level annotations, scarce lesion-level
annotations, and, more often, no annotations at all. Thus far, unifying different
supervision granularities to develop thoracic disease detection algorithms has
not been comprehensively addressed. In this paper, we present OXnet, the first
deep omni-supervised thoracic disease detection network to our best knowledge
that uses as much available supervision as possible for CXR diagnosis. Besides
fully supervised learning, to enable learning from weakly-annotated data, we
guide the information from a global classification branch to the lesion
localization branch by a dual attention alignment module. To further enhance
global information learning, we impose intra-class compactness and inter-class
separability with a global prototype alignment module. For unsupervised data
learning, we extend the focal loss to its soft form to distill knowledge
from a teacher model. Extensive experiments show that the proposed OXnet
outperforms competitive methods by significant margins. Further, we investigate
omni-supervision under various annotation granularities and corroborate that
OXnet is a promising choice for mitigating the annotation shortage in medical
image diagnosis.
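The abstract mentions extending the focal loss to a soft form for distilling a teacher model's predictions into the detector, but it does not give the exact formulation. As a rough illustration only, the PyTorch-style sketch below shows one common way to soften the focal loss so that it accepts a teacher's per-class sigmoid probabilities as soft targets; the function name, tensor shapes, and the gamma/alpha hyperparameters are assumptions, not the authors' implementation.
```python
# Illustrative sketch only: a "soft" focal loss that distills a teacher's
# per-class sigmoid probabilities into a student model. Names, shapes, and
# default hyperparameters (gamma, alpha) are assumptions for illustration.
import torch
import torch.nn.functional as F


def soft_focal_loss(student_logits, teacher_probs, gamma=2.0, alpha=0.25,
                    reduction="mean"):
    """Focal loss with the hard 0/1 targets replaced by soft teacher targets.

    student_logits: raw per-class logits from the student, shape (N, C).
    teacher_probs:  per-class probabilities from the teacher, shape (N, C).
    """
    p = torch.sigmoid(student_logits)
    # Binary cross-entropy against the soft teacher targets.
    ce = F.binary_cross_entropy_with_logits(
        student_logits, teacher_probs, reduction="none")
    # Focal modulation: p_t generalizes the usual focal-loss p_t and reduces
    # to it when teacher_probs are hard 0/1 labels.
    p_t = p * teacher_probs + (1.0 - p) * (1.0 - teacher_probs)
    modulator = (1.0 - p_t) ** gamma
    # Alpha balancing weighted by the soft targets instead of 0/1 labels.
    alpha_t = alpha * teacher_probs + (1.0 - alpha) * (1.0 - teacher_probs)
    loss = alpha_t * modulator * ce
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss


# Toy usage: 8 samples, 14 thoracic disease classes (shapes assumed).
student_logits = torch.randn(8, 14, requires_grad=True)
teacher_probs = torch.sigmoid(torch.randn(8, 14)).detach()
loss = soft_focal_loss(student_logits, teacher_probs)
loss.backward()
```
With hard 0/1 targets this expression reduces to the standard focal loss, which is the property a soft extension would be expected to preserve; the actual OXnet formulation may differ.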
Related papers
- MLVICX: Multi-Level Variance-Covariance Exploration for Chest X-ray Self-Supervised Representation Learning [6.4136876268620115]
MLVICX is an approach to capture rich representations in the form of embeddings from chest X-ray images.
We demonstrate the performance of MLVICX in advancing self-supervised chest X-ray representation learning.
arXiv Detail & Related papers (2024-03-18T06:19:37Z)
- I-AI: A Controllable & Interpretable AI System for Decoding Radiologists' Intense Focus for Accurate CXR Diagnoses [9.260958560874812]
Interpretable Artificial Intelligence (I-AI) is a novel and unified controllable interpretable pipeline.
Our I-AI addresses three key questions: where a radiologist looks, how long they focus on specific areas, and what findings they diagnose.
arXiv Detail & Related papers (2023-09-24T04:48:44Z)
- Deep Reinforcement Learning Framework for Thoracic Diseases Classification via Prior Knowledge Guidance [49.87607548975686]
The scarcity of labeled data for related diseases poses a huge challenge to an accurate diagnosis.
We propose a novel deep reinforcement learning framework, which introduces prior knowledge to direct the learning of diagnostic agents.
Our approach's performance was demonstrated using the well-known NIH ChestX-ray14 and CheXpert datasets.
arXiv Detail & Related papers (2023-06-02T01:46:31Z)
- Anatomy-Guided Weakly-Supervised Abnormality Localization in Chest X-rays [17.15666977702355]
We propose an Anatomy-Guided chest X-ray Network (AGXNet) to address weak annotation issues.
Our framework consists of a cascade of two networks, one responsible for identifying anatomical abnormalities and the other for pathological observations.
Our results on the MIMIC-CXR dataset demonstrate the effectiveness of AGXNet in disease and anatomical abnormality localization.
arXiv Detail & Related papers (2022-06-25T18:33:27Z)
- Breaking with Fixed Set Pathology Recognition through Report-Guided Contrastive Training [23.506879497561712]
We employ a contrastive global-local dual-encoder architecture to learn concepts directly from unstructured medical reports.
We evaluate our approach on the large-scale chest X-Ray datasets MIMIC-CXR, CheXpert, and ChestX-Ray14 for disease classification.
arXiv Detail & Related papers (2022-05-14T21:44:05Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- In-Line Image Transformations for Imbalanced, Multiclass Computer Vision Classification of Lung Chest X-Rays [91.3755431537592]
This study leverages the existing literature to apply image transformations that balance the scarcity of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well when tested on data from the same source as its training data starts to perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
- Deep Mining External Imperfect Data for Chest X-ray Disease Screening [57.40329813850719]
We argue that incorporating an external CXR dataset leads to imperfect training data, which raises new challenges.
We formulate the multi-label disease classification problem as weighted independent binary tasks according to the categories.
Our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability.
arXiv Detail & Related papers (2020-06-06T06:48:40Z)
- Localization of Critical Findings in Chest X-Ray without Local Annotations Using Multi-Instance Learning [0.0]
Deep learning models commonly suffer from a lack of explainability and require locally annotated training data in the form of pixel-level labels or bounding-box coordinates.
In this work, we address these shortcomings with an interpretable DL algorithm based on multi-instance learning.
arXiv Detail & Related papers (2020-01-23T21:29:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.