A Relational-learning Perspective to Multi-label Chest X-ray
Classification
- URL: http://arxiv.org/abs/2103.06220v1
- Date: Wed, 10 Mar 2021 17:44:59 GMT
- Title: A Relational-learning Perspective to Multi-label Chest X-ray
Classification
- Authors: Anjany Sekuboyina, Daniel Oñoro-Rubio, Jens Kleesiek and Brandon
Malone
- Abstract summary: Multi-label classification of chest X-ray images is frequently performed using discriminative approaches.
We propose a novel knowledge graph reformulation of multi-label classification.
- Score: 1.4489463428855132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-label classification of chest X-ray images is frequently performed
using discriminative approaches, i.e. learning to map an image directly to its
binary labels. Such approaches make it challenging to incorporate auxiliary
information such as annotation uncertainty or dependencies among the labels.
Building towards this, we propose a novel knowledge graph reformulation of
multi-label classification, which not only readily increases predictive
performance of an encoder but also serves as a general framework for
introducing new domain knowledge.
Specifically, we construct a multi-modal knowledge graph out of the chest
X-ray images and their labels and pose multi-label classification as a link
prediction problem. Incorporating auxiliary information can then simply be
achieved by adding additional nodes and relations among them. When tested on a
publicly available radiograph dataset (CheXpert), our relational reformulation
using a naive knowledge graph outperforms the state of the art by achieving an
area under the ROC curve of 83.5%, an improvement of ~1 point over a purely
discriminative approach.
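As a rough illustration of this reformulation, the sketch below (PyTorch) treats each image as a head entity, each finding label as a tail entity, and links them through a single hypothetical "has_finding" relation scored DistMult-style; the tiny CNN encoder, embedding size, and scoring function are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: multi-label classification posed as link prediction on a small
# knowledge graph. Images are head entities, findings are tail entities, and a
# single assumed "has_finding" relation links them (DistMult-style scoring).
import torch
import torch.nn as nn

class ImageAsEntityLinkPredictor(nn.Module):
    def __init__(self, num_labels: int, dim: int = 256):
        super().__init__()
        # Illustrative image encoder; any CNN backbone producing a dim-d vector works.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )
        self.label_emb = nn.Embedding(num_labels, dim)       # tail (label) entities
        self.relation_emb = nn.Parameter(torch.randn(dim))   # the "has_finding" relation

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        h = self.encoder(images)                   # (B, dim) head embeddings
        t = self.label_emb.weight                  # (L, dim) tail embeddings
        # DistMult score <h, r, t>, broadcast over every label at once.
        return (h * self.relation_emb) @ t.T       # (B, L) link logits

# Each (image, has_finding, label) link is a binary prediction, trained with BCE.
model = ImageAsEntityLinkPredictor(num_labels=14)
images = torch.randn(4, 1, 224, 224)
targets = torch.randint(0, 2, (4, 14)).float()
loss = nn.BCEWithLogitsLoss()(model(images), targets)
```

In this framing, auxiliary knowledge (label dependencies, annotation uncertainty, patient metadata) can in principle be attached as additional nodes and relations, with the link-prediction objective left unchanged.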
Related papers
- Learning Generalized Medical Image Representations through Image-Graph Contrastive Pretraining [11.520404630575749]
We develop an Image-Graph Contrastive Learning framework that pairs chest X-rays with structured report knowledge graphs automatically extracted from radiology notes.
Our approach uniquely encodes the disconnected graph components via a relational graph convolution network and transformer attention.
arXiv Detail & Related papers (2024-05-15T12:27:38Z)
- Graph Attention Transformer Network for Multi-Label Image Classification [50.0297353509294]
We propose a general framework for multi-label image classification that can effectively mine complex inter-label relationships.
Our proposed methods can achieve state-of-the-art performance on three datasets.
arXiv Detail & Related papers (2022-03-08T12:39:05Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
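A minimal sketch of that cross-modal pairing, assuming an InfoNCE-style objective with in-batch negatives; the temperature value and the symmetric two-direction loss are illustrative choices, not necessarily the paper's.

```python
# Sketch: InfoNCE-style cross-modal loss where the radiomic embedding of an
# X-ray is the positive for that X-ray's image embedding; all other batch
# entries act as negatives. Temperature and symmetry are assumed choices.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_feats: torch.Tensor,
                                 radiomic_feats: torch.Tensor,
                                 temperature: float = 0.1) -> torch.Tensor:
    """img_feats, radiomic_feats: (B, D) outputs of two separate encoders."""
    img = F.normalize(img_feats, dim=1)
    rad = F.normalize(radiomic_feats, dim=1)
    logits = img @ rad.T / temperature                        # (B, B) similarities
    targets = torch.arange(img.size(0), device=img.device)    # positives on the diagonal
    # Symmetric: image -> radiomics and radiomics -> image directions.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

loss = cross_modal_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```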
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- Learning Image Labels On-the-fly for Training Robust Classification Models [13.669654965671604]
We show how noisy annotations (e.g., from different algorithm-based labelers) can be utilized together and mutually benefit the learning of classification tasks.
A meta-training-based label-sampling module is designed to attend to the labels that benefit model learning the most through additional back-propagation processes.
arXiv Detail & Related papers (2020-09-22T05:38:44Z)
- Knowledge-Guided Multi-Label Few-Shot Learning for General Image Recognition [75.44233392355711]
The KGGR framework exploits prior knowledge of statistical label correlations together with deep neural networks.
It first builds a structured knowledge graph to correlate different labels based on statistical label co-occurrence.
Then, it introduces label semantics to guide the learning of semantic-specific features.
It exploits a graph propagation network to explore graph node interactions.
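A small sketch of co-occurrence-based graph construction plus one propagation step, assuming thresholded conditional probabilities and row normalization; the actual KGGR layers may differ.

```python
# Sketch: label-correlation graph from co-occurrence statistics plus one
# propagation step. Thresholded conditional probabilities and row
# normalization are assumed simplifications.
import torch

def cooccurrence_adjacency(label_matrix: torch.Tensor, threshold: float = 0.1) -> torch.Tensor:
    """label_matrix: (N, L) binary training labels -> (L, L) normalized adjacency."""
    counts = label_matrix.T @ label_matrix             # pairwise co-occurrence counts
    freq = label_matrix.sum(dim=0).clamp(min=1)        # per-label frequency
    cond_prob = counts / freq.unsqueeze(1)             # approx. P(label_j | label_i)
    adj = (cond_prob > threshold).float()
    adj.fill_diagonal_(1.0)                            # keep self-connections
    return adj / adj.sum(dim=1, keepdim=True)          # row-normalize

def propagate(label_feats: torch.Tensor, adj: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """One graph-propagation step: aggregate neighbor features, then transform."""
    return torch.relu(adj @ label_feats @ weight)

labels = torch.randint(0, 2, (1000, 14)).float()       # toy training labels
adj = cooccurrence_adjacency(labels)
label_feats = torch.randn(14, 64)                      # e.g. label word embeddings
updated = propagate(label_feats, adj, torch.randn(64, 64))
```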
arXiv Detail & Related papers (2020-09-20T15:05:29Z)
- Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits unlabeled data by encouraging prediction consistency for a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
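A hedged sketch of the consistency idea on unlabeled data, using additive noise as the perturbation and an MSE penalty between the two predictions; the paper's specific augmentations and relation-driven term are not reproduced here.

```python
# Sketch: consistency regularization on unlabeled images. Two perturbed views
# of the same input should yield similar predictions; additive noise and an
# MSE penalty stand in for the paper's augmentations and consistency term.
import torch
import torch.nn as nn
import torch.nn.functional as F

def consistency_loss(model: nn.Module, unlabeled: torch.Tensor, noise_std: float = 0.05) -> torch.Tensor:
    view_a = unlabeled + noise_std * torch.randn_like(unlabeled)
    view_b = unlabeled + noise_std * torch.randn_like(unlabeled)
    with torch.no_grad():                       # one branch serves as the target
        target = torch.sigmoid(model(view_a))
    pred = torch.sigmoid(model(view_b))
    return F.mse_loss(pred, target)             # added to the supervised loss in practice

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 14))   # toy multi-label classifier
unlabeled_batch = torch.randn(8, 1, 64, 64)
l_consistency = consistency_loss(model, unlabeled_batch)
```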
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
- Hierarchical Image Classification using Entailment Cone Embeddings [68.82490011036263]
We first inject label-hierarchy knowledge into an arbitrary CNN-based classifier.
We empirically show that availability of such external semantic information in conjunction with the visual semantics from images boosts overall performance.
arXiv Detail & Related papers (2020-04-02T10:22:02Z)
- Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
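A rough sketch of an input-conditioned (dynamic) adjacency over per-label features followed by one GCN-style update; cosine similarity with softmax normalization is an illustrative instantiation, not the paper's exact formulation.

```python
# Sketch: a dynamic, input-conditioned adjacency over per-label features,
# followed by one GCN-style update. Cosine similarity + softmax normalization
# is an assumed instantiation of "dynamic adjacency".
import torch
import torch.nn.functional as F

def dynamic_adjacency(label_feats: torch.Tensor) -> torch.Tensor:
    """label_feats: (B, L, D) per-label features for each image in the batch."""
    normed = F.normalize(label_feats, dim=-1)
    sim = normed @ normed.transpose(1, 2)       # (B, L, L) cosine similarities
    return torch.softmax(sim, dim=-1)           # row-normalized, changes per input

def gcn_update(label_feats: torch.Tensor, adj: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    return torch.relu(adj @ label_feats @ weight)

feats = torch.randn(4, 14, 256)                 # e.g. pooled label-wise CNN features
adj = dynamic_adjacency(feats)
updated = gcn_update(feats, adj, torch.randn(256, 256))
```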
arXiv Detail & Related papers (2020-02-26T17:10:48Z)