Variational Knowledge Distillation for Disease Classification in Chest
X-Rays
- URL: http://arxiv.org/abs/2103.10825v1
- Date: Fri, 19 Mar 2021 14:13:56 GMT
- Title: Variational Knowledge Distillation for Disease Classification in Chest
X-Rays
- Authors: Tom van Sonsbeek, Xiantong Zhen, Marcel Worring and Ling Shao
- Abstract summary: We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
- Score: 102.04931207504173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disease classification relying solely on imaging data attracts great interest
in medical image analysis. Current models could be further improved, however,
by also employing Electronic Health Records (EHRs), which contain rich
information on patients and findings from clinicians. It is challenging to
incorporate this information into disease classification due to the high
reliance on clinician input in EHRs, limiting the possibility for automated
diagnosis. In this paper, we propose variational knowledge distillation
(VKD), a new probabilistic inference framework for
disease classification based on X-rays that leverages knowledge from EHRs.
Specifically, we introduce a conditional latent variable model, where we infer
the latent representation of the X-ray image with the variational posterior
conditioning on the associated EHR text. By doing so, the model acquires the
ability to extract the visual features relevant to the disease during learning
and can therefore perform more accurate classification for unseen patients at
inference based solely on their X-ray scans. We demonstrate the effectiveness
of our method on three public benchmark datasets with paired X-ray images and
EHRs. The results show that the proposed variational knowledge distillation can
consistently improve the performance of medical image classification and
significantly surpasses current methods.
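The core mechanism described above, an image-only prior distilled toward an EHR-text-conditioned variational posterior through a KL term, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the module names, feature dimensions, encoders feeding the heads, and the number of classes are all assumptions.

```python
# Minimal sketch of a conditional latent variable model in the spirit of VKD:
# a prior p(z | x) sees only image features, a posterior q(z | x, t) also sees
# EHR-text features, and a KL(q || p) term pushes text-informed structure into
# the image-only prior used at test time. All names/dims are illustrative.
import torch
import torch.nn as nn

class VKDSketch(nn.Module):
    def __init__(self, img_dim=512, txt_dim=768, z_dim=128, n_classes=14):
        super().__init__()
        self.prior_head = nn.Linear(img_dim, 2 * z_dim)            # p(z | x)
        self.post_head = nn.Linear(img_dim + txt_dim, 2 * z_dim)   # q(z | x, t)
        self.classifier = nn.Linear(z_dim, n_classes)

    @staticmethod
    def _split(stats):
        mu, log_var = stats.chunk(2, dim=-1)
        return mu, log_var

    def forward(self, img_feat, txt_feat=None):
        p_mu, p_logvar = self._split(self.prior_head(img_feat))
        if txt_feat is not None:
            # Training: the posterior conditions on image + EHR text.
            q_mu, q_logvar = self._split(
                self.post_head(torch.cat([img_feat, txt_feat], dim=-1)))
        else:
            # Inference: fall back to the image-only prior (no EHR needed).
            q_mu, q_logvar = p_mu, p_logvar
        # Reparameterised sample of the latent representation.
        z = q_mu + torch.randn_like(q_mu) * torch.exp(0.5 * q_logvar)
        logits = self.classifier(z)
        # KL(q || p) between two diagonal Gaussians, summed over latent dims.
        kl = 0.5 * (p_logvar - q_logvar
                    + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
                    - 1).sum(dim=-1)
        return logits, kl.mean()
```

A training step would combine a standard classification loss on `logits` with a weighted KL term; at inference, calling the module without EHR text falls back to the image-only prior, mirroring the image-only test-time setting described in the abstract.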
Related papers
- Unsupervised Machine Learning for Osteoporosis Diagnosis Using Singh Index Clustering on Hip Radiographs [0.0]
The Singh Index (SI) provides a straightforward, semi-quantitative means of osteoporosis diagnosis through plain hip radiographs.
This study aims to automate SI identification from radiographs using machine learning algorithms.
arXiv Detail & Related papers (2024-11-22T08:44:43Z)
- Expert Uncertainty and Severity Aware Chest X-Ray Classification by Multi-Relationship Graph Learning [48.29204631769816]
We re-extract disease labels from CXR reports to make them more realistic by considering disease severity and uncertainty in classification.
Our experimental results show that models considering disease severity and uncertainty outperform previous state-of-the-art methods.
arXiv Detail & Related papers (2023-09-06T19:19:41Z)
- DINO-CXR: A self supervised method based on vision transformer for chest X-ray classification [0.9883261192383611]
We propose DINO-CXR, a novel adaptation of the self-supervised method DINO, based on a vision transformer, for chest X-ray classification.
A comparative analysis is performed to show the effectiveness of the proposed method for both pneumonia and COVID-19 detection.
arXiv Detail & Related papers (2023-08-01T11:58:49Z)
- Improving Chest X-Ray Classification by RNN-based Patient Monitoring [0.34998703934432673]
We analyze how information about diagnosis can improve CNN-based image classification models.
We show that a model trained on additional patient history information outperforms a model trained without the information by a significant margin.
arXiv Detail & Related papers (2022-10-28T11:47:15Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- In-Line Image Transformations for Imbalanced, Multiclass Computer Vision Classification of Lung Chest X-Rays [91.3755431537592]
This study aims to leverage a body of literature to apply image transformations that balance the scarcity of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Potential Features of ICU Admission in X-ray Images of COVID-19 Patients [8.83608410540057]
This paper presents an original methodology for extracting semantic features that correlate to severity from a data set with patient ICU admission labels.
The methodology employs a neural network trained to recognise lung pathologies to extract the semantic features.
The method is shown to be capable of selecting images representative of the learned features, which could convey some information about their common locations in the lung.
arXiv Detail & Related papers (2020-09-26T13:48:39Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits unlabeled data by encouraging prediction consistency for a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
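The consistency idea in the last entry above, penalizing disagreement between predictions on perturbed copies of an unlabeled image, can be sketched as below. The paper's relation-driven component is omitted, and the model and augmentation function are placeholders rather than that work's implementation.

```python
# Generic consistency-regularisation sketch for unlabeled chest X-rays:
# two random perturbations of the same batch should yield similar predictions.
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_images, augment):
    # Two independent random perturbations of the same unlabeled batch.
    view_a, view_b = augment(unlabeled_images), augment(unlabeled_images)
    probs_a = torch.softmax(model(view_a), dim=-1)
    with torch.no_grad():  # treat the second prediction as a fixed target
        probs_b = torch.softmax(model(view_b), dim=-1)
    # Penalise disagreement between the two predictions.
    return F.mse_loss(probs_a, probs_b)
```

In a semi-supervised setup, this term would be added with a weighting factor to the supervised classification loss computed on the labeled images.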