K-Diag: Knowledge-enhanced Disease Diagnosis in Radiographic Imaging
- URL: http://arxiv.org/abs/2302.11557v1
- Date: Wed, 22 Feb 2023 18:53:57 GMT
- Title: K-Diag: Knowledge-enhanced Disease Diagnosis in Radiographic Imaging
- Authors: Chaoyi Wu, Xiaoman Zhang, Yanfeng Wang, Ya Zhang, Weidi Xie
- Abstract summary: We propose a knowledge-enhanced framework that enables training visual representations with the guidance of medical domain knowledge.
First, to explicitly incorporate experts' knowledge, we propose to learn a neural representation for the medical knowledge graph.
Second, while training the visual encoder, we keep the parameters of the knowledge encoder frozen and propose to learn a set of prompt vectors for efficient adaptation.
- Score: 40.52487429030841
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we consider the problem of disease diagnosis. Unlike the
conventional learning paradigm that treats labels independently, we propose a
knowledge-enhanced framework that enables training visual representations with
the guidance of medical domain knowledge. In particular, we make the following
contributions: First, to explicitly incorporate experts' knowledge, we propose
to learn a neural representation for the medical knowledge graph via
contrastive learning, implicitly establishing relations between different
medical concepts. Second, while training the visual encoder, we keep the
parameters of the knowledge encoder frozen and propose to learn a set of prompt
vectors for efficient adaptation. Third, we adopt a Transformer-based
disease-query module for cross-modal fusion, which naturally enables
explainable diagnosis results via cross attention. To validate the
effectiveness of our proposed framework, we conduct thorough experiments on
three X-ray imaging datasets across different anatomical structures, showing
that our model is able to exploit the implicit relations between
diseases/findings. This benefits the commonly encountered problems of
long-tailed and zero-shot recognition in the medical domain, which conventional
methods either struggle with or fail at entirely.
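The disease-query mechanism described above can be illustrated with a minimal NumPy sketch of a single cross-attention step (names and dimensions are illustrative assumptions, not the paper's implementation): each disease embedding from the knowledge encoder acts as a query over the visual patch features, and the resulting attention map is what makes the diagnosis inspectable.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def disease_query_attention(disease_embeddings, patch_features):
    """One cross-attention step: each disease query attends over image patches.

    disease_embeddings: (num_diseases, dim) -- knowledge-encoder outputs used as queries
    patch_features:     (num_patches, dim)  -- visual-encoder outputs (keys/values)
    Returns per-disease fused features and the attention map that can be
    visualised for explainability.
    """
    dim = disease_embeddings.shape[1]
    scores = disease_embeddings @ patch_features.T / np.sqrt(dim)  # (D, P)
    attn = softmax(scores, axis=-1)                                # rows sum to 1
    fused = attn @ patch_features                                  # (D, dim)
    return fused, attn

# toy example: 3 diseases, 16 image patches, 8-dim embeddings
rng = np.random.default_rng(0)
queries = rng.normal(size=(3, 8))
patches = rng.normal(size=(16, 8))
fused, attn = disease_query_attention(queries, patches)
print(fused.shape, attn.shape)  # (3, 8) (3, 16)
```

Each row of `attn` is a distribution over patches for one disease, so high-weight patches can be overlaid on the radiograph as a rough explanation of that prediction.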
Related papers
- Aligning Human Knowledge with Visual Concepts Towards Explainable Medical Image Classification [8.382606243533942]
We introduce a simple yet effective framework, Explicd, towards Explainable language-informed criteria-based diagnosis.
By leveraging a pretrained vision-language model, Explicd injects these criteria into the embedding space as knowledge anchors.
The final diagnostic outcome is determined based on the similarity scores between the encoded visual concepts and the textual criteria embeddings.
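As a rough illustration of that similarity-based decision step (not Explicd's actual code; the one-criterion-per-concept pairing below is an assumption made for this sketch), the final classification could look like:

```python
import numpy as np

def diagnose_by_similarity(visual_concepts, criteria_embeddings):
    """Pick the class whose textual criteria best match the encoded visual concepts.

    visual_concepts:     (num_concepts, dim) encoded visual concept features
    criteria_embeddings: (num_classes, num_concepts, dim) -- one criterion text
                         embedding per (class, concept) pair, an assumption
                         made for this sketch
    Returns per-class scores and the index of the predicted class.
    """
    v = visual_concepts / np.linalg.norm(visual_concepts, axis=-1, keepdims=True)
    c = criteria_embeddings / np.linalg.norm(criteria_embeddings, axis=-1, keepdims=True)
    sims = np.einsum('kcd,cd->kc', c, v)   # cosine similarity per (class, concept)
    scores = sims.mean(axis=1)             # average over concepts
    return scores, int(scores.argmax())

# toy check: class 0's criteria exactly match the visual concepts, so it should win
concepts = np.eye(4, 8)                     # 4 concepts, 8-dim
criteria = np.stack([concepts, -concepts])  # class 0 matches, class 1 anti-matches
scores, pred = diagnose_by_similarity(concepts, criteria)
print(pred)  # 0
```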
arXiv Detail & Related papers (2024-06-08T23:23:28Z) - Knowledge-enhanced Visual-Language Pretraining for Computational Pathology [68.6831438330526]
We consider the problem of visual representation learning for computational pathology, by exploiting large-scale image-text pairs gathered from public resources.
We curate a pathology knowledge tree that consists of 50,470 informative attributes for 4,718 diseases requiring pathology diagnosis from 32 human tissues.
arXiv Detail & Related papers (2024-04-15T17:11:25Z) - MLIP: Enhancing Medical Visual Representation with Divergence Encoder
and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
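Image-text contrastive pretraining of the kind MLIP builds on can be sketched as a symmetric InfoNCE loss (a generic CLIP-style formulation in NumPy, not MLIP's code): paired image/text embeddings are positives, everything else in the batch is a negative.

```python
import numpy as np

def info_nce_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric image-text contrastive loss; row i of each matrix is a pair.

    image_emb, text_emb: (batch, dim)
    """
    i = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = i @ t.T / temperature          # (batch, batch), positives on diagonal
    idx = np.arange(logits.shape[0])

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)    # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()         # -log p(correct pairing)

    # average the image->text and text->image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 32))
aligned = info_nce_loss(x, x)                          # perfectly paired embeddings
mismatched = info_nce_loss(x, rng.normal(size=(8, 32)))
print(aligned < mismatched)  # True
```

Knowledge-guided variants replace or reweight the naive in-batch negatives using domain structure, but the core pull-together/push-apart objective is the same.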
arXiv Detail & Related papers (2024-02-03T05:48:50Z) - MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level
Image-Concept Alignment [4.861768967055006]
We propose a multi-modal explainable disease diagnosis framework that meticulously aligns medical images and clinical-related concepts semantically at multiple strata.
Our method, while preserving model interpretability, attains high performance and label efficiency for concept detection and disease diagnosis.
arXiv Detail & Related papers (2024-01-16T17:45:01Z) - Deep Reinforcement Learning Framework for Thoracic Diseases
Classification via Prior Knowledge Guidance [49.87607548975686]
The scarcity of labeled data for related diseases poses a huge challenge to an accurate diagnosis.
We propose a novel deep reinforcement learning framework, which introduces prior knowledge to direct the learning of diagnostic agents.
Our approach's performance was demonstrated on the well-known NIH ChestX-ray14 and CheXpert datasets.
arXiv Detail & Related papers (2023-06-02T01:46:31Z) - Deep grading for MRI-based differential diagnosis of Alzheimer's disease
and Frontotemporal dementia [0.0]
Alzheimer's disease and Frontotemporal dementia are common forms of neurodegenerative dementia.
Current structural imaging methods mainly focus on the detection of each disease but rarely on their differential diagnosis.
We propose a deep learning based approach for both problems of disease detection and differential diagnosis.
arXiv Detail & Related papers (2022-11-25T13:25:18Z) - Variational Knowledge Distillation for Disease Classification in Chest
X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - Cross Chest Graph for Disease Diagnosis with Structural Relational
Reasoning [2.7148274921314615]
Locating lesions is important in the computer-aided diagnosis of X-ray images.
General weakly-supervised methods have failed to consider the characteristics of X-ray images.
We propose the Cross-chest Graph (CCG), which improves the performance of automatic lesion detection.
arXiv Detail & Related papers (2021-01-22T08:24:04Z) - Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete
Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
arXiv Detail & Related papers (2020-02-26T17:10:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.