Prior-RadGraphFormer: A Prior-Knowledge-Enhanced Transformer for
Generating Radiology Graphs from X-Rays
- URL: http://arxiv.org/abs/2303.13818v3
- Date: Mon, 18 Sep 2023 09:07:00 GMT
- Title: Prior-RadGraphFormer: A Prior-Knowledge-Enhanced Transformer for
Generating Radiology Graphs from X-Rays
- Authors: Yiheng Xiong, Jingsong Liu, Kamilia Zaripova, Sahand Sharifzadeh,
Matthias Keicher, Nassir Navab
- Abstract summary: We propose Prior-RadGraphFormer to generate radiology graphs directly from chest X-ray (CXR) images.
The PKG models the statistical relationship between radiology entities, including anatomical structures and medical observations.
- Score: 38.37348230885927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The extraction of structured clinical information from free-text radiology
reports in the form of radiology graphs has been demonstrated to be a valuable
approach for evaluating the clinical correctness of report-generation methods.
However, the direct generation of radiology graphs from chest X-ray (CXR)
images has not been attempted. To address this gap, we propose a novel approach
called Prior-RadGraphFormer that utilizes a transformer model with prior
knowledge in the form of a probabilistic knowledge graph (PKG) to generate
radiology graphs directly from CXR images. The PKG models the statistical
relationship between radiology entities, including anatomical structures and
medical observations. This additional contextual information enhances the
accuracy of entity and relation extraction. The generated radiology graphs can
be applied to various downstream tasks, such as free-text or structured report
generation and multi-label classification of pathologies. Our approach
represents a promising method for generating radiology graphs directly from CXR
images, and has significant potential for improving medical image analysis and
clinical decision-making.
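To make the core idea concrete, here is a minimal sketch, not the authors' implementation, of how a probabilistic knowledge graph prior could bias a transformer's pairwise relation scores. The prior tensor `pkg_prior`, the fusion weight `alpha`, and the head architecture are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class PriorRelationHead(nn.Module):
    """Score relations between detected entities, biased by a PKG prior.

    Hypothetical sketch: pkg_prior[i, j, r] holds the corpus-level
    probability that entity class i and entity class j are linked by
    relation r (e.g., estimated from RadGraph annotation statistics).
    """

    def __init__(self, hidden_dim: int, num_relations: int,
                 pkg_prior: torch.Tensor, alpha: float = 0.5):
        super().__init__()
        self.pair_mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_relations),
        )
        # The log-prior acts as an additive bias on the relation logits.
        self.register_buffer("log_prior", (pkg_prior + 1e-6).log())
        self.alpha = alpha

    def forward(self, entity_feats: torch.Tensor,
                entity_classes: torch.Tensor) -> torch.Tensor:
        # entity_feats: (N, hidden_dim) decoder outputs for N entities
        # entity_classes: (N,) predicted entity class indices
        n = entity_feats.size(0)
        # Build all ordered (subject, object) feature pairs: (N, N, 2*hidden).
        pairs = torch.cat(
            [entity_feats.unsqueeze(1).expand(n, n, -1),
             entity_feats.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.pair_mlp(pairs)  # (N, N, num_relations)
        # Look up the prior for each predicted class pair and fuse.
        prior = self.log_prior[entity_classes][:, entity_classes]
        return logits + self.alpha * prior
```

Adding the log-prior as an additive bias on the logits is only one plausible fusion strategy; the actual model may integrate the PKG differently, for example through learned graph embeddings.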
Related papers
- FG-CXR: A Radiologist-Aligned Gaze Dataset for Enhancing Interpretability in Chest X-Ray Report Generation [9.374812942790953]
We introduce the Fine-Grained CXR dataset, which provides fine-grained paired information between the captions generated by radiologists and the corresponding gaze attention heatmaps for each anatomy.
Our analysis reveals that simply applying black-box image captioning methods to generate reports cannot adequately explain which information in CXR is utilized.
We propose a novel explainable radiologist's attention generator network (Gen-XAI) that mimics the diagnosis process of radiologists, explicitly constraining its output to align closely with both the radiologist's gaze attention and the transcript.
arXiv Detail & Related papers (2024-11-23T02:22:40Z)
- Uncovering Knowledge Gaps in Radiology Report Generation Models through Knowledge Graphs [18.025481751074214]
We introduce a system, named ReXKG, which extracts structured information from processed reports to construct a radiology knowledge graph.
We conduct an in-depth comparative analysis of AI-generated and human-written radiology reports, assessing the performance of both specialist and generalist models.
arXiv Detail & Related papers (2024-08-26T16:28:56Z)
- Large Model driven Radiology Report Generation with Clinical Quality Reinforcement Learning [16.849933628738277]
Radiology report generation (RRG) has attracted significant attention due to its potential to reduce the workload of radiologists.
This paper introduces a novel RRG method, LM-RRG, that integrates large models (LMs) with clinical quality reinforcement learning.
Experiments on the MIMIC-CXR and IU-Xray datasets demonstrate the superiority of our method over the state of the art.
arXiv Detail & Related papers (2024-03-11T13:47:11Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest X-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information (a minimal sketch of this kind of fusion appears after this list).
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation [92.73584302508907]
We propose a knowledge graph with dynamic structure and nodes to facilitate medical report generation with contrastive learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its own updated graph before being fed into the decoder module for report generation.
arXiv Detail & Related papers (2023-03-18T03:53:43Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- Using Multi-modal Data for Improving Generalizability and Explainability of Disease Classification in Radiology [0.0]
Traditional datasets for radiological diagnosis tend to provide only the radiology image alongside the radiology report.
This paper utilizes the recently published Eye-Gaze dataset to perform an exhaustive study of the impact of multi-modal data on the performance and explainability of deep learning (DL) classification.
We find that the best classification performance of X-ray images is achieved with a combination of radiology report free-text and radiology image, with the eye-gaze data providing no performance boost.
arXiv Detail & Related papers (2022-07-29T16:49:05Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
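As referenced above, the following is a minimal sketch of the multi-modal fusion described in "Radiology Report Generation Using Transformers Conditioned with Non-imaging Data": CNN visual features and demographic embeddings are concatenated into a single encoder sequence for a transformer encoder-decoder. All module names, dimensions, and the tokenization of demographic fields are illustrative assumptions, not that paper's published code.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class MultiModalReportGenerator(nn.Module):
    """Hypothetical sketch: fuse CXR visual features with demographic
    embeddings in a single transformer encoder-decoder."""

    def __init__(self, text_vocab: int, demo_vocab: int, d_model: int = 512):
        super().__init__()
        # CNN backbone extracts a spatial grid of visual features.
        backbone = models.resnet50(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])
        self.visual_proj = nn.Conv2d(2048, d_model, kernel_size=1)
        # Demographic fields (e.g., age bucket, sex) as learned embeddings.
        self.demo_embed = nn.Embedding(demo_vocab, d_model)
        self.report_embed = nn.Embedding(text_vocab, d_model)
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.out = nn.Linear(d_model, text_vocab)

    def forward(self, cxr, demo_tokens, report_tokens):
        # cxr: (B, 3, H, W); demo_tokens: (B, Ld); report_tokens: (B, Lt)
        feats = self.visual_proj(self.cnn(cxr))       # (B, d_model, h, w)
        feats = feats.flatten(2).transpose(1, 2)      # (B, h*w, d_model)
        demo = self.demo_embed(demo_tokens)           # (B, Ld, d_model)
        src = torch.cat([feats, demo], dim=1)         # joint encoder input
        tgt = self.report_embed(report_tokens)
        # Causal mask so each report token attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(
            tgt.size(1)).to(tgt.device)
        dec = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(dec)                          # next-token logits
```

Concatenating the visual and demographic token streams into one encoder input is the simplest fusion choice; the decoder's cross-attention then attends jointly over image regions and patient context.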
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.