When Radiology Report Generation Meets Knowledge Graph
- URL: http://arxiv.org/abs/2002.08277v1
- Date: Wed, 19 Feb 2020 16:39:42 GMT
- Title: When Radiology Report Generation Meets Knowledge Graph
- Authors: Yixiao Zhang, Xiaosong Wang, Ziyue Xu, Qihang Yu, Alan Yuille, Daguang Xu
- Abstract summary: The accuracy of positive disease keyword mentions is critical in radiology image reporting.
The evaluation of reporting quality should focus more on matching the disease keywords and their associated attributes.
We propose a new evaluation metric for radiology image reporting with the assistance of the same composed graph.
- Score: 17.59749125131158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic radiology report generation has become an attractive research problem in computer-aided diagnosis in recent years, promising to alleviate the workload of doctors. Deep learning techniques for natural image captioning have been successfully adapted to generating radiology reports. However, radiology image reporting differs from the natural image captioning task in two aspects: 1) the accuracy of positive disease keyword mentions is critical in radiology image reporting, whereas every single word in a natural image caption carries roughly equal importance; 2) the evaluation of reporting quality should focus more on matching the disease keywords and their associated attributes than on counting N-gram occurrences. Based on these concerns, we propose to utilize a pre-constructed graph embedding module (modeled with a graph convolutional neural network) over multiple disease findings to assist the generation of reports in this work. The incorporation of the knowledge graph allows for dedicated feature learning for each disease finding and the modeling of relationships between them. In addition, we propose a new evaluation metric for radiology image reporting with the assistance of the same composed graph.
Experimental results demonstrate the superior performance of the methods
integrated with the proposed graph embedding module on a publicly accessible
dataset (IU-RR) of chest radiographs compared with previous approaches using
both the conventional evaluation metrics commonly adopted for image captioning
and our proposed ones.
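The graph embedding module described in the abstract is modeled with a graph convolutional network over disease-finding nodes. As a rough illustration only (not the paper's implementation; the finding names, adjacency matrix, and feature dimensions below are invented for the sketch), a single GCN propagation step over per-finding features might look like:

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Hypothetical graph: 4 disease findings with hand-picked co-occurrence edges.
findings = ["cardiomegaly", "edema", "effusion", "atelectasis"]
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 8))   # stand-in for per-finding image features
W = rng.normal(size=(8, 8))   # learnable layer weights

H_out = gcn_layer(H, A, W)
print(H_out.shape)  # (4, 8)
```

Each node's updated embedding mixes its own features with those of related findings, which is what lets the module learn dedicated per-finding features while still modeling their relationships.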
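The proposed evaluation metric scores reports by matching disease keywords and their attributes rather than counting N-grams. A toy keyword-level F1 (a deliberate simplification, not the paper's graph-assisted metric; the keyword list and reports below are invented) conveys the basic idea:

```python
def keyword_f1(reference: str, candidate: str, keywords: list[str]) -> float:
    """F1 over disease-keyword mentions, ignoring all other words."""
    ref = {k for k in keywords if k in reference.lower()}
    cand = {k for k in keywords if k in candidate.lower()}
    tp = len(ref & cand)
    precision = tp / len(cand) if cand else 0.0
    recall = tp / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

keywords = ["cardiomegaly", "effusion", "edema"]
ref = "Mild cardiomegaly. No pleural effusion."
cand = "Cardiomegaly is present. Small pleural effusion noted."
print(keyword_f1(ref, cand, keywords))  # 1.0
```

Note that this naive substring version scores the example pair 1.0 even though the reference negates the effusion; handling negation and other attributes attached to each keyword is exactly what the paper's composed-graph metric is meant to address.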
Related papers
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- Reading Radiology Imaging Like The Radiologist [3.218449686637963]
We design a factual consistency captioning generator to generate more accurate and factually consistent disease descriptions.
Our framework can find most similar reports for a given disease from the CXR database by retrieving a disease-oriented mask.
arXiv Detail & Related papers (2023-07-12T05:36:47Z)
- ORGAN: Observation-Guided Radiology Report Generation via Tree Reasoning [9.316999438459794]
We propose an observation-guided radiology report generation framework (ORGAN).
It first produces an observation plan and then feeds both the plan and radiographs for report generation.
Our framework outperforms previous state-of-the-art methods regarding text quality and clinical efficacy.
arXiv Detail & Related papers (2023-06-10T15:36:04Z)
- Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation [92.73584302508907]
We propose a knowledge graph with Dynamic structure and nodes to facilitate medical report generation with Contrastive Learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its very own updated graph before being fed into the decoder module for report generation.
arXiv Detail & Related papers (2023-03-18T03:53:43Z)
- Radiology Report Generation with a Learned Knowledge Base and Multi-modal Alignment [27.111857943935725]
We present an automatic, multi-modal approach for report generation from chest x-ray.
Our approach features two distinct modules: (i) Learned knowledge base and (ii) Multi-modal alignment.
With the aid of both modules, our approach clearly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-12-30T10:43:56Z)
- Word Graph Guided Summarization for Radiology Findings [24.790502861602075]
We propose a novel method for automatic impression generation, where a word graph is constructed from the findings to record the critical words and their relations.
A Word Graph guided Summarization model (WGSum) is designed to generate impressions with the help of the word graph.
Experimental results on two datasets, OpenI and MIMIC-CXR, confirm the validity and effectiveness of our proposed approach.
arXiv Detail & Related papers (2021-12-18T13:20:18Z)
- Variational Topic Inference for Chest X-Ray Report Generation [102.04931207504173]
Report generation for medical imaging promises to reduce workload and assist diagnosis in clinical practice.
Recent work has shown that deep learning models can successfully caption natural images.
We propose variational topic inference for automatic report generation.
arXiv Detail & Related papers (2021-07-15T13:34:38Z)
- Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
- Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
arXiv Detail & Related papers (2020-02-26T17:10:48Z)
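The DD-GCN entry above builds a graph over feature maps with a dynamic adjacency matrix for correlation learning. One plausible way to form such a matrix (a hedged sketch under our own assumptions, not the paper's formulation; the node count and feature size are invented) is a row-softmax over pairwise cosine similarity of node features:

```python
import numpy as np

def dynamic_adjacency(F):
    """Row-stochastic adjacency from cosine similarity of node features."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)  # unit-normalize rows
    S = Fn @ Fn.T                                      # pairwise cosine similarity
    E = np.exp(S - S.max(axis=1, keepdims=True))       # numerically stable softmax
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
F = rng.normal(size=(5, 16))   # stand-in features for 5 disease nodes
A = dynamic_adjacency(F)
print(A.shape)                          # (5, 5)
print(np.allclose(A.sum(axis=1), 1.0))  # True: each row is a distribution
```

Because the adjacency is recomputed from the current features, edge weights adapt per image, which is the key difference from the fixed, pre-constructed graphs used in the other entries.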
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.