Attributed Abnormality Graph Embedding for Clinically Accurate X-Ray
Report Generation
- URL: http://arxiv.org/abs/2207.01208v2
- Date: Tue, 5 Jul 2022 06:47:49 GMT
- Title: Attributed Abnormality Graph Embedding for Clinically Accurate X-Ray
Report Generation
- Authors: Sixing Yan, William K. Cheung, Keith Chiu, Terence M. Tong, Charles K.
Cheung, Simon See
- Abstract summary: We introduce a novel fine-grained knowledge graph structure called an attributed abnormality graph (ATAG).
The ATAG consists of interconnected abnormality nodes and attribute nodes, allowing it to better capture the abnormality details.
We show that the proposed ATAG-based deep model outperforms the SOTA methods by a large margin and can improve the clinical accuracy of the generated reports.
- Score: 7.118069629513661
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic generation of medical reports from X-ray images can assist
radiologists in performing the time-consuming yet important reporting task.
Generating clinically accurate reports, however, remains challenging.
Modeling the underlying abnormalities with a knowledge graph approach has
been found promising for enhancing clinical accuracy. In this paper, we
introduce a novel fine-grained knowledge graph structure called an attributed
abnormality graph (ATAG). The ATAG consists of interconnected abnormality nodes
and attribute nodes, allowing it to better capture the abnormality details. In
contrast to existing methods, where the abnormality graph is constructed
manually, we propose a methodology to automatically construct the fine-grained
graph structure based on annotations, medical reports in X-ray datasets, and
the RadLex radiology lexicon. We then learn the ATAG embedding using a deep
model with an encoder-decoder architecture for the report generation. In
particular, graph attention networks are explored to encode the relationships
among the abnormalities and their attributes. A gating mechanism is adopted and
integrated with various decoders for report generation. We carry out extensive
experiments on benchmark datasets and show that the proposed
ATAG-based deep model outperforms the SOTA methods by a large margin and can
improve the clinical accuracy of the generated reports.
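As a rough illustration of the two components named above, the following sketch (not the authors' released code; node counts, dimensions, pooling, and module names are assumptions) shows a single graph-attention layer over abnormality/attribute nodes and a gate that fuses the pooled graph embedding with a visual feature before decoding:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head GAT-style layer over abnormality and attribute nodes."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, node_feats, adj):
        # node_feats: (N, in_dim); adj: (N, N), 1 where an abnormality node is
        # linked to another abnormality or to one of its attribute nodes.
        h = self.proj(node_feats)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)          # attention over neighbours
        return F.elu(alpha @ h)                        # updated node embeddings

class GatedFusion(nn.Module):
    """Gate that mixes the pooled graph embedding with a visual feature."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, graph_emb, visual_feat):
        g = torch.sigmoid(self.gate(torch.cat([graph_emb, visual_feat], dim=-1)))
        return g * graph_emb + (1 - g) * visual_feat   # context passed to the decoder

# Toy usage: 6 nodes with 64-d features and one pooled image feature.
nodes = torch.randn(6, 64)
adj = (torch.rand(6, 6) > 0.5).float()
adj.fill_diagonal_(1)                                  # self-loops keep every row non-empty
graph_emb = GraphAttentionLayer(64, 64)(nodes, adj).mean(dim=0)
context = GatedFusion(64)(graph_emb, torch.randn(64))
print(context.shape)                                   # torch.Size([64])
```

In practice the fused context would condition a report decoder (e.g., an LSTM or Transformer) at each generation step.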
Related papers
- Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation [10.46031380503486]
We introduce a novel method, Structural Entities extraction and patient indications Incorporation (SEI), for chest X-ray report generation.
We employ a structural entities extraction (SEE) approach to eliminate presentation-style vocabulary in reports.
We propose a cross-modal fusion network to integrate information from X-ray images, similar historical cases, and patient-specific indications.
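As an illustration only (not the SEI architecture itself; token counts, dimensions, and the attention-based design are assumptions), a cross-modal fusion step of this kind could let image tokens attend over encoded historical cases and patient indications:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Image tokens attend over historical-case and indication embeddings."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tokens, case_tokens, indication_tokens):
        # image_tokens: (B, Ni, D); case_tokens: (B, Nc, D); indication_tokens: (B, Nt, D)
        context = torch.cat([case_tokens, indication_tokens], dim=1)
        fused, _ = self.attn(query=image_tokens, key=context, value=context)
        return self.norm(image_tokens + fused)          # residual fusion for the decoder

fusion = CrossModalFusion()
out = fusion(torch.randn(2, 49, 256), torch.randn(2, 10, 256), torch.randn(2, 5, 256))
print(out.shape)                                        # torch.Size([2, 49, 256])
```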
arXiv Detail & Related papers (2024-05-23T01:29:47Z)
- ADA-GAD: Anomaly-Denoised Autoencoders for Graph Anomaly Detection [84.0718034981805]
We introduce a novel framework called Anomaly-Denoised Autoencoders for Graph Anomaly Detection (ADA-GAD).
In the first stage, we design a learning-free anomaly-denoised augmentation method to generate graphs with reduced anomaly levels.
In the next stage, the decoders are retrained for detection on the original graph.
arXiv Detail & Related papers (2023-12-22T09:02:01Z)
- Dynamic Multi-Domain Knowledge Networks for Chest X-ray Report Generation [0.5939858158928474]
We propose a Dynamic Multi-Domain Knowledge (DMDK) network for radiology diagnostic report generation.
The DMDK network consists of four modules: a Chest Feature Extractor (CFE), a Dynamic Knowledge Extractor (DKE), a Specific Knowledge Extractor (SKE), and a Multi-knowledge Integrator (MKI).
We performed extensive experiments on two widely used datasets, IU X-Ray and MIMIC-CXR.
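A hypothetical skeleton of how four such modules might be wired together is sketched below; the internals are placeholders and do not reflect the actual DMDK implementation:

```python
import torch
import torch.nn as nn

class DMDKSkeleton(nn.Module):
    """Hypothetical wiring of the four named modules; internals are placeholders."""
    def __init__(self, dim=256):
        super().__init__()
        self.cfe = nn.Sequential(nn.Conv2d(1, dim, 7, stride=4), nn.AdaptiveAvgPool2d(1))
        self.dke = nn.Linear(dim, dim)   # dynamic (per-image) knowledge
        self.ske = nn.Linear(dim, dim)   # specific (disease-topic) knowledge
        self.mki = nn.Linear(3 * dim, dim)

    def forward(self, xray, dynamic_know, specific_know):
        v = self.cfe(xray).flatten(1)                    # chest visual feature
        k1 = self.dke(dynamic_know)
        k2 = self.ske(specific_know)
        return self.mki(torch.cat([v, k1, k2], dim=-1))  # integrated feature for decoding

model = DMDKSkeleton()
out = model(torch.randn(2, 1, 224, 224), torch.randn(2, 256), torch.randn(2, 256))
print(out.shape)                                         # torch.Size([2, 256])
```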
arXiv Detail & Related papers (2023-10-08T11:20:02Z)
- Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation [92.73584302508907]
We propose a knowledge graph with Dynamic structure and nodes to facilitate medical report generation with Contrastive Learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its very own updated graph before being fed into the decoder module for report generation.
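The contrastive-learning side of such an approach is commonly implemented with a symmetric InfoNCE objective over paired image and report embeddings; the sketch below shows that generic loss (an assumption, not the paper's exact objective):

```python
import torch
import torch.nn.functional as F

def info_nce(image_emb, report_emb, temperature=0.07):
    """Symmetric InfoNCE loss pairing each image with its own report."""
    image_emb = F.normalize(image_emb, dim=-1)
    report_emb = F.normalize(report_emb, dim=-1)
    logits = image_emb @ report_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(image_emb.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```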
arXiv Detail & Related papers (2023-03-18T03:53:43Z)
- Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation [116.87918100031153]
We propose a Cross-modal clinical Graph Transformer (CGT) for ophthalmic report generation (ORG).
CGT injects clinical relation triples into the visual features as prior knowledge to drive the decoding procedure.
Experiments on the large-scale FFA-IR benchmark demonstrate that the proposed CGT is able to outperform previous benchmark methods.
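One generic way to inject relation triples into visual features, sketched below purely as an assumption (not the CGT model itself), is to embed each (subject, relation, object) triple and let the visual tokens cross-attend to the triple embeddings:

```python
import torch
import torch.nn as nn

class TriplePriorInjection(nn.Module):
    """Visual tokens attend over embedded (subject, relation, object) triples."""
    def __init__(self, vocab_size=1000, dim=256, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, visual_tokens, triples):
        # visual_tokens: (B, Nv, D); triples: (B, Nt, 3) integer ids
        triple_emb = self.embed(triples).mean(dim=2)    # (B, Nt, D): average s/r/o embeddings
        injected, _ = self.attn(visual_tokens, triple_emb, triple_emb)
        return visual_tokens + injected                  # knowledge-enriched features for decoding

layer = TriplePriorInjection()
out = layer(torch.randn(2, 49, 256), torch.randint(0, 1000, (2, 12, 3)))
print(out.shape)                                         # torch.Size([2, 49, 256])
```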
arXiv Detail & Related papers (2022-06-04T13:16:30Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
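As a generic illustration of K-nearest neighbor smoothing (not the paper's exact KNNS procedure), predicted probabilities can be blended with those of each sample's nearest neighbours in feature space:

```python
import torch

def knn_smooth(features, probs, k=5, alpha=0.5):
    """Blend each sample's predicted probabilities with those of its k nearest
    neighbours in feature space (a generic form of KNN smoothing)."""
    dists = torch.cdist(features, features)             # (N, N) pairwise distances
    dists.fill_diagonal_(float("inf"))                  # exclude the sample itself
    idx = dists.topk(k, largest=False).indices          # (N, k) neighbour indices
    neighbour_mean = probs[idx].mean(dim=1)             # (N, C) averaged neighbour predictions
    return alpha * probs + (1 - alpha) * neighbour_mean

smoothed = knn_smooth(torch.randn(32, 128), torch.rand(32, 14))
print(smoothed.shape)                                    # torch.Size([32, 14])
```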
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays [6.686095511538683]
This work focuses on reporting abnormal findings on radiology images.
We propose a method to identify abnormal findings from the reports and to group them using unsupervised clustering and minimal rules.
We demonstrate that our method is able to retrieve abnormal findings and outperforms existing generation models on both clinical correctness and text generation metrics.
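A minimal stand-in for the grouping step, assuming finding sentences have already been embedded by some encoder (the encoder, cluster count, and data below are placeholders):

```python
import numpy as np
from sklearn.cluster import KMeans

# Random vectors stand in for sentence embeddings of abnormal findings.
finding_embeddings = np.random.randn(200, 128)

# Group findings into a small number of clusters, roughly one per abnormality topic.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(finding_embeddings)
print(cluster_ids[:10])                                  # cluster label per finding sentence
```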
arXiv Detail & Related papers (2020-10-06T04:18:18Z)
- Chest X-ray Report Generation through Fine-Grained Label Learning [46.352966049776875]
We present a domain-aware automatic chest X-ray radiology report generation algorithm that learns fine-grained descriptions of findings from images.
We also develop an automatic labeling algorithm for assigning such descriptors to images and build a novel deep learning network that recognizes both coarse and fine-grained descriptions of findings.
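A simple way to picture coarse plus fine-grained recognition is a shared image feature feeding two label heads; the sketch below is illustrative only, with assumed feature and label dimensions:

```python
import torch
import torch.nn as nn

class CoarseFineClassifier(nn.Module):
    """Shared image feature with separate coarse and fine-grained label heads."""
    def __init__(self, feat_dim=512, n_coarse=14, n_fine=100):
        super().__init__()
        self.coarse_head = nn.Linear(feat_dim, n_coarse)  # e.g. finding categories
        self.fine_head = nn.Linear(feat_dim, n_fine)      # e.g. finding + location/severity descriptors

    def forward(self, image_feat):
        return self.coarse_head(image_feat), self.fine_head(image_feat)

coarse_logits, fine_logits = CoarseFineClassifier()(torch.randn(4, 512))
print(coarse_logits.shape, fine_logits.shape)             # torch.Size([4, 14]) torch.Size([4, 100])
```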
arXiv Detail & Related papers (2020-07-27T19:50:56Z)
- Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
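A generic version of that idea, sketched below under assumptions about dimensions and the similarity measure (not the DD-GCN implementation), computes the adjacency from node-feature similarity at each forward pass:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGCNLayer(nn.Module):
    """GCN-style layer whose adjacency is computed on the fly from node similarity."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats):
        # node_feats: (N, D), e.g. one pooled feature-map vector per disease class.
        sim = F.normalize(node_feats, dim=-1) @ F.normalize(node_feats, dim=-1).t()
        adj = torch.softmax(sim, dim=-1)                 # dynamic, input-dependent adjacency
        return F.relu(adj @ self.weight(node_feats))     # message passing with learned projection

out = DynamicGCNLayer(256, 256)(torch.randn(14, 256))
print(out.shape)                                         # torch.Size([14, 256])
```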
arXiv Detail & Related papers (2020-02-26T17:10:48Z)