Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report
Generation
- URL: http://arxiv.org/abs/2303.10323v1
- Date: Sat, 18 Mar 2023 03:53:43 GMT
- Title: Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report
Generation
- Authors: Mingjie Li, Bingqian Lin, Zicong Chen, Haokun Lin, Xiaodan Liang,
Xiaojun Chang
- Abstract summary: We propose a knowledge graph with Dynamic structure and nodes to facilitate medical report generation with Contrastive Learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its own updated graph before being fed into the decoder module for report generation.
- Score: 92.73584302508907
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Automatic radiology reporting has great clinical potential to relieve
radiologists from heavy workloads and improve diagnosis interpretation.
Recently, researchers have enhanced data-driven neural networks with medical
knowledge graphs to eliminate the severe visual and textual bias in this task.
The structures of such graphs are built from the clinical dependencies among
disease topic tags drawn from general knowledge and usually do not update
during training. Consequently, these fixed graphs cannot guarantee the most
appropriate scope of knowledge and thus limit their effectiveness.
To address this limitation, we propose a knowledge graph with Dynamic structure
and nodes to facilitate medical report generation with Contrastive Learning,
named DCL. In detail, the fundamental structure of our graph is pre-constructed
from general knowledge. Then we explore specific knowledge extracted from the
retrieved reports to add additional nodes or redefine their relations in a
bottom-up manner. Each image feature is integrated with its own updated graph
before being fed into the decoder module for report generation. Finally,
this paper introduces Image-Report Contrastive and Image-Report Matching losses
to better represent visual features and textual information. Evaluated on
IU-Xray and MIMIC-CXR datasets, our DCL outperforms previous state-of-the-art
models on these two benchmarks.
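The Image-Report Contrastive and Image-Report Matching losses named above follow the pattern of standard vision-language alignment objectives. As a minimal sketch (not the authors' implementation), the snippet below shows one common PyTorch formulation: a symmetric InfoNCE loss for contrastive alignment and a binary matched/unmatched head for matching; all tensor names, the temperature value, and the matching head are assumptions.

```python
import torch
import torch.nn.functional as F

def image_report_contrastive(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/report embeddings.
    img_emb, txt_emb: (B, D) pooled global features (illustrative names)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    loss_i2t = F.cross_entropy(logits, targets)            # image -> report direction
    loss_t2i = F.cross_entropy(logits.t(), targets)        # report -> image direction
    return 0.5 * (loss_i2t + loss_t2i)

def image_report_matching(fused_emb, match_head, is_matched):
    """Binary matched/unmatched classification on fused image-report features.
    fused_emb: (N, D); match_head: e.g. torch.nn.Linear(D, 2);
    is_matched: (N,) long tensor of 0/1 labels."""
    return F.cross_entropy(match_head(fused_emb), is_matched)
```

In practice such terms are added to the report-generation objective with weighting factors; the exact fusion module and negative-sampling strategy used by DCL are described in the full paper.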
Related papers
- Learning Generalized Medical Image Representations through Image-Graph Contrastive Pretraining [11.520404630575749]
We develop an Image-Graph Contrastive Learning framework that pairs chest X-rays with structured report knowledge graphs automatically extracted from radiology notes.
Our approach uniquely encodes the disconnected graph components via a relational graph convolution network and transformer attention.
arXiv Detail & Related papers (2024-05-15T12:27:38Z)
- Attributed Abnormality Graph Embedding for Clinically Accurate X-Ray Report Generation [7.118069629513661]
We introduce a novel fine-grained knowledge graph structure called an attributed abnormality graph (ATAG).
The ATAG consists of interconnected abnormality nodes and attribute nodes, allowing it to better capture the abnormality details.
We show that the proposed ATAG-based deep model outperforms the SOTA methods by a large margin and can improve the clinical accuracy of the generated reports.
arXiv Detail & Related papers (2022-07-04T05:32:00Z)
- Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation [116.87918100031153]
We propose a Cross-modal clinical Graph Transformer (CGT) for ophthalmic report generation (ORG).
CGT injects clinical relation triples into the visual features as prior knowledge to drive the decoding procedure.
Experiments on the large-scale FFA-IR benchmark demonstrate that the proposed CGT is able to outperform previous benchmark methods.
arXiv Detail & Related papers (2022-06-04T13:16:30Z)
- Graph Enhanced Contrastive Learning for Radiology Findings Summarization [25.377658879658306]
The impression section of a radiology report summarizes the most prominent observations from the findings.
We propose a unified framework for exploiting both extra knowledge and the original findings.
Key words and their relations can be extracted in an appropriate way to facilitate impression generation.
arXiv Detail & Related papers (2022-04-01T04:39:44Z)
- Exploring and Distilling Posterior and Prior Knowledge for Radiology Report Generation [55.00308939833555]
The Posterior-and-Prior Knowledge Exploring-and-Distilling approach (PPKED) includes three modules: the Posterior Knowledge Explorer (PoKE), the Prior Knowledge Explorer (PrKE), and the Multi-domain Knowledge Distiller (MKD).
PoKE explores the posterior knowledge, which provides explicit abnormal visual regions to alleviate visual data bias.
PrKE explores the prior knowledge from the prior medical knowledge graph (medical knowledge) and prior radiology reports (working experience) to alleviate textual data bias.
arXiv Detail & Related papers (2021-06-13T11:10:02Z)
- Weakly-supervised Graph Meta-learning for Few-shot Node Classification [53.36828125138149]
We propose a new graph meta-learning framework, Graph Hallucination Networks (Meta-GHN).
Based on a new robustness-enhanced episodic training, Meta-GHN is meta-learned to hallucinate clean node representations from weakly-labeled data.
Extensive experiments demonstrate the superiority of Meta-GHN over existing graph meta-learning studies.
arXiv Detail & Related papers (2021-06-12T22:22:10Z)
- Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
- Latent-Graph Learning for Disease Prediction [44.26665239213658]
We show that it is possible to learn a single, optimal graph towards the GCN's downstream task of disease classification.
Unlike commonly employed spectral GCN approaches, our GCN is spatial and inductive, and can thus generalize to previously unseen patients as well.
arXiv Detail & Related papers (2020-03-27T08:18:01Z)
- Dynamic Graph Correlation Learning for Disease Diagnosis with Incomplete Labels [66.57101219176275]
Disease diagnosis on chest X-ray images is a challenging multi-label classification task.
We propose a Disease Diagnosis Graph Convolutional Network (DD-GCN) that presents a novel view of investigating the inter-dependency among different diseases.
Our method is the first to build a graph over the feature maps with a dynamic adjacency matrix for correlation learning.
arXiv Detail & Related papers (2020-02-26T17:10:48Z)
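The DD-GCN entry above, like DCL itself, hinges on an adjacency matrix that is computed dynamically rather than fixed in advance. The snippet below is a generic sketch of that idea (cosine-similarity affinities, softmax row-normalization, one graph-convolution step); it is an assumption-laden illustration, not the DD-GCN or DCL layer itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphConv(nn.Module):
    """One graph-convolution step over an adjacency built on the fly from
    node-feature similarity (illustrative, not the authors' exact layer)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, node_feats):                                 # (B, N, C) node features
        normed = F.normalize(node_feats, dim=-1)
        adj = torch.softmax(normed @ normed.transpose(1, 2), dim=-1)  # (B, N, N), rows sum to 1
        return F.relu(adj @ self.proj(node_feats))                 # aggregate projected neighbor features

# Usage sketch: 20 graph nodes (e.g., disease topics) with 512-d features.
layer = DynamicGraphConv(512, 256)
out = layer(torch.randn(2, 20, 512))                               # -> (2, 20, 256)
```

Because the adjacency is recomputed from the current features, the effective graph changes with every input, which is exactly the property the fixed-graph baselines criticized in the abstract lack.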
This list is automatically generated from the titles and abstracts of the papers on this site.