Exploring and Distilling Posterior and Prior Knowledge for Radiology
Report Generation
- URL: http://arxiv.org/abs/2106.06963v1
- Date: Sun, 13 Jun 2021 11:10:02 GMT
- Title: Exploring and Distilling Posterior and Prior Knowledge for Radiology
Report Generation
- Authors: Fenglin Liu, Xian Wu, Shen Ge, Wei Fan, Yuexian Zou
- Abstract summary: The PPKED includes three modules: Posterior Knowledge Explorer (PoKE), Prior Knowledge Explorer (PrKE) and Multi-domain Knowledge Distiller (MKD)
PoKE explores the posterior knowledge, which provides explicit abnormal visual regions to alleviate visual data bias.
PrKE explores the prior knowledge from the prior medical knowledge graph (medical knowledge) and prior radiology reports (working experience) to alleviate textual data bias.
- Score: 55.00308939833555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatically generating radiology reports can improve current
clinical practice in diagnostic radiology. On one hand, it can relieve
radiologists of the heavy burden of report writing; on the other hand, it can
alert radiologists to abnormalities and help avoid misdiagnoses and missed
diagnoses. Yet this task remains challenging for data-driven neural networks
due to serious visual and textual data biases. To this end, we propose a
Posterior-and-Prior Knowledge Exploring-and-Distilling approach (PPKED) that
imitates the working patterns of radiologists, who first examine the abnormal
regions and assign disease topic tags to them, and then rely on years of
accumulated medical knowledge and working experience to write reports. Thus,
the PPKED includes three modules:
Posterior Knowledge Explorer (PoKE), Prior Knowledge Explorer (PrKE) and
Multi-domain Knowledge Distiller (MKD). In detail, PoKE explores the posterior
knowledge, which provides explicit abnormal visual regions to alleviate visual
data bias; PrKE explores the prior knowledge from the prior medical knowledge
graph (medical knowledge) and prior radiology reports (working experience) to
alleviate textual data bias. The explored knowledge is distilled by the MKD to
generate the final reports. Evaluated on MIMIC-CXR and IU-Xray datasets, our
method is able to outperform previous state-of-the-art models on these two
datasets.
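The three-stage flow described in the abstract can be sketched in miniature. The module names PoKE/PrKE/MKD come from the paper, but the toy dot-product attention and the element-wise averaging in the distiller are illustrative assumptions, not the paper's actual layers:

```python
# Minimal sketch of the PPKED pipeline (PoKE -> PrKE -> MKD).
# All vectors are plain Python lists; attend() is a toy single-query
# dot-product attention standing in for the paper's attention modules.

import math

def attend(query, keys, values):
    """Softmax-weighted combination of values, scored by query . key."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

def poke(image_feats, topic_embeddings):
    # Posterior Knowledge Explorer: disease topic tags query the image
    # features to pick out abnormal visual regions.
    return [attend(t, image_feats, image_feats) for t in topic_embeddings]

def prke(posterior, prior_bank):
    # Prior Knowledge Explorer: the explored posterior knowledge queries a
    # bank of prior vectors (knowledge-graph nodes or retrieved reports).
    return [attend(p, prior_bank, prior_bank) for p in posterior]

def mkd(posterior, prior):
    # Multi-domain Knowledge Distiller: here simply an element-wise average
    # per topic; the paper distills with attention instead.
    return [[(a + b) / 2 for a, b in zip(p, q)]
            for p, q in zip(posterior, prior)]
```

A decoder would then condition on the distilled vectors from `mkd` to generate the report text.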
Related papers
- AutoRG-Brain: Grounded Report Generation for Brain MRI [57.22149878985624]
Radiologists are tasked with interpreting a large number of images on a daily basis, with the responsibility of generating corresponding reports.
This demanding workload elevates the risk of human error, potentially leading to treatment delays, increased healthcare costs, revenue loss, and operational inefficiencies.
We initiate a series of work on grounded Automatic Report Generation (AutoRG)
This system supports the delineation of brain structures, the localization of anomalies, and the generation of well-organized findings.
arXiv Detail & Related papers (2024-07-23T17:50:00Z) - Consensus, dissensus and synergy between clinicians and specialist
foundation models in radiology report generation [32.26270073540666]
The worldwide shortage of radiologists restricts access to expert care and imposes heavy workloads.
Recent progress in automated report generation with vision-language models offers clear potential to ameliorate the situation.
We build a state-of-the-art report generation system for chest radiographs, Flamingo-CXR, by fine-tuning a well-known vision-language foundation model on radiology data.
arXiv Detail & Related papers (2023-11-30T05:38:34Z) - Radiology Report Generation Using Transformers Conditioned with
Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
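The fusion step this entry describes, concatenating CNN region features with embedded demographic text before the encoder-decoder, can be sketched as follows; the `toy_embed` function and the field serialization format are illustrative assumptions, not the paper's method:

```python
# Sketch of building a multi-modal encoder input: CNN visual features
# plus embedded demographic text, concatenated along the sequence axis.

def toy_embed(token, dim=4):
    # Deterministic toy embedding: hash each character into a small vector.
    # A real system would use learned text embeddings instead.
    vec = [0.0] * dim
    for i, ch in enumerate(token):
        vec[i % dim] += ord(ch) / 1000.0
    return vec

def build_encoder_input(visual_feats, demographics, embed=toy_embed):
    # Serialize demographic fields as short text tokens, embed them, and
    # append them to the CNN region features as extra sequence positions.
    demo_tokens = [f"{k}: {v}" for k, v in sorted(demographics.items())]
    return visual_feats + [embed(tok) for tok in demo_tokens]
```

The combined sequence would then be consumed by the transformer encoder, letting decoder attention mix visual and demographic context freely.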
arXiv Detail & Related papers (2023-11-18T14:52:26Z) - ChatRadio-Valuer: A Chat Large Language Model for Generalizable
Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z) - Dynamic Multi-Domain Knowledge Networks for Chest X-ray Report
Generation [0.5939858158928474]
We propose a Dynamic Multi-Domain Knowledge(DMDK) network for radiology diagnostic report generation.
The DMDK network consists of four modules: Chest Feature Extractor(CFE), Dynamic Knowledge Extractor(DKE), Specific Knowledge Extractor(SKE), and Multi-knowledge Integrator(MKI) module.
We performed extensive experiments on two widely used datasets, IU X-Ray and MIMIC-CXR.
arXiv Detail & Related papers (2023-10-08T11:20:02Z) - Act Like a Radiologist: Radiology Report Generation across Anatomical Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework across six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking the reporting process down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
arXiv Detail & Related papers (2023-05-26T07:12:35Z) - Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report
Generation [92.73584302508907]
We propose a knowledge graph with Dynamic structure and nodes to facilitate medical report generation with Contrastive Learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its very own updated graph before being fed into the decoder module for report generation.
arXiv Detail & Related papers (2023-03-18T03:53:43Z) - Improving Radiology Report Generation Systems by Removing Hallucinated
References to Non-existent Priors [1.1110995501996481]
We propose two methods to remove references to priors in radiology reports.
A GPT-3-based few-shot approach to rewrite medical reports without references to priors; and a BioBERT-based token classification approach to directly remove words referring to priors.
We find that our re-trained model, which we call CXR-ReDonE, outperforms previous report generation methods on clinical metrics, achieving an average BERTScore of 0.2351 (a 2.57% absolute improvement).
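The token-level removal idea can be illustrated with a rule-based stand-in: where the paper trains a BioBERT token classifier to tag prior-referencing words, the sketch below uses a hypothetical hand-written cue list to the same effect:

```python
# Rule-based stand-in for the token-classification approach to removing
# references to non-existent prior studies. The cue list is illustrative;
# the paper learns these tags with a BioBERT token classifier.

PRIOR_CUES = {"prior", "previous", "unchanged", "again", "interval", "stable"}

def remove_prior_references(report):
    # Drop tokens whose lowercase form (punctuation stripped) is a prior cue.
    kept = [w for w in report.split()
            if w.lower().strip(".,;") not in PRIOR_CUES]
    return " ".join(kept)
```

For example, `remove_prior_references("Lungs unchanged compared to prior study.")` yields `"Lungs compared to study."`; a learned classifier would additionally handle context-dependent cues a fixed list misses.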
arXiv Detail & Related papers (2022-09-27T00:44:41Z) - Knowledge Matters: Radiology Report Generation with General and Specific
Knowledge [24.995748604459013]
We propose a knowledge-enhanced radiology report generation approach.
By merging the visual features of the radiology image with general knowledge and specific knowledge, the proposed model can improve the quality of generated reports.
arXiv Detail & Related papers (2021-12-30T10:36:04Z) - Generating Radiology Reports via Memory-driven Transformer [38.30011851429407]
We propose to generate radiology reports with memory-driven Transformer.
Experiments are conducted on two prevailing radiology report datasets, IU X-Ray and MIMIC-CXR.
arXiv Detail & Related papers (2020-10-30T04:08:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.