A Self-Guided Framework for Radiology Report Generation
- URL: http://arxiv.org/abs/2206.09378v1
- Date: Sun, 19 Jun 2022 11:09:27 GMT
- Title: A Self-Guided Framework for Radiology Report Generation
- Authors: Jun Li, Shibo Li, Ying Hu, Huiren Tao
- Abstract summary: A self-guided framework (SGF) is developed to generate medical reports with annotated disease labels.
SGF uses unsupervised and supervised deep learning methods to mimic the process of human learning and writing.
Our results highlight the capacity of the proposed framework to distinguish fine-grained visual details between words.
- Score: 10.573538773141715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic radiology report generation is essential to computer-aided
diagnosis. Through the success of image captioning, medical report generation
has been achievable. However, the lack of annotated disease labels is still the
bottleneck of this area. In addition, the image-text data bias problem and
complex sentences make it more difficult to generate accurate reports. To
address these gaps, we present a self-guided framework (SGF), a suite of
unsupervised and supervised deep learning methods to mimic the process of human
learning and writing. In detail, our framework obtains domain knowledge
from medical reports without extra disease labels and guides itself to extract
fine-grained visual features associated with the text. Moreover, SGF
successfully improves the accuracy and length of medical report generation by
incorporating a similarity comparison mechanism that imitates the process of
human self-improvement through comparative practice. Extensive experiments
demonstrate the utility of our SGF in the majority of cases, showing its
superior performance over state-of-the-art methods. Our results highlight the
capacity of the proposed framework to distinguish fine-grained visual details
between words and verify its advantage in generating medical reports.
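As an illustration of the similarity comparison idea described above, the following is a minimal sketch that scores a generated-report embedding against its reference embedding with cosine similarity. It is a stand-in under assumed inputs, not the paper's implementation; the function name, encoder, and dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

def report_similarity_score(generated_emb: torch.Tensor,
                            reference_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between a generated-report embedding and its
    ground-truth reference embedding, usable as a comparison signal."""
    return F.cosine_similarity(generated_emb, reference_emb, dim=-1)

# Illustrative usage with random stand-in embeddings (batch of 4, dim 512).
gen = torch.randn(4, 512)
ref = torch.randn(4, 512)
print(report_similarity_score(gen, ref))  # one score per report pair
```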
Related papers
- Resource-Efficient Medical Report Generation using Large Language Models [3.2627279988912194]
Medical report generation is the task of automatically writing radiology reports for chest X-ray images.
We propose a new framework leveraging vision-enabled Large Language Models (LLM) for the task of medical report generation.
arXiv Detail & Related papers (2024-10-21T05:08:18Z) - Medical Report Generation Is A Multi-label Classification Problem [38.64929236412092]
We propose rethinking medical report generation as a multi-label classification problem.
We introduce a novel report generation framework based on BLIP integrated with classified key nodes.
Our experiments demonstrate that leveraging key nodes can achieve state-of-the-art (SOTA) performance, surpassing existing approaches across two benchmark datasets.
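A minimal sketch of treating findings as a multi-label classification problem, assuming image features and a fixed finding vocabulary; the class count, dimensions, and names are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class FindingClassifier(nn.Module):
    """Multi-label head: each finding ("key node") is an independent binary
    label, so sigmoid + BCE rather than a single softmax over classes."""

    def __init__(self, feat_dim: int = 768, num_findings: int = 14):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_findings)

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        return self.head(image_feats)  # raw logits, one per finding

model = FindingClassifier()
criterion = nn.BCEWithLogitsLoss()             # multi-label objective
feats = torch.randn(8, 768)                    # stand-in image features
labels = torch.randint(0, 2, (8, 14)).float()  # stand-in finding labels
loss = criterion(model(feats), labels)
loss.backward()
```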
arXiv Detail & Related papers (2024-08-30T20:43:35Z) - AutoRG-Brain: Grounded Report Generation for Brain MRI [57.22149878985624]
Radiologists are tasked with interpreting a large number of images on a daily basis and with generating the corresponding reports.
This demanding workload elevates the risk of human error, potentially leading to treatment delays, increased healthcare costs, revenue loss, and operational inefficiencies.
We initiate a series of work on grounded Automatic Report Generation (AutoRG)
This system supports the delineation of brain structures, the localization of anomalies, and the generation of well-organized findings.
arXiv Detail & Related papers (2024-07-23T17:50:00Z) - Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation [10.46031380503486]
We introduce a novel method, Structural Entities extraction and patient indications Incorporation (SEI), for chest X-ray report generation.
We employ a structural entities extraction (SEE) approach to eliminate presentation-style vocabulary in reports.
We propose a cross-modal fusion network to integrate information from X-ray images, similar historical cases, and patient-specific indications.
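The sketch below shows one plausible form of such a cross-modal fusion step, with image tokens attending over retrieved-case and indication tokens; the module name, dimensions, and attention layout are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch of a fusion step: image tokens attend over the concatenation
    of retrieved-case tokens and patient-indication tokens."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tok, case_tok, indication_tok):
        context = torch.cat([case_tok, indication_tok], dim=1)
        fused, _ = self.attn(query=image_tok, key=context, value=context)
        return self.norm(image_tok + fused)   # residual connection

fusion = CrossModalFusion()
img = torch.randn(2, 49, 512)     # e.g. 7x7 image patch tokens
cases = torch.randn(2, 32, 512)   # tokens from similar historical reports
ind = torch.randn(2, 16, 512)     # tokens from the patient indication
print(fusion(img, cases, ind).shape)  # (2, 49, 512)
```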
arXiv Detail & Related papers (2024-05-23T01:29:47Z) - Reading Radiology Imaging Like The Radiologist [3.218449686637963]
We design a factual consistency captioning generator to generate more accurate and factually consistent disease descriptions.
Our framework can find most similar reports for a given disease from the CXR database by retrieving a disease-oriented mask.
arXiv Detail & Related papers (2023-07-12T05:36:47Z) - Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report
Generation [92.73584302508907]
We propose a knowledge graph with Dynamic structure and nodes to facilitate medical report generation with Contrastive Learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its very own updated graph before being fed into the decoder module for report generation.
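A minimal sketch of integrating an image feature with graph node embeddings before decoding, assuming the node embeddings have already been updated for the given image; the module name and dimensions are placeholders, not the paper's design.

```python
import torch
import torch.nn as nn

class GraphEnhancedFeature(nn.Module):
    """Sketch: an image feature queries the node embeddings of an
    image-updated knowledge graph, and the attended graph context is
    added back before the feature is passed to the decoder."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.query = nn.Linear(dim, dim)

    def forward(self, image_feat, node_emb):
        # image_feat: (B, D), node_emb: (B, num_nodes, D)
        q = self.query(image_feat).unsqueeze(1)                 # (B, 1, D)
        attn = torch.softmax(q @ node_emb.transpose(1, 2), -1)  # (B, 1, N)
        graph_ctx = (attn @ node_emb).squeeze(1)                # (B, D)
        return image_feat + graph_ctx                           # decoder input

module = GraphEnhancedFeature()
img = torch.randn(4, 512)
nodes = torch.randn(4, 20, 512)   # 20 graph nodes per image
print(module(img, nodes).shape)   # (4, 512)
```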
arXiv Detail & Related papers (2023-03-18T03:53:43Z) - Cross-Modal Causal Intervention for Medical Report Generation [109.83549148448469]
Medical report generation (MRG) is essential for computer-aided diagnosis and medication guidance.
Due to the spurious correlations within image-text data induced by visual and linguistic biases, it is challenging to generate accurate reports reliably describing lesion areas.
We propose a novel Visual-Linguistic Causal Intervention (VLCI) framework for MRG, which consists of a visual deconfounding module (VDM) and a linguistic deconfounding module (LDM)
arXiv Detail & Related papers (2023-03-16T07:23:55Z) - Weakly Supervised Contrastive Learning for Chest X-Ray Report Generation [3.3978173451092437]
Radiology report generation aims at generating descriptive text from radiology images automatically.
A typical setting consists of training encoder-decoder models on image-report pairs with a cross entropy loss.
We propose a novel weakly supervised contrastive loss for medical report generation.
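For reference, the sketch below shows a plain batch-wise contrastive (InfoNCE-style) image-report objective; the weak-supervision weighting of negatives that the paper proposes is not reproduced here, and all names and dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_report_loss(img_emb, txt_emb, temperature: float = 0.07):
    """Each image embedding should score highest against its own report
    embedding among all reports in the batch (diagonal of the logits)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0))        # matching pairs on diagonal
    return F.cross_entropy(logits, targets)

loss = contrastive_report_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```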
arXiv Detail & Related papers (2021-09-25T00:06:23Z) - Variational Topic Inference for Chest X-Ray Report Generation [102.04931207504173]
Report generation for medical imaging promises to reduce workload and assist diagnosis in clinical practice.
Recent work has shown that deep learning models can successfully caption natural images.
We propose variational topic inference for automatic report generation.
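A minimal sketch of the variational step such an approach implies: infer a Gaussian over a latent topic from image features, sample it with the reparameterization trick, and regularize toward a standard-normal prior. Dimensions and names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TopicInference(nn.Module):
    """Infer a latent topic vector per image and return it together with the
    KL divergence to a standard-normal prior (the VAE regularizer)."""

    def __init__(self, feat_dim: int = 512, topic_dim: int = 64):
        super().__init__()
        self.mu = nn.Linear(feat_dim, topic_dim)
        self.logvar = nn.Linear(feat_dim, topic_dim)

    def forward(self, image_feat):
        mu, logvar = self.mu(image_feat), self.logvar(image_feat)
        std = torch.exp(0.5 * logvar)
        topic = mu + std * torch.randn_like(std)   # reparameterization trick
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return topic, kl.mean()   # topic would condition the sentence decoder

topic, kl = TopicInference()(torch.randn(4, 512))
print(topic.shape, kl.item())
```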
arXiv Detail & Related papers (2021-07-15T13:34:38Z) - Unifying Relational Sentence Generation and Retrieval for Medical Image
Report Composition [142.42920413017163]
Current methods often generate the most common sentences for an individual case due to dataset bias.
We propose a novel framework that unifies template retrieval and sentence generation to handle both common and rare abnormalities.
arXiv Detail & Related papers (2021-01-09T04:33:27Z) - Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report
Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.