Dia-LLaMA: Towards Large Language Model-driven CT Report Generation
- URL: http://arxiv.org/abs/2403.16386v1
- Date: Mon, 25 Mar 2024 03:02:51 GMT
- Title: Dia-LLaMA: Towards Large Language Model-driven CT Report Generation
- Authors: Zhixuan Chen, Luyang Luo, Yequan Bie, Hao Chen
- Abstract summary: We propose Dia-LLaMA, a framework to adapt the LLaMA2-7B for CT report generation by incorporating diagnostic information as guidance prompts.
Considering the high dimensionality of CT, we leverage a pre-trained ViT3D with a perceiver to extract the visual information.
To tailor the LLM for report generation and emphasize abnormality, we extract additional diagnostic information by referring to a disease prototype memory bank.
- Score: 4.634780391920529
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical report generation has achieved remarkable advancements yet still faces several challenges. First, the inherent imbalance in the distribution of normal and abnormal cases may lead models to exhibit a biased focus on normal samples, resulting in unreliable diagnoses. Second, the frequent occurrence of common template sentences in the reports may overwhelm the critical abnormal information. Moreover, existing works focus on 2D chest X-rays, leaving CT report generation underexplored due to the high-dimensional nature of CT images and the limited availability of CT-report pairs. Recently, LLMs have shown a great ability to generate reliable answers with appropriate prompts, which sheds light on addressing the aforementioned challenges. In this paper, we propose Dia-LLaMA, a framework that adapts LLaMA2-7B for CT report generation by incorporating diagnostic information as guidance prompts. Considering the high dimensionality of CT, we leverage a pre-trained ViT3D with a perceiver to extract the visual information. To tailor the LLM for report generation and emphasize abnormality, we extract additional diagnostic information by referring to a disease prototype memory bank, which is updated during training to capture common disease representations. Furthermore, we introduce disease-aware attention to enable the model to adjust attention for different diseases. Experiments on the chest CT dataset demonstrated that our proposed method outperforms previous methods and achieves state-of-the-art results on both clinical efficacy and natural language generation metrics. The code will be made publicly available.
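The abstract describes two components concretely enough to sketch: a disease prototype memory bank updated during training and consulted for diagnostic guidance, and disease-aware attention over visual tokens. The following PyTorch sketch is an illustration under assumptions, not the authors' released code; the module names, the EMA update rule, and all dimensions are assumed.

```python
# Minimal PyTorch sketch (not the authors' code) of two components the
# abstract describes: a disease prototype memory bank updated during
# training, and disease-aware attention. Names, dimensions, and the
# EMA update rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiseasePrototypeMemory(nn.Module):
    def __init__(self, num_diseases: int, dim: int, momentum: float = 0.99):
        super().__init__()
        # One prototype per disease for "absent" (0) and "present" (1) states.
        self.register_buffer("prototypes", torch.zeros(num_diseases, 2, dim))
        self.momentum = momentum

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor):
        # feats: (B, num_diseases, dim); labels: (B, num_diseases) in {0, 1}.
        for state in (0, 1):
            mask = (labels == state).unsqueeze(-1).float()       # (B, D, 1)
            if mask.any():
                batch_mean = (feats * mask).sum(0) / mask.sum(0).clamp(min=1)
                self.prototypes[:, state] = (
                    self.momentum * self.prototypes[:, state]
                    + (1 - self.momentum) * batch_mean
                )

    def diagnose(self, feats: torch.Tensor) -> torch.Tensor:
        # Compare each disease feature against its two prototypes; the closer
        # prototype gives a present/absent guess used to build text prompts.
        sim = F.cosine_similarity(
            feats.unsqueeze(2), self.prototypes.unsqueeze(0), dim=-1
        )                                                        # (B, D, 2)
        return sim.argmax(-1)                                    # 0=absent, 1=present

class DiseaseAwareAttention(nn.Module):
    """One learnable query per disease attends over the visual tokens."""
    def __init__(self, num_diseases: int, dim: int, heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_diseases, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, visual_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, N, dim), e.g., from a ViT3D + perceiver front end.
        q = self.queries.unsqueeze(0).expand(visual_tokens.size(0), -1, -1)
        out, _ = self.attn(q, visual_tokens, visual_tokens)
        return out                                               # (B, D, dim)
```

Under this reading, the present/absent guesses from `diagnose` would be rendered into text (e.g., "pleural effusion: present") and prepended to the LLaMA2-7B input as the guidance prompt the abstract mentions; the exact prompt template is not given in the abstract.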
Related papers
- A Clinically-Grounded Two-Stage Framework for Renal CT Report Generation [2.988064755409503]
We propose a two-stage framework for generating renal radiology reports from 2D CT slices. First, we extract structured abnormality features using a multi-task learning model trained to identify lesion attributes. These extracted features are combined with the corresponding CT image and fed into a fine-tuned vision-language model to generate natural language report sentences.
arXiv Detail & Related papers (2025-06-30T07:45:02Z)
- Abn-BLIP: Abnormality-aligned Bootstrapping Language-Image Pre-training for Pulmonary Embolism Diagnosis and Report Generation from CTPA [3.1001390303501153]
Abn-BLIP is an advanced diagnosis model designed to align abnormal findings, improving the accuracy and comprehensiveness of radiology reports.
Our experiments show that Abn-BLIP outperforms state-of-the-art medical vision-language models and 3D report generation methods in both accuracy and clinical relevance.
arXiv Detail & Related papers (2025-03-03T20:13:39Z)
- Enhanced Contrastive Learning with Multi-view Longitudinal Data for Chest X-ray Report Generation [15.257119888131609]
We propose MLRG, an enhanced contrastive learning framework that leverages multi-view longitudinal data for chest X-ray report generation.
Specifically, we introduce a multi-view longitudinal contrastive learning method that integrates spatial information from current multi-view images and temporal information from longitudinal data.
We present a tokenized absence encoding technique to handle missing patient-specific prior knowledge, allowing the model to produce more accurate radiology reports based on available prior knowledge.
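How a tokenized absence encoding might look in practice: a minimal sketch, assuming a learnable token is substituted wherever a patient's prior study is missing (the actual MLRG implementation may differ).

```python
# Hedged sketch (assumed, not the MLRG authors' code): a learnable
# "absence" token stands in for a missing prior study, so the report
# generator always receives a fixed-shape prior-knowledge input.
import torch
import torch.nn as nn

class AbsenceEncoder(nn.Module):
    def __init__(self, dim: int, prior_len: int):
        super().__init__()
        self.absence_token = nn.Parameter(torch.randn(1, prior_len, dim))

    def forward(self, prior_feats: torch.Tensor, has_prior: torch.Tensor):
        # prior_feats: (B, prior_len, dim), zero-filled where no prior exists;
        # has_prior: (B,) boolean mask of which patients have a prior study.
        token = self.absence_token.expand(prior_feats.size(0), -1, -1)
        mask = has_prior.view(-1, 1, 1).float()
        return mask * prior_feats + (1 - mask) * token
```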
arXiv Detail & Related papers (2025-02-27T12:59:04Z)
- HC-LLM: Historical-Constrained Large Language Models for Radiology Report Generation [89.3260120072177]
We propose a novel Historical-Constrained Large Language Model (HC-LLM) framework for radiology report generation.
Our approach extracts both time-shared and time-specific features from longitudinal chest X-rays and diagnostic reports to capture disease progression.
Notably, our approach performs well even without historical data during testing and can be easily adapted to other multimodal large models.
arXiv Detail & Related papers (2024-12-15T06:04:16Z)
- 3D-CT-GPT: Generating 3D Radiology Reports through Integration of Large Vision-Language Models [51.855377054763345]
This paper introduces 3D-CT-GPT, a Visual Question Answering (VQA)-based medical visual language model for generating radiology reports from 3D CT scans.
Experiments on both public and private datasets demonstrate that 3D-CT-GPT significantly outperforms existing methods in terms of report accuracy and quality.
arXiv Detail & Related papers (2024-09-28T12:31:07Z)
- CT-AGRG: Automated Abnormality-Guided Report Generation from 3D Chest CT Volumes [0.0]
Existing methods typically generate entire reports directly from 3D CT images, without explicitly focusing on observed abnormalities.
We propose a new anomaly-guided report generation model, which first predicts abnormalities and then generates targeted descriptions for each.
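A minimal sketch of this predict-then-describe pipeline, with the class names, threshold, and decoder interface assumed rather than taken from the paper:

```python
# Assumed sketch of the two-stage idea (not the CT-AGRG code): a
# multi-label classifier first flags abnormalities in 3D CT features,
# then a text decoder is prompted once per flagged abnormality.
import torch
import torch.nn as nn

ABNORMALITIES = ["atelectasis", "pleural effusion", "nodule"]  # illustrative only

class AbnormalityGuidedReporter(nn.Module):
    def __init__(self, encoder: nn.Module, decoder, dim: int):
        super().__init__()
        self.encoder = encoder                      # 3D CT volume -> (B, dim)
        self.classifier = nn.Linear(dim, len(ABNORMALITIES))
        self.decoder = decoder                      # callable(prompt, feats) -> str

    def forward(self, volume: torch.Tensor, threshold: float = 0.5) -> str:
        feats = self.encoder(volume)
        probs = torch.sigmoid(self.classifier(feats))
        sentences = []
        for i, name in enumerate(ABNORMALITIES):
            if probs[0, i] > threshold:             # batch size 1 for clarity
                sentences.append(self.decoder(f"Describe the {name}.", feats))
        return " ".join(sentences)
```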
arXiv Detail & Related papers (2024-08-21T19:36:27Z)
- Beyond the Eye: A Relational Model for Early Dementia Detection Using Retinal OCTA Images [42.75763279888966]
We present a novel PolarNet+ that uses retinal optical coherence tomography angiography (OCTA) to discriminate early-onset Alzheimer's disease (AD) and mild cognitive impairment (MCI) subjects from controls.
Our method first maps OCTA images from Cartesian coordinates to polar coordinates, allowing approximate sub-region calculation.
We then introduce a multi-view module to serialize and analyze the images along three dimensions for comprehensive, clinically useful information extraction.
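The Cartesian-to-polar step can be illustrated with OpenCV's `warpPolar`; the sketch below is an assumption about the preprocessing, not the PolarNet+ code, and the fovea-centered default is illustrative. After the mapping, concentric sub-regions of the scan become axis-aligned bands, which simplifies approximate sector-wise aggregation.

```python
# Hedged sketch of a Cartesian-to-polar mapping for an en-face OCTA image
# (the exact PolarNet+ preprocessing is not given in the abstract).
import cv2
import numpy as np

def to_polar(octa: np.ndarray, center=None) -> np.ndarray:
    h, w = octa.shape[:2]
    if center is None:
        center = (w / 2, h / 2)        # assume a fovea-centered scan
    max_radius = min(center[0], center[1])
    # Rows of the output correspond to angles, columns to radii.
    return cv2.warpPolar(octa, (w, h), center, max_radius,
                         cv2.WARP_POLAR_LINEAR)

polar = to_polar(np.random.rand(304, 304).astype(np.float32))
```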
arXiv Detail & Related papers (2024-08-09T15:10:34Z)
- RadGenome-Chest CT: A Grounded Vision-Language Dataset for Chest CT Analysis [56.57177181778517]
RadGenome-Chest CT is a large-scale, region-guided 3D chest CT interpretation dataset based on CT-RATE.
We leverage the latest powerful universal segmentation and large language models to extend the original datasets.
arXiv Detail & Related papers (2024-04-25T17:11:37Z)
- High-Fidelity Image Synthesis from Pulmonary Nodule Lesion Maps using Semantic Diffusion Model [10.412300404240751]
Lung cancer has been one of the leading causes of cancer-related deaths worldwide for years.
Deep-learning-based computer-assisted diagnosis (CAD) models can accelerate the screening process.
However, developing robust and accurate models often requires large-scale and diverse medical datasets with high-quality annotations.
arXiv Detail & Related papers (2023-05-02T01:04:22Z)
- Cross-Modal Causal Intervention for Medical Report Generation [109.83549148448469]
Medical report generation (MRG) is essential for computer-aided diagnosis and medication guidance.
Due to the spurious correlations within image-text data induced by visual and linguistic biases, it is challenging to generate accurate reports reliably describing lesion areas.
We propose a novel Visual-Linguistic Causal Intervention (VLCI) framework for MRG, which consists of a visual deconfounding module (VDM) and a linguistic deconfounding module (LDM).
arXiv Detail & Related papers (2023-03-16T07:23:55Z)
- AlignTransformer: Hierarchical Alignment of Visual Regions and Disease Tags for Medical Report Generation [50.21065317817769]
We propose an AlignTransformer framework, which includes the Align Hierarchical Attention (AHA) and the Multi-Grained Transformer (MGT) modules.
Experiments on the public IU-Xray and MIMIC-CXR datasets show that the AlignTransformer can achieve results competitive with state-of-the-art methods on the two datasets.
arXiv Detail & Related papers (2022-03-18T13:43:53Z)
- Explainable multiple abnormality classification of chest CT volumes with AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
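A hedged sketch of the slice-level multiple-instance-learning idea (the aggregation rule and shapes are assumptions, and the HiResCAM-based mask loss is omitted):

```python
# Assumed sketch (not the AxialNet code): each axial slice is scored per
# abnormality, scores are aggregated across slices for the volume-level
# prediction, and the argmax recovers the top slice per abnormality.
import torch
import torch.nn as nn

class SliceMIL(nn.Module):
    def __init__(self, slice_encoder: nn.Module, dim: int, num_abnormalities: int):
        super().__init__()
        self.slice_encoder = slice_encoder        # (B*S, C, H, W) -> (B*S, dim)
        self.score = nn.Linear(dim, num_abnormalities)

    def forward(self, volume: torch.Tensor):
        b, s = volume.shape[:2]                   # volume: (B, S, C, H, W)
        feats = self.slice_encoder(volume.flatten(0, 1))
        slice_scores = self.score(feats).view(b, s, -1)    # (B, S, A)
        # Max-pooling over slices: the volume is positive for an abnormality
        # if its most suspicious slice is; the argmax names that slice.
        volume_scores, top_slices = slice_scores.max(dim=1)
        return volume_scores, top_slices
```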
arXiv Detail & Related papers (2021-11-24T01:14:33Z)
- Contrastive Attention for Automatic Chest X-ray Report Generation [124.60087367316531]
In most cases, the normal regions dominate the entire chest X-ray image, and the corresponding descriptions of these normal regions dominate the final report.
We propose Contrastive Attention (CA) model, which compares the current input image with normal images to distill the contrastive information.
We achieve state-of-the-art results on the two public datasets.
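One plausible reading of the contrastive step, sketched under assumptions (the authors' CA module may differ): attend the current image's features over a pool of normal-image features and keep the residual, so that what deviates from normality is emphasized.

```python
# Hedged sketch of a contrastive-attention-style operation; the pooled
# normal features and scaling choice are assumptions.
import torch
import torch.nn.functional as F

def contrastive_features(current: torch.Tensor, normal_pool: torch.Tensor):
    # current: (B, N, d) patch features of the input X-ray;
    # normal_pool: (M, d) features collected from normal images.
    attn = F.softmax(current @ normal_pool.t() / current.size(-1) ** 0.5, dim=-1)
    normal_part = attn @ normal_pool      # (B, N, d): the closest normal content
    return current - normal_part          # residual highlighting abnormality
```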
arXiv Detail & Related papers (2021-06-13T11:20:31Z)
- Learning Visual-Semantic Embeddings for Reporting Abnormal Findings on Chest X-rays [6.686095511538683]
This work focuses on reporting abnormal findings on radiology images.
We propose a method to identify abnormal findings in the reports and group them using unsupervised clustering and minimal rules.
We demonstrate that our method is able to retrieve abnormal findings and outperforms existing generation models on both clinical correctness and text generation metrics.
arXiv Detail & Related papers (2020-10-06T04:18:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.