Effectively Fine-tune to Improve Large Multimodal Models for Radiology
Report Generation
- URL: http://arxiv.org/abs/2312.01504v1
- Date: Sun, 3 Dec 2023 20:42:38 GMT
- Title: Effectively Fine-tune to Improve Large Multimodal Models for Radiology
Report Generation
- Authors: Yuzhe Lu, Sungmin Hong, Yash Shah, Panpan Xu
- Abstract summary: Large Language Models (LLMs) have recently demonstrated impressive capabilities.
We propose a simple yet effective two-stage fine-tuning protocol to align visual features to the LLM's text embedding space as soft visual prompts.
Our framework with OpenLLaMA-7B achieved state-of-the-art level performance without domain-specific pretraining.
- Score: 8.788649244412591
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Writing radiology reports from medical images requires a high level of domain
expertise. It is time-consuming even for trained radiologists and can be
error-prone for inexperienced radiologists. It would be appealing to automate
this task by leveraging generative AI, which has shown rapid progress in
vision and language understanding. In particular, Large Language Models (LLMs)
have recently demonstrated impressive capabilities and continue to set new
state-of-the-art performance on almost all natural language tasks. While many
have proposed architectures to combine vision models with LLMs for multimodal
tasks, few have explored practical fine-tuning strategies. In this work, we
propose a simple yet effective two-stage fine-tuning protocol to align visual
features to the LLM's text embedding space as soft visual prompts. Our framework
with OpenLLaMA-7B achieved state-of-the-art level performance without
domain-specific pretraining. Moreover, we provide detailed analyses of soft
visual prompts and attention mechanisms, shedding light on future research
directions.
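The abstract does not include implementation details, but the following PyTorch-style sketch illustrates the general idea of soft visual prompting under stated assumptions: a frozen vision encoder, a learned linear projector into the LLM's token-embedding space, and a two-stage schedule in which only the projector is trained first and the LLM is unfrozen afterwards. All module names, dimensions, and the prompt-selection step are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch (assumptions, not the paper's code): visual features are
# projected into the LLM's token-embedding space and prepended to the text
# embeddings as "soft visual prompts". Stage 1 trains only the projector;
# Stage 2 additionally fine-tunes the LLM for report generation.
import torch
import torch.nn as nn


class SoftVisualPromptModel(nn.Module):
    def __init__(self, vision_encoder: nn.Module, llm: nn.Module,
                 vision_dim: int, llm_dim: int, num_prompts: int = 32):
        super().__init__()
        self.vision_encoder = vision_encoder   # e.g. a ViT; kept frozen throughout
        self.llm = llm                         # e.g. an OpenLLaMA-7B-style decoder
        # Learned projector mapping visual features into the LLM embedding space.
        self.projector = nn.Linear(vision_dim, llm_dim)
        self.num_prompts = num_prompts

    def forward(self, images: torch.Tensor, text_embeds: torch.Tensor):
        # images: (B, C, H, W); text_embeds: (B, T, llm_dim) from the LLM's embedding table.
        with torch.no_grad():                          # vision encoder stays frozen
            vis_feats = self.vision_encoder(images)    # (B, N_patches, vision_dim)
        # Illustrative simplification: take the first num_prompts patch tokens.
        prompts = self.projector(vis_feats[:, : self.num_prompts])  # (B, P, llm_dim)
        # Prepend soft visual prompts to the text embeddings and let the LLM decode.
        inputs = torch.cat([prompts, text_embeds], dim=1)
        return self.llm(inputs)


def set_stage(model: SoftVisualPromptModel, stage: int) -> None:
    """Stage 1: train the projector only. Stage 2: also fine-tune the LLM."""
    for p in model.vision_encoder.parameters():
        p.requires_grad = False
    for p in model.projector.parameters():
        p.requires_grad = True
    for p in model.llm.parameters():
        p.requires_grad = (stage == 2)
```

The two-stage split mirrors the abstract's description: alignment first (cheap, projector-only), then end-to-end report-generation fine-tuning once the visual prompts live in the LLM's embedding space.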
Related papers
- RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training [55.54020926284334]
Multimodal Large Language Models (MLLMs) have recently received substantial interest, reflecting their emerging potential as general-purpose models for various vision-language tasks.
Retrieval augmentation techniques have proven to be effective plugins for both LLMs and MLLMs.
In this study, we propose multimodal adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training (RA-BLIP), a novel retrieval-augmented framework for various MLLMs.
arXiv Detail & Related papers (2024-10-18T03:45:19Z)
- Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs [56.391404083287235]
We introduce Cambrian-1, a family of multimodal LLMs (MLLMs) designed with a vision-centric approach.
Our study uses LLMs and visual instruction tuning as an interface to evaluate various visual representations.
We provide model weights, code, supporting tools, datasets, and detailed instruction-tuning and evaluation recipes.
arXiv Detail & Related papers (2024-06-24T17:59:42Z)
- Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want [58.091825321168514]
We introduce the Draw-and-Understand project: a new model, a multi-domain dataset, and a challenging benchmark for visual prompting.
Specifically, we propose a new end-to-end trained Multimodal Large Language Model (MLLM) that connects a vision encoder, a visual prompt encoder and an LLM.
To advance visual prompting research for MLLMs, we introduce MDVP-Data and MDVP-Bench.
arXiv Detail & Related papers (2024-03-29T16:26:20Z)
- Residual-based Language Models are Free Boosters for Biomedical Imaging [15.154015369984572]
In this study, we uncover the unexpected efficacy of residual-based large language models (LLMs) as part of encoders for biomedical imaging tasks.
We found that these LLMs could boost performance across a spectrum of biomedical imaging applications, including both 2D and 3D visual classification tasks.
As a byproduct, we found that the proposed framework achieved superior performance, setting new state-of-the-art results on the extensive, standardized MedMNIST-2D and 3D datasets.
arXiv Detail & Related papers (2024-03-26T03:05:20Z)
- MedXChat: A Unified Multimodal Large Language Model Framework towards CXRs Understanding and Generation [28.497591315598402]
Multimodal Large Language Models (MLLMs) have shown success in various general image processing tasks.
This study investigates the potential of MLLMs in improving the understanding and generation of Chest X-Rays (CXRs).
arXiv Detail & Related papers (2023-12-04T06:40:12Z)
- XrayGPT: Chest Radiographs Summarization using Medical Vision-Language Models [60.437091462613544]
We introduce XrayGPT, a novel conversational medical vision-language model.
It can analyze and answer open-ended questions about chest radiographs.
We generate 217k interactive and high-quality summaries from free-text radiology reports.
arXiv Detail & Related papers (2023-06-13T17:59:59Z)
- An Iterative Optimizing Framework for Radiology Report Summarization with ChatGPT [80.33783969507458]
The 'Impression' section of a radiology report is a critical basis for communication between radiologists and other physicians.
Recent studies have achieved promising results in automatic impression generation using large-scale medical text data.
These models often require substantial amounts of medical text data and have poor generalization performance.
arXiv Detail & Related papers (2023-04-17T17:13:42Z)
- mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections [104.14624185375897]
mPLUG is a new vision-language foundation model for both cross-modal understanding and generation.
It achieves state-of-the-art results on a wide range of vision-language downstream tasks, such as image captioning, image-text retrieval, visual grounding and visual question answering.
arXiv Detail & Related papers (2022-05-24T11:52:06Z)
- Multi-modal Understanding and Generation for Medical Images and Text via Vision-Language Pre-Training [5.119201893752376]
We propose Medical Vision Language Learner (MedViLL) which adopts a Transformer-based architecture combined with a novel multimodal attention masking scheme.
We empirically demonstrate the superior downstream task performance of MedViLL against various baselines including task-specific architectures.
arXiv Detail & Related papers (2021-05-24T15:14:09Z)