Medical Report Generation Is A Multi-label Classification Problem
- URL: http://arxiv.org/abs/2409.00250v1
- Date: Fri, 30 Aug 2024 20:43:35 GMT
- Title: Medical Report Generation Is A Multi-label Classification Problem
- Authors: Yijian Fan, Zhenbang Yang, Rui Liu, Mingjie Li, Xiaojun Chang
- Abstract summary: We propose rethinking medical report generation as a multi-label classification problem.
We introduce a novel report generation framework based on BLIP integrated with classified key nodes.
Our experiments demonstrate that leveraging key nodes can achieve state-of-the-art (SOTA) performance, surpassing existing approaches across two benchmark datasets.
- Score: 38.64929236412092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical report generation is a critical task in healthcare that involves the automatic creation of detailed and accurate descriptions from medical images. Traditionally, this task has been approached as a sequence generation problem, relying on vision-and-language techniques to generate coherent and contextually relevant reports. However, in this paper, we propose a novel perspective: rethinking medical report generation as a multi-label classification problem. By framing the task this way, we leverage the radiology nodes from the commonly used knowledge graph, which can be better captured through classification techniques. To verify our argument, we introduce a novel report generation framework based on BLIP integrated with classified key nodes, which allows for effective report generation with accurate classification of multiple key aspects within the medical images. This approach not only simplifies the report generation process but also significantly enhances performance metrics. Our extensive experiments demonstrate that leveraging key nodes can achieve state-of-the-art (SOTA) performance, surpassing existing approaches across two benchmark datasets. The results underscore the potential of re-envisioning traditional tasks with innovative methodologies, paving the way for more efficient and accurate medical report generation.
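As a rough, hypothetical sketch of this framing (not the authors' released code), the snippet below attaches a multi-label head to pooled image features, trains it with binary cross-entropy so that each knowledge-graph key node is an independent binary label, and serializes the predicted nodes into a text prompt that a BLIP-style decoder could condition on. The node names, feature dimension, and prompt format are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical key-node vocabulary (e.g., radiology findings); the actual
# node set used by the paper is not specified in the abstract.
KEY_NODES = ["cardiomegaly", "pleural effusion", "edema", "atelectasis", "pneumothorax"]

class KeyNodeClassifier(nn.Module):
    """Multi-label head: one independent binary label per key node."""
    def __init__(self, feat_dim: int, num_nodes: int):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_nodes)

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        return self.head(image_feats)  # raw logits, one per key node

classifier = KeyNodeClassifier(feat_dim=768, num_nodes=len(KEY_NODES))
image_feats = torch.randn(2, 768)                           # stand-in for vision-encoder features
targets = torch.randint(0, 2, (2, len(KEY_NODES))).float()  # multi-hot node labels

# Multi-label objective: binary cross-entropy over all nodes jointly.
loss = nn.BCEWithLogitsLoss()(classifier(image_feats), targets)

# At inference, thresholded sigmoid scores give the predicted key nodes,
# which could then condition a BLIP-style decoder, e.g. via its text prompt.
probs = torch.sigmoid(classifier(image_feats))
predicted = [[KEY_NODES[j] for j, p in enumerate(row) if p > 0.5] for row in probs]
prompt = "Key findings: " + ", ".join(predicted[0])
```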
Related papers
- Resource-Efficient Medical Report Generation using Large Language Models [3.2627279988912194]
Medical report generation is the task of automatically writing radiology reports for chest X-ray images.
We propose a new framework leveraging vision-enabled Large Language Models (LLM) for the task of medical report generation.
arXiv Detail & Related papers (2024-10-21T05:08:18Z)
- MedCycle: Unpaired Medical Report Generation via Cycle-Consistency [11.190146577567548]
We introduce an innovative approach that eliminates the need for consistent labeling schemas.
This approach is based on cycle-consistent mapping functions that transform image embeddings into report embeddings; a minimal sketch of this idea appears at the end of this list.
It outperforms state-of-the-art results in unpaired chest X-ray report generation, demonstrating improvements in both language and clinical metrics.
arXiv Detail & Related papers (2024-03-20T09:40:11Z)
- Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning [65.54680361074882]
The Eye-gaze Guided Multi-modal Alignment (EGMA) framework harnesses eye-gaze data for better alignment of medical visual and textual features.
We conduct downstream tasks of image classification and image-text retrieval on four medical datasets.
arXiv Detail & Related papers (2024-03-19T03:59:14Z)
- Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation [92.73584302508907]
We propose a knowledge graph with dynamic structure and nodes to facilitate medical report generation with contrastive learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its very own updated graph before being fed into the decoder module for report generation.
arXiv Detail & Related papers (2023-03-18T03:53:43Z)
- A Medical Semantic-Assisted Transformer for Radiographic Report Generation [39.99216295697047]
We propose a memory-augmented sparse attention block to capture the higher-order interactions between the input fine-grained image features.
We also introduce a novel Medical Concepts Generation Network (MCGN) to predict fine-grained semantic concepts and incorporate them into the report generation process as guidance.
arXiv Detail & Related papers (2022-08-22T14:38:19Z)
- A Self-Guided Framework for Radiology Report Generation [10.573538773141715]
A self-guided framework (SGF) is developed to generate medical reports with annotated disease labels.
SGF uses unsupervised and supervised deep learning methods to mimic the process of human learning and writing.
Our results highlight the capacity of the proposed framework to distinguish fine-grained visual details between words.
arXiv Detail & Related papers (2022-06-19T11:09:27Z)
- Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation [116.87918100031153]
We propose a Cross-modal clinical Graph Transformer (CGT) for ophthalmic report generation (ORG).
CGT injects clinical relation triples into the visual features as prior knowledge to drive the decoding procedure.
Experiments on the large-scale FFA-IR benchmark demonstrate that the proposed CGT is able to outperform previous benchmark methods.
arXiv Detail & Related papers (2022-06-04T13:16:30Z)
- Factored Attention and Embedding for Unstructured-view Topic-related Ultrasound Report Generation [70.7778938191405]
We propose a novel factored attention and embedding model (termed FAE-Gen) for the unstructured-view topic-related ultrasound report generation.
The proposed FAE-Gen mainly consists of two modules, i.e., view-guided factored attention and topic-oriented factored embedding, which capture the homogeneous and heterogeneous morphological characteristics across different views.
arXiv Detail & Related papers (2022-03-12T15:24:03Z)
- Variational Topic Inference for Chest X-Ray Report Generation [102.04931207504173]
Report generation for medical imaging promises to reduce workload and assist diagnosis in clinical practice.
Recent work has shown that deep learning models can successfully caption natural images.
We propose variational topic inference for automatic report generation.
arXiv Detail & Related papers (2021-07-15T13:34:38Z)
- Improving Factual Completeness and Consistency of Image-to-Text Radiology Report Generation [26.846912996765447]
We introduce two new simple rewards to encourage the generation of factually complete and consistent radiology reports.
We show that our system leads to generations that are more factually complete and consistent compared to the baselines.
arXiv Detail & Related papers (2020-10-20T05:42:47Z)
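As a minimal, hypothetical sketch of the cycle-consistency idea mentioned in the MedCycle entry above (not that paper's actual implementation), the snippet below learns two mapping functions between the image- and report-embedding spaces and penalizes the round-trip reconstruction error; the embedding dimensions and linear mappings are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

IMG_DIM, REP_DIM = 512, 768                # placeholder embedding sizes

img_to_rep = nn.Linear(IMG_DIM, REP_DIM)   # maps image embeddings into the report space
rep_to_img = nn.Linear(REP_DIM, IMG_DIM)   # maps report embeddings back into the image space

image_emb = torch.randn(4, IMG_DIM)        # stand-in for unpaired image embeddings
report_emb = torch.randn(4, REP_DIM)       # stand-in for unpaired report embeddings

# Cycle-consistency: mapping into the other space and back should recover the
# input, which lets the mappings be trained without paired image-report data.
cycle_img = F.l1_loss(rep_to_img(img_to_rep(image_emb)), image_emb)
cycle_rep = F.l1_loss(img_to_rep(rep_to_img(report_emb)), report_emb)
loss = cycle_img + cycle_rep
loss.backward()
```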