Controllable Chest X-Ray Report Generation from Longitudinal
Representations
- URL: http://arxiv.org/abs/2310.05881v1
- Date: Mon, 9 Oct 2023 17:22:58 GMT
- Title: Controllable Chest X-Ray Report Generation from Longitudinal
Representations
- Authors: Francesco Dalla Serra, Chaoyang Wang, Fani Deligianni, Jeffrey Dalton,
Alison Q O'Neil
- Abstract summary: One strategy to speed up reporting is to integrate automated reporting systems.
Previous approaches to automated radiology reporting generally do not provide the prior study as input.
We introduce two novel aspects: (1) longitudinal representation learning -- we propose a method to align, concatenate and fuse the current and prior scan information into a joint longitudinal representation which can be provided to the multimodal report generation model; (2) sentence-anatomy dropout -- a training strategy for controllability in which the report generator model is trained to predict only those sentences from the original report which correspond to the subset of anatomical regions given as input.
- Score: 13.151444796296868
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Radiology reports are detailed text descriptions of the content of medical
scans. Each report describes the presence/absence and location of relevant
clinical findings, commonly including comparison with prior exams of the same
patient to describe how they evolved. Radiology reporting is a time-consuming
process, and scan results are often subject to delays. One strategy to speed up
reporting is to integrate automated reporting systems; however, clinical
deployment requires high accuracy and interpretability. Previous approaches to
automated radiology reporting generally do not provide the prior study as
input, precluding comparison which is required for clinical accuracy in some
types of scans, and offer only unreliable methods of interpretability.
Therefore, leveraging an existing visual input format of anatomical tokens, we
introduce two novel aspects: (1) longitudinal representation learning -- we
input the prior scan as an additional input, proposing a method to align,
concatenate and fuse the current and prior visual information into a joint
longitudinal representation which can be provided to the multimodal report
generation model; (2) sentence-anatomy dropout -- a training strategy for
controllability in which the report generator model is trained to predict only
sentences from the original report which correspond to the subset of anatomical
regions given as input. We show through in-depth experiments on the MIMIC-CXR
dataset how the proposed approach achieves state-of-the-art results while
enabling anatomy-wise controllable report generation.
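The sentence-anatomy dropout idea described above can be illustrated with a small sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the region names, the tagging of sentences to regions, and the keep probability are all hypothetical choices made for the example.

```python
import random

# Hypothetical example: each report sentence is tagged with the anatomical
# region it describes (the paper derives such regions from anatomical tokens;
# the tags and sentences here are invented for illustration).
report = [
    ("heart", "The cardiac silhouette is normal in size."),
    ("lungs", "The lungs are clear without consolidation."),
    ("pleura", "No pleural effusion or pneumothorax."),
]

def sentence_anatomy_dropout(tagged_sentences, keep_prob=0.5, rng=None):
    """Sample a subset of anatomical regions; the target report is then
    restricted to the sentences belonging to the sampled regions, so the
    generator learns to describe only the regions it is given as input."""
    rng = rng or random.Random()
    regions = sorted({r for r, _ in tagged_sentences})
    kept = [r for r in regions if rng.random() < keep_prob]
    if not kept:  # always keep at least one region
        kept = [rng.choice(regions)]
    target = [s for r, s in tagged_sentences if r in kept]
    return kept, target

kept, target = sentence_anatomy_dropout(report, keep_prob=0.5,
                                        rng=random.Random(0))
print(kept, target)
```

At inference time the same interface gives anatomy-wise controllability: passing only the regions of interest yields a report restricted to those regions.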
Related papers
- Structural Entities Extraction and Patient Indications Incorporation for Chest X-ray Report Generation [10.46031380503486]
We introduce a novel method, Structural Entities extraction and patient Indications Incorporation (SEI), for chest X-ray report generation.
We employ a structural entities extraction (SEE) approach to eliminate presentation-style vocabulary in reports.
We propose a cross-modal fusion network to integrate information from X-ray images, similar historical cases, and patient-specific indications.
arXiv Detail & Related papers (2024-05-23T01:29:47Z) - Radiology Report Generation Using Transformers Conditioned with
Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z) - Improving Radiology Summarization with Radiograph and Anatomy Prompts [60.30659124918211]
We propose a novel anatomy-enhanced multimodal model to promote impression generation.
In detail, we first construct a set of rules to extract anatomies and put these prompts into each sentence to highlight anatomy characteristics.
We utilize a contrastive learning module to align these two representations at the overall level and use a co-attention to fuse them at the sentence level.
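The blurb above mentions fusing two modalities with co-attention at the sentence level. The paper's exact module is not specified here; as a rough sketch only, one direction of a generic scaled dot-product co-attention step, in which each sentence representation attends over image region representations, could look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(text, image):
    """One co-attention direction: each text vector (shape (n, d)) attends
    over image vectors (shape (m, d)); returns residually fused text."""
    scores = text @ image.T / np.sqrt(text.shape[-1])  # (n, m) affinities
    attended = softmax(scores, axis=-1) @ image        # image summary per sentence
    return text + attended                             # residual fusion

rng = np.random.default_rng(0)
fused = co_attention(rng.normal(size=(4, 8)), rng.normal(size=(6, 8)))
print(fused.shape)  # (4, 8)
```

The symmetric direction (image attending over text) follows the same pattern with the arguments swapped.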
arXiv Detail & Related papers (2022-10-15T14:05:03Z) - Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, the Show-Attend-Tell and the GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO dataset.
arXiv Detail & Related papers (2022-09-28T10:27:10Z) - FlexR: Few-shot Classification with Language Embeddings for Structured
Reporting of Chest X-rays [37.15474283789249]
We propose a method to predict clinical findings defined by sentences in structured reporting templates.
The approach involves training a contrastive language-image model using chest X-rays and related free-text radiological reports.
Results show that even with limited image-level annotations for training, the method can accomplish the structured reporting tasks of severity assessment of cardiomegaly and localizing pathologies in chest X-rays.
arXiv Detail & Related papers (2022-03-29T16:31:39Z) - Learning Semi-Structured Representations of Radiology Reports [10.134080761449093]
Given a corpus of radiology reports, researchers are often interested in identifying a subset of reports describing a particular medical finding.
Recent studies proposed mapping free-text statements in radiology reports to semi-structured strings of terms taken from a limited vocabulary.
This paper aims to present an approach for the automatic generation of semi-structured representations of radiology reports.
arXiv Detail & Related papers (2021-12-20T18:53:41Z) - Weakly Supervised Contrastive Learning for Chest X-Ray Report Generation [3.3978173451092437]
Radiology report generation aims at generating descriptive text from radiology images automatically.
A typical setting consists of training encoder-decoder models on image-report pairs with a cross entropy loss.
We propose a novel weakly supervised contrastive loss for medical report generation.
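The specifics of the weakly supervised loss are not given in this summary. As a point of reference only, the standard symmetric InfoNCE-style contrastive loss over matched image-report embedding pairs, which such work typically builds on, can be sketched as:

```python
import numpy as np

def info_nce(image_emb, report_emb, temperature=0.1):
    """Symmetric InfoNCE-style contrastive loss; row i of each matrix
    is assumed to be a matched image/report pair (the positive)."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    zi, zr = normalize(image_emb), normalize(report_emb)
    logits = zi @ zr.T / temperature       # (n, n) cosine similarities
    idx = np.arange(len(logits))           # positives sit on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # average the image-to-report and report-to-image directions
    return (cross_entropy(logits) + cross_entropy(logits.T)) / 2
```

Matched pairs should score a lower loss than mismatched ones; weak supervision schemes typically modify how positives and negatives are selected rather than this basic form.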
arXiv Detail & Related papers (2021-09-25T00:06:23Z) - MIMO: Mutual Integration of Patient Journey and Medical Ontology for
Healthcare Representation Learning [49.57261599776167]
We propose an end-to-end robust Transformer-based solution, Mutual Integration of patient journey and Medical Ontology (MIMO) for healthcare representation learning and predictive analytics.
arXiv Detail & Related papers (2021-07-20T07:04:52Z) - BiteNet: Bidirectional Temporal Encoder Network to Predict Medical
Outcomes [53.163089893876645]
We propose a novel self-attention mechanism that captures the contextual dependency and temporal relationships within a patient's healthcare journey.
An end-to-end bidirectional temporal encoder network (BiteNet) then learns representations of the patient's journeys.
We have evaluated the effectiveness of our methods on two supervised prediction and two unsupervised clustering tasks with a real-world EHR dataset.
arXiv Detail & Related papers (2020-09-24T00:42:36Z) - Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report
Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z) - Show, Describe and Conclude: On Exploiting the Structure Information of
Chest X-Ray Reports [5.6070625920019825]
Chest X-Ray (CXR) images are commonly used for clinical screening and diagnosis.
The complex structures between and within sections of the reports pose a great challenge to the automatic report generation.
We propose a novel framework that exploits the structure information between and within report sections for generating CXR imaging reports.
arXiv Detail & Related papers (2020-04-26T02:29:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.