Detailed Annotations of Chest X-Rays via CT Projection for Report Understanding
- URL: http://arxiv.org/abs/2210.03416v1
- Date: Fri, 7 Oct 2022 09:21:48 GMT
- Title: Detailed Annotations of Chest X-Rays via CT Projection for Report Understanding
- Authors: Constantin Seibold, Simon Reiß, Saquib Sarfraz, Matthias A. Fink, Victoria Mayer, Jan Sellner, Moon Sung Kim, Klaus H. Maier-Hein, Jens Kleesiek and Rainer Stiefelhagen
- Abstract summary: In clinical radiology reports, doctors capture important information about the patient's health status.
They convey their observations from raw medical imaging data about the inner structures of a patient.
This explicit grasp on both the patient's anatomy and their appearance is missing in current medical image-processing systems.
- Score: 16.5295886999348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In clinical radiology reports, doctors capture important information about
the patient's health status. They convey their observations from raw medical
imaging data about the inner structures of a patient. As such, formulating
reports requires medical experts to possess wide-ranging knowledge about
anatomical regions with their normal, healthy appearance as well as the ability
to recognize abnormalities. This explicit grasp on both the patient's anatomy
and their appearance is missing in current medical image-processing systems as
annotations are especially difficult to gather. This renders the models narrow
experts, e.g., for identifying specific diseases. In this work, we recover
this missing link by adding human anatomy into the mix and enable the
association of content in medical reports to their occurrence in associated
imagery (medical phrase grounding). To exploit anatomical structures in this
scenario, we present a sophisticated automatic pipeline to gather and integrate
human bodily structures from computed tomography datasets, which we incorporate
in our PAXRay: A Projected dataset for the segmentation of Anatomical
structures in X-Ray data. Our evaluation shows that methods that take advantage
of anatomical information benefit heavily in visually grounding radiologists'
findings, as our anatomical segmentations allow for up to a 50% absolute
improvement in grounding results on the OpenI dataset compared to commonly used region
proposals. The PAXRay dataset is available at
https://constantinseibold.github.io/paxray/.
Related papers
- Anatomy-guided Pathology Segmentation [56.883822515800205]
We develop a generalist segmentation model that combines anatomical and pathological information, aiming to enhance the segmentation accuracy of pathological features.
Our Anatomy-Pathology Exchange (APEx) training utilizes a query-based segmentation transformer which decodes a joint feature space into query-representations for human anatomy.
In doing so, we are able to report the best results across the board on FDG-PET-CT and Chest X-Ray pathology segmentation tasks with a margin of up to 3.3% as compared to strong baseline methods.
arXiv Detail & Related papers (2024-07-08T11:44:15Z)
- Grounded Knowledge-Enhanced Medical VLP for Chest X-Ray [12.239249676716247]
Medical vision-language pre-training has emerged as a promising approach for learning domain-general representations of medical images and text.
We propose a grounded knowledge-enhanced medical vision-language pre-training framework for chest X-ray.
Our results show the advantage of incorporating a grounding mechanism to remove biases and improve the alignment between chest X-ray images and radiology reports.
arXiv Detail & Related papers (2024-04-23T05:16:24Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Beyond Images: An Integrative Multi-modal Approach to Chest X-Ray Report Generation [47.250147322130545]
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images.
Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists.
We present a novel multi-modal deep neural network framework for generating chest X-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes.
arXiv Detail & Related papers (2023-11-18T14:37:53Z)
- Act Like a Radiologist: Radiology Report Generation across Anatomical Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework spanning six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking it down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
arXiv Detail & Related papers (2023-05-26T07:12:35Z)
- Self adaptive global-local feature enhancement for radiology report generation [10.958641951927817]
We propose a novel framework, AGFNet, that dynamically fuses global and anatomy-region features to generate multi-grained radiology reports.
First, we extract important anatomy-region features and global features from the input chest X-ray (CXR).
Then, with the region features and the global features as input, our self-adaptive fusion gate module dynamically fuses multi-granularity information.
Finally, the captioning generator produces the radiology report from the multi-granularity features.
arXiv Detail & Related papers (2022-11-21T11:50:42Z)
- Improving Radiology Summarization with Radiograph and Anatomy Prompts [60.30659124918211]
We propose a novel anatomy-enhanced multimodal model to promote impression generation.
In detail, we first construct a set of rules to extract anatomies and insert these prompts into each sentence to highlight anatomical characteristics.
We utilize a contrastive learning module to align the two representations at the overall level and a co-attention module to fuse them at the sentence level.
arXiv Detail & Related papers (2022-10-15T14:05:03Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, and on the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
- Human Treelike Tubular Structure Segmentation: A Comprehensive Review and Future Perspectives [8.103169967374944]
Many structures in human physiology follow a treelike morphology, which often expresses complexity at very fine scales.
Large collections of 2D and 3D images have been made available by medical imaging modalities.
Analysis of these structures provides insights into disease diagnosis, treatment planning, and prognosis.
arXiv Detail & Related papers (2022-07-12T17:01:42Z)
- SQUID: Deep Feature In-Painting for Unsupervised Anomaly Detection [76.01333073259677]
We propose the use of Space-aware Memory Queues for In-painting and Detecting anomalies from radiography images (abbreviated as SQUID).
We show that SQUID can taxonomize ingrained anatomical structures into recurrent patterns; at inference, it can identify anomalies (unseen or modified patterns) in the image.
arXiv Detail & Related papers (2021-11-26T13:47:34Z)
- Extracting Radiological Findings With Normalized Anatomical Information Using a Span-Based BERT Relation Extraction Model [0.20999222360659603]
Medical imaging reports distill the findings and observations of radiologists.
Large-scale use of this text-encoded information requires converting the unstructured text to a structured, semantic representation.
We explore the extraction and normalization of anatomical information in radiology reports that is associated with radiological findings.
arXiv Detail & Related papers (2021-08-20T15:02:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.