Paying Per-label Attention for Multi-label Extraction from Radiology Reports
- URL: http://arxiv.org/abs/2007.16152v3
- Date: Fri, 7 Aug 2020 17:08:51 GMT
- Authors: Patrick Schrempf, Hannah Watson, Shadia Mikhael, Maciej Pajak,
Matúš Falis, Aneta Lisowska, Keith W. Muir, David Harris-Birtill,
Alison Q. O'Neil
- Abstract summary: We tackle the automated extraction of structured labels from head CT reports for imaging of suspected stroke patients.
We propose a set of 31 labels which correspond to radiographic findings and clinical impressions related to neurological abnormalities.
We are able to robustly extract many labels with a single model, classified according to the radiologist's reporting.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training medical image analysis models requires large amounts of
expertly annotated data, which is time-consuming and expensive to obtain.
Images are often accompanied by free-text radiology reports, which are a rich
source of information. In this paper, we tackle the automated extraction of structured
labels from head CT reports for imaging of suspected stroke patients, using
deep learning. Firstly, we propose a set of 31 labels which correspond to
radiographic findings (e.g. hyperdensity) and clinical impressions (e.g.
haemorrhage) related to neurological abnormalities. Secondly, inspired by
previous work, we extend existing state-of-the-art neural network models with a
label-dependent attention mechanism. Using this mechanism and simple synthetic
data augmentation, we are able to robustly extract many labels with a single
model, classified according to the radiologist's reporting (positive,
uncertain, negative). This approach can be used in further research to
effectively extract many labels from medical text.
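The label-dependent attention mechanism described in the abstract can be sketched as follows. This is a minimal illustration only: the dimensions, variable names, and per-label classifier heads are assumptions for exposition, not the authors' implementation. Each of the 31 labels learns its own query vector, attends over the token encodings of a report, and classifies its attended representation as positive, uncertain, or negative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def per_label_attention(H, Q):
    """H: (T, d) token encodings of one report; Q: (L, d) one learned
    query per label. Returns (L, d): one attended representation per label."""
    scores = Q @ H.T              # (L, T): relevance of each token to each label
    alpha = softmax(scores, axis=1)
    return alpha @ H              # (L, d): label-specific weighted sums

rng = np.random.default_rng(0)
T, d, L, C = 6, 4, 31, 3          # tokens, hidden dim, labels, classes
H = rng.normal(size=(T, d))       # stand-in for an encoder's token outputs
Q = rng.normal(size=(L, d))       # label-specific attention queries
W = rng.normal(size=(L, d, C))    # one classifier head per label

V = per_label_attention(H, Q)             # (31, 4)
logits = np.einsum('ld,ldc->lc', V, W)    # (31, 3)
probs = softmax(logits, axis=1)           # per-label positive/uncertain/negative
```

The key design point is that attention weights differ per label, so the same report sentence can dominate the representation for one finding (e.g. hyperdensity) while being ignored for another.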
Related papers
- Automated Spinal MRI Labelling from Reports Using a Large Language Model [45.348320669329205]
We propose a pipeline to automate the extraction of labels from radiology reports using large language models.
Our method equals or surpasses GPT-4 on a held-out set of reports.
We show that the extracted labels can be used to train imaging models to classify the identified conditions in the accompanying MR scans.
arXiv Detail & Related papers (2024-10-22T17:54:07Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Replace and Report: NLP Assisted Radiology Report Generation [31.309987297324845]
We propose a template-based approach to generate radiology reports from radiographs.
This is the first attempt to generate chest X-ray radiology reports by first creating small sentences for abnormal findings and then replacing them in the normal report template.
arXiv Detail & Related papers (2023-06-19T10:04:42Z)
- Automated Labeling of German Chest X-Ray Radiology Reports using Deep Learning [50.591267188664666]
We propose a deep learning-based CheXpert label prediction model, pre-trained on reports labeled by a rule-based German CheXpert model.
Our results demonstrate the effectiveness of our approach, which significantly outperformed the rule-based model on all three tasks.
arXiv Detail & Related papers (2023-06-09T16:08:35Z)
- Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report Generation [92.73584302508907]
We propose a knowledge graph with Dynamic structure and nodes to facilitate medical report generation with Contrastive Learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its very own updated graph before being fed into the decoder module for report generation.
arXiv Detail & Related papers (2023-03-18T03:53:43Z)
- Self-Supervised Learning as a Means To Reduce the Need for Labeled Data in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z)
- Extracting and Learning Fine-Grained Labels from Chest Radiographs [0.157292030677369]
We focus on extracting and learning fine-grained labels for chest X-ray images.
A total of 457 fine-grained labels depicting the largest spectrum of findings to date were selected.
We show results that indicate a highly accurate label extraction process and a reliable learning of fine-grained labels.
arXiv Detail & Related papers (2020-11-18T19:56:08Z)
- Labelling imaging datasets on the basis of neuroradiology reports: a validation study [0.3871995016053975]
We show that, in our experience, assigning binary labels to images from reports alone is highly accurate.
In contrast to the binary labels, however, the accuracy of more granular labelling is dependent on the category.
We also show that downstream model performance is reduced when labelling of training reports is performed by a non-specialist.
arXiv Detail & Related papers (2020-07-08T16:12:10Z)
- Auxiliary Signal-Guided Knowledge Encoder-Decoder for Medical Report Generation [107.3538598876467]
We propose an Auxiliary Signal-Guided Knowledge Encoder-Decoder (ASGK) to mimic radiologists' working patterns.
ASGK integrates internal visual feature fusion and external medical linguistic information to guide medical knowledge transfer and learning.
arXiv Detail & Related papers (2020-06-06T01:00:15Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging the prediction consistency of given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
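The prediction-consistency idea above can be illustrated with a minimal sketch. All names here, and the mean-squared-difference form of the penalty, are assumptions for illustration, not the paper's exact loss: the model is encouraged to produce similar class distributions for two perturbed views of the same unlabeled input.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax over class logits."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    # Penalise disagreement between predictions for two perturbed views
    # (e.g. different augmentations or dropout masks) of the same input.
    return float(np.mean((softmax(logits_a) - softmax(logits_b)) ** 2))

rng = np.random.default_rng(1)
logits = rng.normal(size=(8, 5))                     # batch of 8, 5 classes
noisy = logits + 0.1 * rng.normal(size=logits.shape) # a perturbed view
loss = consistency_loss(logits, noisy)
```

Because the loss needs no ground-truth labels, it can be computed on unlabeled examples and added to a standard supervised loss on the labeled subset.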
arXiv Detail & Related papers (2020-05-15T06:57:54Z)
- Automated Labelling using an Attention model for Radiology reports of MRI scans (ALARM) [0.8163463207064016]
We present a transformer-based network for magnetic resonance imaging (MRI) radiology report classification.
Our model's performance is comparable to that of an expert radiologist, and better than that of an expert physician.
We make code available online for researchers to label their own MRI datasets for medical imaging applications.
arXiv Detail & Related papers (2020-02-16T15:04:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.