Automated Labelling using an Attention model for Radiology reports of
MRI scans (ALARM)
- URL: http://arxiv.org/abs/2002.06588v1
- Date: Sun, 16 Feb 2020 15:04:52 GMT
- Title: Automated Labelling using an Attention model for Radiology reports of
MRI scans (ALARM)
- Authors: David A. Wood, Jeremy Lynch, Sina Kafiabadi, Emily Guilhem, Aisha Al
Busaidi, Antanas Montvila, Thomas Varsavsky, Juveria Siddiqui, Naveen Gadapa,
Matthew Townend, Martin Kiik, Keena Patel, Gareth Barker, Sebastian Ourselin,
James H. Cole, Thomas C. Booth
- Abstract summary: We present a transformer-based network for magnetic resonance imaging (MRI) radiology report classification.
Our model's performance is comparable to that of an expert radiologist, and better than that of an expert physician.
We make code available online for researchers to label their own MRI datasets for medical imaging applications.
- Score: 0.8163463207064016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Labelling large datasets for training high-capacity neural networks is a
major obstacle to the development of deep learning-based medical imaging
applications. Here we present a transformer-based network for magnetic
resonance imaging (MRI) radiology report classification which automates this
task by assigning image labels on the basis of free-text expert radiology
reports. Our model's performance is comparable to that of an expert
radiologist, and better than that of an expert physician, demonstrating the
feasibility of this approach. We make code available online for researchers to
label their own MRI datasets for medical imaging applications.
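The approach described in the abstract amounts to fine-tuning a transformer text classifier on expert-labelled reports and then transferring each predicted report label to the scan it describes. Below is a minimal sketch of that idea using the Hugging Face Transformers library; the checkpoint name, binary label scheme, and toy reports are assumptions for illustration, not the authors' released implementation (which the abstract notes is available online).

```python
# Minimal sketch of transformer-based report classification in the spirit of
# ALARM. The checkpoint, the binary label scheme, and the toy reports are
# illustrative assumptions, not the authors' released code.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # placeholder; a biomedical checkpoint would be a natural choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy training data: free-text report snippets with scan-level labels
# (0 = normal, 1 = abnormal).
reports = ["No acute intracranial abnormality.",
           "Large right MCA territory infarct with mass effect."]
labels = torch.tensor([0, 1])

enc = tokenizer(reports, padding=True, truncation=True, max_length=512, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for input_ids, attention_mask, y in loader:
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()   # cross-entropy on the report-level label
        optimizer.step()
        optimizer.zero_grad()

# Inference: the predicted report label is propagated to the MRI scan it describes.
model.eval()
with torch.no_grad():
    probe = tokenizer("Mild chronic small vessel ischaemic change.", return_tensors="pt")
    print(model(**probe).logits.softmax(-1))
```

In the paper's setting, the training labels come from expert-annotated reports, and the trained classifier is then used to label the remaining reports, and hence the corresponding images, automatically.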
Related papers
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
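(A minimal, illustrative sketch of this image-plus-non-imaging fusion pattern is given after this related-papers list.)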
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- fMRI-PTE: A Large-scale fMRI Pretrained Transformer Encoder for Multi-Subject Brain Activity Decoding [54.17776744076334]
We propose fMRI-PTE, an innovative auto-encoder approach for fMRI pre-training.
Our approach involves transforming fMRI signals into unified 2D representations, ensuring consistency in dimensions and preserving brain activity patterns.
Our contributions encompass introducing fMRI-PTE, innovative data transformation, efficient training, a novel learning strategy, and the universal applicability of our approach.
arXiv Detail & Related papers (2023-11-01T07:24:22Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
Efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Cross-modal Memory Networks for Radiology Report Generation [30.13916304931662]
Cross-modal memory networks (CMN) are proposed to enhance the encoder-decoder framework for radiology report generation.
Our model is able to better align information from radiology images and texts, helping to generate reports that are more accurate in terms of clinical indicators.
arXiv Detail & Related papers (2022-04-28T02:32:53Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at the image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Generating Radiology Reports via Memory-driven Transformer [38.30011851429407]
We propose to generate radiology reports with a memory-driven Transformer.
Experimental results are reported on two prevailing radiology report datasets, IU X-Ray and MIMIC-CXR.
arXiv Detail & Related papers (2020-10-30T04:08:03Z)
- Paying Per-label Attention for Multi-label Extraction from Radiology Reports [1.9601378412924186]
We tackle the automated extraction of structured labels from head CT reports for imaging of suspected stroke patients.
We propose a set of 31 labels which correspond to radiographic findings and clinical impressions related to neurological abnormalities.
We are able to robustly extract many labels with a single model, classified according to the radiologist's reporting.
arXiv Detail & Related papers (2020-07-31T16:11:09Z)
- Labelling imaging datasets on the basis of neuroradiology reports: a validation study [0.3871995016053975]
We show that, in our experience, assigning binary labels to images from reports alone is highly accurate.
In contrast to the binary labels, however, the accuracy of more granular labelling is dependent on the category.
We also show that downstream model performance is reduced when labelling of training reports is performed by a non-specialist.
arXiv Detail & Related papers (2020-07-08T16:12:10Z)
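As flagged above, the "Radiology Report Generation Using Transformers Conditioned with Non-imaging Data" entry describes a concrete fusion pattern: CNN features extracted from the radiograph are combined with an embedding of patient demographic information before a Transformer decoder generates the report. The PyTorch sketch below illustrates that general pattern only; the ResNet backbone, dimensions, vocabulary size, and the ConditionedReportGenerator class are assumptions for illustration, not the cited paper's architecture.

```python
# Minimal sketch (assumptions throughout, not the cited paper's implementation):
# fuse CNN image features with an embedding of non-imaging data before a
# Transformer decoder that predicts report tokens.
import torch
import torch.nn as nn
import torchvision.models as models

class ConditionedReportGenerator(nn.Module):
    def __init__(self, vocab_size=10000, d_model=512, demo_dim=8):
        super().__init__()
        backbone = models.resnet18(weights=None)                    # visual encoder (choice is an assumption)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # keep the spatial feature map
        self.img_proj = nn.Linear(512, d_model)                     # project CNN channels to d_model
        self.demo_proj = nn.Linear(demo_dim, d_model)               # embed demographics (age, sex, ...)
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, demographics, report_tokens):
        # image: (B, 3, H, W); demographics: (B, demo_dim); report_tokens: (B, T)
        feat = self.cnn(image)                              # (B, 512, h, w)
        feat = feat.flatten(2).transpose(1, 2)              # (B, h*w, 512) visual tokens
        memory = self.img_proj(feat)
        demo = self.demo_proj(demographics).unsqueeze(1)    # (B, 1, d_model)
        memory = torch.cat([memory, demo], dim=1)           # condition on non-imaging data
        tgt = self.tok_emb(report_tokens)
        T = report_tokens.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(out)                            # next-token logits

model = ConditionedReportGenerator()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 8), torch.randint(0, 10000, (2, 16)))
print(logits.shape)  # torch.Size([2, 16, 10000])
```

Concatenating the demographic embedding with the visual tokens is one simple way to condition the decoder; the cited work may combine the modalities differently.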
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.