Creation and Validation of a Chest X-Ray Dataset with Eye-tracking and
Report Dictation for AI Development
- URL: http://arxiv.org/abs/2009.07386v3
- Date: Thu, 8 Oct 2020 05:54:40 GMT
- Title: Creation and Validation of a Chest X-Ray Dataset with Eye-tracking and
Report Dictation for AI Development
- Authors: Alexandros Karargyris, Satyananda Kashyap, Ismini Lourentzou, Joy Wu,
Arjun Sharma, Matthew Tong, Shafiq Abedin, David Beymer, Vandana Mukherjee,
Elizabeth A Krupinski, Mehdi Moradi
- Abstract summary: We developed a rich dataset of Chest X-Ray (CXR) images to assist investigators in artificial intelligence research.
The data were collected using an eye tracking system while a radiologist reviewed and reported on 1,083 CXR images.
- Score: 47.1152650685625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We developed a rich dataset of Chest X-Ray (CXR) images to assist
investigators in artificial intelligence research. The data were collected
using an eye tracking system while a radiologist reviewed and reported on 1,083
CXR images. The dataset contains the following aligned data: CXR image,
transcribed radiology report text, radiologist's dictation audio, and eye gaze
coordinate data. We hope this dataset can contribute to various areas of
research, particularly explainable and multimodal deep learning / machine
learning methods. Furthermore, investigators in disease classification and
localization, automated radiology report generation, and human-machine
interaction can benefit from these data. We report deep learning experiments
that utilize the attention maps produced from the eye gaze data to show the
potential utility of this dataset.
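The abstract mentions deep learning experiments driven by attention maps derived from the eye gaze data. As a rough illustration of that idea, the sketch below rasterizes gaze fixations into a Gaussian-smoothed heatmap; the fixation tuple format, image size, and smoothing sigma are assumptions for illustration, not the dataset's actual schema or the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_to_attention_map(fixations, image_shape, sigma=25.0):
    """Rasterize gaze fixations into a normalized attention heatmap.

    fixations: iterable of (x, y, duration_seconds) tuples -- an assumed
    format; the released dataset's actual schema may differ.
    image_shape: (height, width) of the CXR image.
    """
    heatmap = np.zeros(image_shape, dtype=np.float32)
    h, w = image_shape
    for x, y, duration in fixations:
        col, row = int(round(x)), int(round(y))
        if 0 <= row < h and 0 <= col < w:
            # Weight each fixation by how long the radiologist dwelt on it.
            heatmap[row, col] += duration
    # Smooth point fixations into a continuous saliency-style map.
    heatmap = gaussian_filter(heatmap, sigma=sigma)
    if heatmap.max() > 0:
        heatmap /= heatmap.max()  # normalize to [0, 1]
    return heatmap

# Example usage with made-up fixations on a 1024x1024 CXR:
attention = gaze_to_attention_map(
    [(512.3, 400.7, 0.8), (530.1, 420.2, 1.2)], (1024, 1024)
)
```

A map like this can be compared against (or used to supervise) a model's saliency output, which is one way gaze data can support explainability research.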
Related papers
- Shadow and Light: Digitally Reconstructed Radiographs for Disease Classification [8.192975020366777]
DRR-RATE comprises 50,188 frontal Digitally Reconstructed Radiographs (DRRs) from 21,304 unique patients.
Each image is paired with a corresponding radiology text report and binary labels for 18 pathology classes.
We demonstrate the applicability of DRR-RATE alongside existing large-scale chest X-ray resources, notably the CheXpert dataset and CheXnet model.
arXiv Detail & Related papers (2024-06-06T02:19:18Z)
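Since each DRR-RATE image carries binary labels for 18 pathology classes, the natural training setup is multi-label classification with a per-class sigmoid. A minimal sketch, where the tiny backbone and input size are placeholders rather than the CheXpert/CheXnet-style models the paper benchmarks against:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 18  # one binary label per pathology class in DRR-RATE

# Placeholder backbone for illustration only.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=7, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, NUM_CLASSES),  # one logit per pathology class
)

# Multi-label setup: sigmoid per class with binary cross-entropy,
# rather than a single softmax over mutually exclusive classes.
criterion = nn.BCEWithLogitsLoss()

images = torch.randn(4, 1, 224, 224)          # batch of frontal DRRs
labels = torch.randint(0, 2, (4, NUM_CLASSES)).float()
loss = criterion(model(images), labels)
```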
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
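The architecture described above (CNN visual features combined with demographic text embeddings in a transformer encoder-decoder) could look roughly like the following sketch; the dimensions, the patch-style CNN, and fusion by concatenating demographic tokens into the encoder input are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultimodalReportGenerator(nn.Module):
    """Illustrative sketch: CNN image features plus demographic embeddings
    feed a transformer encoder-decoder that emits report tokens."""
    def __init__(self, vocab_size=10000, d_model=256):
        super().__init__()
        # CNN extracts a grid of visual features from the CXR.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=16, stride=16), nn.ReLU()
        )
        # Demographics (e.g., age bucket, sex) embedded as extra tokens.
        self.demo_embed = nn.Embedding(64, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, batch_first=True,
            num_encoder_layers=2, num_decoder_layers=2
        )
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, image, demo_ids, report_tokens):
        vis = self.cnn(image).flatten(2).transpose(1, 2)  # (B, patches, d)
        demo = self.demo_embed(demo_ids)                  # (B, n_demo, d)
        memory_input = torch.cat([vis, demo], dim=1)      # fuse modalities
        tgt = self.token_embed(report_tokens)
        hidden = self.transformer(memory_input, tgt)
        return self.out(hidden)  # next-token logits for the report

model = MultimodalReportGenerator()
logits = model(torch.randn(2, 1, 224, 224),
               torch.randint(0, 64, (2, 3)),
               torch.randint(0, 10000, (2, 20)))
```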
- Act Like a Radiologist: Radiology Report Generation across Anatomical Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework across six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking it down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
arXiv Detail & Related papers (2023-05-26T07:12:35Z)
- Using Multi-modal Data for Improving Generalizability and Explainability of Disease Classification in Radiology [0.0]
Traditional datasets for radiological diagnosis tend to provide only the radiology image alongside the radiology report.
This paper utilizes the recently published Eye-Gaze dataset to perform an exhaustive study of the impact of multi-modal inputs on the performance and explainability of deep learning (DL) classification.
We find that the best classification performance of X-ray images is achieved with a combination of radiology report free-text and radiology image, with the eye-gaze data providing no performance boost.
arXiv Detail & Related papers (2022-07-29T16:49:05Z)
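The reported result (report free-text plus image beats either modality alone, with eye gaze adding no boost) corresponds to a multi-modal fusion classifier. A minimal late-fusion sketch, where both encoders and the concatenation strategy are assumptions for illustration:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Illustrative late fusion of a CXR image and its report free-text.
    The specific encoders are assumptions, not the paper's exact models."""
    def __init__(self, vocab_size=10000, d=128, num_classes=3):
        super().__init__()
        self.image_enc = nn.Sequential(
            nn.Conv2d(1, d, kernel_size=16, stride=16), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten()
        )
        # Bag-of-embeddings text encoder for the report free-text.
        self.text_enc = nn.EmbeddingBag(vocab_size, d)
        self.head = nn.Linear(2 * d, num_classes)

    def forward(self, image, report_token_ids):
        z_img = self.image_enc(image)
        z_txt = self.text_enc(report_token_ids)
        # Concatenate the two modality embeddings before classification.
        return self.head(torch.cat([z_img, z_txt], dim=1))

model = LateFusionClassifier()
logits = model(torch.randn(2, 1, 224, 224),
               torch.randint(0, 10000, (2, 50)))
```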
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
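The augmentation scheme above, generating a target-domain X-ray from a source image while preserving the patient's identity, can be sketched with a conditional generator plus an identity-preservation term; the tiny architectures and loss weighting below are simplified assumptions, not the paper's networks.

```python
import torch
import torch.nn as nn

# Illustrative conditional generator: maps a source-domain X-ray to a
# target-domain (e.g., diseased) version of the same patient.
generator = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
)

# Patch-style discriminator scoring real vs. generated target images.
discriminator = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)

source = torch.randn(2, 1, 128, 128)   # source-domain image
fake_target = generator(source)

# An identity-preservation term keeps the generated image close to the
# source anatomy (a simplification of "preserving the identity of the
# patient").
identity_loss = nn.functional.l1_loss(fake_target, source)
adv_loss = -discriminator(fake_target).mean()  # fool the discriminator
g_loss = adv_loss + 10.0 * identity_loss

# The generated target-domain images can then augment detector training.
```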
- RadGraph: Extracting Clinical Entities and Relations from Radiology Reports [6.419031003699479]
RadGraph is a dataset of entities and relations in full-text chest X-ray radiology reports.
Our dataset can facilitate a wide range of research in medical natural language processing, as well as computer vision and multi-modal learning when linked to chest radiographs.
arXiv Detail & Related papers (2021-06-28T08:24:23Z)
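A RadGraph record pairs a report with annotated entities and typed relations between them. A hypothetical in-memory representation (field and label names here are illustrative; consult the RadGraph release for the actual annotation schema):

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    span: str            # text span in the report, e.g. "opacity"
    label: str           # entity type, e.g. "Observation"

@dataclass
class Relation:
    head: Entity
    tail: Entity
    label: str           # relation type, e.g. "located_at"

@dataclass
class ReportGraph:
    report_text: str
    entities: list[Entity] = field(default_factory=list)
    relations: list[Relation] = field(default_factory=list)

# Toy example linking an observation to an anatomical location.
lung = Entity("left lung", "Anatomy")
opacity = Entity("opacity", "Observation")
graph = ReportGraph(
    "Opacity in the left lung.",
    entities=[opacity, lung],
    relations=[Relation(opacity, lung, "located_at")],
)
```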
- Exploring and Distilling Posterior and Prior Knowledge for Radiology Report Generation [55.00308939833555]
The PPKED includes three modules: Posterior Knowledge Explorer (PoKE), Prior Knowledge Explorer (PrKE), and Multi-domain Knowledge Distiller (MKD).
PoKE explores the posterior knowledge, which provides explicit abnormal visual regions to alleviate visual data bias.
PrKE explores the prior knowledge from the prior medical knowledge graph (medical knowledge) and prior radiology reports (working experience) to alleviate textual data bias.
arXiv Detail & Related papers (2021-06-13T11:10:02Z)
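The three modules read as a pipeline: PoKE yields posterior (abnormal-region) features from the image, PrKE yields prior-knowledge features from the knowledge graph and previous reports, and MKD distills both into a representation that conditions report generation. A very rough composition sketch, with placeholder internals that are assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class PPKEDSketch(nn.Module):
    """Rough composition sketch of PoKE / PrKE / MKD; the internals of
    each module here are placeholders, not the paper's architecture."""
    def __init__(self, d=128, vocab_size=10000):
        super().__init__()
        self.poke = nn.Linear(d, d)     # posterior: abnormal visual regions
        self.prke = nn.Linear(d, d)     # prior: knowledge graph + reports
        self.mkd = nn.Linear(2 * d, d)  # distills both knowledge sources
        self.decoder = nn.Linear(d, vocab_size)

    def forward(self, visual_feats, prior_feats):
        posterior = torch.relu(self.poke(visual_feats))
        prior = torch.relu(self.prke(prior_feats))
        fused = torch.relu(self.mkd(torch.cat([posterior, prior], dim=-1)))
        return self.decoder(fused)  # token logits for report generation

model = PPKEDSketch()
logits = model(torch.randn(2, 128), torch.randn(2, 128))
```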
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model that performs well on test data drawn from its training dataset can perform poorly when tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
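Source-invariant representations via adversarial training are commonly implemented with a gradient reversal layer feeding a source (domain) classifier; the sketch below assumes that formulation, which may differ from the paper's exact setup.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

features = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
disease_head = nn.Linear(128, 2)   # main task: disease classification
source_head = nn.Linear(128, 3)    # adversary: which dataset/source?

x = torch.randn(4, 1, 64, 64)
z = features(x)
task_loss = nn.functional.cross_entropy(
    disease_head(z), torch.randint(0, 2, (4,)))
# Reversed gradients push the encoder to *remove* source information,
# while the source head itself still learns to predict the source.
adv_loss = nn.functional.cross_entropy(
    source_head(GradReverse.apply(z)), torch.randint(0, 3, (4,)))
(task_loss + adv_loss).backward()
```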
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.