Localization supervision of chest x-ray classifiers using label-specific
eye-tracking annotation
- URL: http://arxiv.org/abs/2207.09771v1
- Date: Wed, 20 Jul 2022 09:26:29 GMT
- Title: Localization supervision of chest x-ray classifiers using label-specific
eye-tracking annotation
- Authors: Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen
- Abstract summary: Eye-tracking (ET) data can be collected in a non-intrusive way during the clinical workflow of a radiologist.
We use ET data recorded from radiologists while dictating CXR reports to train CNNs.
We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of abnormalities.
- Score: 4.8035104863603575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) have been successfully applied to chest
x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to
improve the interpretability of a CNN in terms of localizing abnormalities.
However, only a few relatively small CXR datasets containing bounding boxes are
available, and collecting them is very costly. Opportunely, eye-tracking (ET)
data can be collected in a non-intrusive way during the clinical workflow of a
radiologist. We use ET data recorded from radiologists while dictating CXR
reports to train CNNs. We extract snippets from the ET data by associating them
with the dictation of keywords and use them to supervise the localization of
abnormalities. We show that this method improves a model's interpretability
without impacting its image-level classification.
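As a rough illustration of the idea in the abstract, the sketch below (PyTorch) renders fixations that overlap a keyword's dictation window into a Gaussian heatmap and uses it as an auxiliary localization target next to the image-level classification loss. The fixation format, window padding, map resolution, and loss weighting are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): turn eye-tracking snippets that co-occur
# with a dictated keyword into a heatmap, then use it as extra localization
# supervision next to the usual image-level classification loss.
import torch
import torch.nn.functional as F

def keyword_heatmap(fixations, t_start, t_end, hw=(16, 16), sigma=1.5, pad=0.5):
    """fixations: list of (x, y, t0, t1) with x, y in [0, 1] image coordinates.
    Keeps fixations overlapping the keyword's dictation window [t_start, t_end]
    (padded by `pad` seconds) and renders them as a Gaussian heatmap of size hw."""
    H, W = hw
    heat = torch.zeros(H, W)
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    for x, y, t0, t1 in fixations:
        if t1 < t_start - pad or t0 > t_end + pad:
            continue  # fixation not associated with this keyword
        d2 = (xs - x * (W - 1)) ** 2 + (ys - y * (H - 1)) ** 2
        heat += torch.exp(-d2 / (2 * sigma ** 2)) * max(t1 - t0, 1e-3)
    return heat / heat.max() if heat.max() > 0 else heat

def joint_loss(logits, labels, pred_maps, et_maps, lam=1.0):
    """Image-level BCE plus a localization term on labels that have an ET heatmap.
    pred_maps/et_maps: (B, C, H, W); et_maps entries are NaN where no ET data exists."""
    cls = F.binary_cross_entropy_with_logits(logits, labels)
    mask = ~torch.isnan(et_maps)
    loc = F.mse_loss(torch.sigmoid(pred_maps)[mask], et_maps[mask]) if mask.any() else 0.0
    return cls + lam * loc
```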
Related papers
- OOOE: Only-One-Object-Exists Assumption to Find Very Small Objects in Chest Radiographs [9.226276232505734]
Many foreign objects like tubes and various anatomical structures are small in comparison to the entire chest X-ray.
We present a simple yet effective 'Only-One-Object-Exists' (OOOE) assumption to improve the deep network's ability to localize small landmarks in chest radiographs.
arXiv Detail & Related papers (2022-10-13T07:37:33Z)
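A hedged sketch of what an "only one object exists" decoding can look like: if exactly one small target is assumed present, its location can be read out as the expectation of a spatial softmax over a score map rather than by thresholding a mask. This illustrates the assumption only; it is not the paper's exact formulation.

```python
import torch

def soft_argmax_2d(score_map, temperature=1.0):
    """score_map: (B, H, W) unnormalized scores. Returns (B, 2) normalized (x, y)."""
    B, H, W = score_map.shape
    probs = torch.softmax(score_map.view(B, -1) / temperature, dim=-1).view(B, H, W)
    ys = torch.linspace(0, 1, H, device=score_map.device)
    xs = torch.linspace(0, 1, W, device=score_map.device)
    y = (probs.sum(dim=2) * ys).sum(dim=1)  # marginal over rows -> expected y
    x = (probs.sum(dim=1) * xs).sum(dim=1)  # marginal over cols -> expected x
    return torch.stack([x, y], dim=-1)

coords = soft_argmax_2d(torch.randn(4, 32, 32))  # e.g., supervise with an L1 loss to GT coordinates
```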
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention, and, in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while ViTs and CNNs perform on par, with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
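To make the patch-based self-attention point concrete, a minimal multi-label ViT/DeiT setup might look as follows; the timm model name, the 14-finding sigmoid head, and the loss are assumptions, not the paper's configuration.

```python
import timm
import torch
import torch.nn.functional as F

# Patch-based self-attention backbone with a multi-label head (set pretrained=True to load weights).
model = timm.create_model("deit_base_patch16_224", pretrained=False, num_classes=14)
images = torch.randn(2, 3, 224, 224)           # CXRs replicated to 3 channels
labels = torch.randint(0, 2, (2, 14)).float()  # multi-label targets (14 findings assumed)

logits = model(images)                          # no convolutions: patch embedding + self-attention
loss = F.binary_cross_entropy_with_logits(logits, labels)  # independent sigmoid per finding
```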
- Radiomics-Guided Global-Local Transformer for Weakly Supervised Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z)
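A rough sketch of the two-branch-plus-fusion layout described above; token dimensions, the number of layers, and concatenation-based fusion are assumptions rather than the RGT architecture's actual details.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    def __init__(self, dim=256, num_classes=14):
        super().__init__()
        enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True), num_layers=2)
        self.image_branch = enc()      # operates on patch tokens from the image
        self.radiomics_branch = enc()  # operates on tokens built from radiomic features
        self.fusion = enc()            # fusion layers over the concatenated token set
        self.head = nn.Linear(dim, num_classes)

    def forward(self, img_tokens, rad_tokens):  # (B, N_img, dim), (B, N_rad, dim)
        fused = torch.cat([self.image_branch(img_tokens),
                           self.radiomics_branch(rad_tokens)], dim=1)
        return self.head(self.fusion(fused).mean(dim=1))  # pooled tokens -> finding logits

logits = TwoBranchFusion()(torch.randn(2, 49, 256), torch.randn(2, 8, 256))
```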
- Anatomy-Guided Weakly-Supervised Abnormality Localization in Chest X-rays [17.15666977702355]
We propose an Anatomy-Guided chest X-ray Network (AGXNet) to address weak annotation issues.
Our framework consists of a cascade of two networks, one responsible for identifying anatomical abnormalities and the second responsible for pathological observations.
Our results on the MIMIC-CXR dataset demonstrate the effectiveness of AGXNet in disease and anatomical abnormality localization.
arXiv Detail & Related papers (2022-06-25T18:33:27Z)
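A hedged sketch of a two-network cascade in this spirit: the first network produces spatial anatomical-abnormality maps, and a map derived from them gates the input of the second network, which predicts pathological observations. Backbones, the gating scheme, and class counts are assumptions, not AGXNet's design.

```python
import torch
import torch.nn as nn
import torchvision

def image_classifier(num_out):
    net = torchvision.models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_out)
    return net

class Cascade(nn.Module):
    def __init__(self, n_anatomy=46, n_pathology=14):
        super().__init__()
        res = torchvision.models.resnet18(weights=None)
        self.anatomy_features = nn.Sequential(*list(res.children())[:-2])  # (B, 512, h, w)
        self.anatomy_head = nn.Conv2d(512, n_anatomy, 1)     # spatial anatomical-abnormality maps
        self.pathology_net = image_classifier(n_pathology)   # stage 2: pathological observations

    def forward(self, x):                                      # x: (B, 3, H, W)
        amaps = self.anatomy_head(self.anatomy_features(x))    # (B, n_anatomy, h, w)
        anat_logits = amaps.amax(dim=(2, 3))                   # image-level anatomy predictions
        gate = torch.sigmoid(amaps).amax(dim=1, keepdim=True)  # where any anatomical region looks abnormal
        gate = nn.functional.interpolate(gate, size=x.shape[-2:], mode="bilinear", align_corners=False)
        return anat_logits, self.pathology_net(x * gate)       # stage 2 sees anatomy-gated input

anat_logits, path_logits = Cascade()(torch.rand(2, 3, 224, 224))
```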
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
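As a small illustration of a DWT front end, one could decompose the radiograph into subbands and stack them as input channels so that high-frequency detail is passed to the classifier explicitly; the wavelet choice and channel layout below are assumptions, not the paper's encoding.

```python
import numpy as np
import pywt

def dwt_channels(img: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """img: (H, W) grayscale CXR. Returns (4, H/2, W/2): approximation + 3 detail subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    return np.stack([cA, cH, cV, cD], axis=0).astype(np.float32)

x = dwt_channels(np.random.rand(224, 224))  # feed as a 4-channel input to a CNN
```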
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
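One way such backpropagation-free filter estimation can be sketched is to extract patches centered on the user-drawn markers, cluster them, and use the normalized centroids as convolution kernels; the patch size, cluster count, and normalization below are assumptions, not the paper's exact procedure.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def filters_from_markers(image, marker_coords, patch=5, n_filters=8):
    """image: (H, W) float array; marker_coords: list of (row, col) pixels drawn by the user."""
    r = patch // 2
    padded = np.pad(image, r, mode="reflect")
    patches = [padded[y:y + patch, x:x + patch].ravel() for y, x in marker_coords]
    centers = KMeans(n_clusters=n_filters, n_init=10).fit(np.array(patches)).cluster_centers_
    centers -= centers.mean(axis=1, keepdims=True)                   # zero-mean kernels
    centers /= np.linalg.norm(centers, axis=1, keepdims=True) + 1e-8  # unit-norm kernels
    return torch.tensor(centers, dtype=torch.float32).view(n_filters, 1, patch, patch)

# Apply the estimated kernels as a fixed convolutional layer:
kernels = filters_from_markers(np.random.rand(128, 128),
                               [(10, 12), (40, 41), (80, 90), (100, 30),
                                (20, 70), (60, 15), (90, 100), (30, 30)])
features = F.conv2d(torch.rand(1, 1, 128, 128), kernels, padding=2)
```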
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
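A very rough sketch of conditional, identity-preserving generation for augmentation: a generator conditioned on a target finding adds a residual to the source X-ray. The architecture below, and the omission of the adversarial training loop and discriminator, are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    def __init__(self, n_findings=14):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + n_findings, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())

    def forward(self, xray, target_label):  # (B, 1, H, W), (B, n_findings)
        cond = target_label[:, :, None, None].expand(-1, -1, *xray.shape[-2:])
        return xray + self.net(torch.cat([xray, cond], dim=1))  # identity-preserving residual

fake = ResidualGenerator()(torch.rand(2, 1, 64, 64), torch.zeros(2, 14))  # augment training with `fake`
```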
- REFLACX, a dataset of reports and eye-tracking data for localization of abnormalities in chest x-rays [22.548782080717096]
We propose a method for collecting implicit localization data using an eye tracker to capture gaze locations and a microphone to capture a dictation of a report.
The resulting REFLACX dataset was labeled by five radiologists and contains 3,032 synchronized sets of eye-tracking data and timestamped report transcriptions.
arXiv Detail & Related papers (2021-09-29T04:14:16Z)
- RIDnet: Radiologist-Inspired Deep Neural Network for Low-dose CT Denoising [10.101822678034393]
Low-dose computed tomography (LDCT) has been widely adopted in the early screening of lung cancer and COVID-19.
LDCT images inevitably suffer from the degradation problem caused by complex noises.
We propose a novel deep learning model named radiologist-inspired deep denoising network (RIDnet) to imitate the workflow of a radiologist reading LDCT images.
arXiv Detail & Related papers (2021-05-15T05:59:01Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.