VisualCheXbert: Addressing the Discrepancy Between Radiology Report
Labels and Image Labels
- URL: http://arxiv.org/abs/2102.11467v1
- Date: Tue, 23 Feb 2021 03:02:36 GMT
- Title: VisualCheXbert: Addressing the Discrepancy Between Radiology Report
Labels and Image Labels
- Authors: Saahil Jain, Akshay Smit, Steven QH Truong, Chanh DT Nguyen,
Minh-Thanh Huynh, Mudit Jain, Victoria A. Young, Andrew Y. Ng, Matthew P.
Lungren, Pranav Rajpurkar
- Abstract summary: We show that radiologists labeling reports significantly disagree with radiologists labeling chest X-ray images.
We develop and evaluate methods to produce labels from radiology reports that have better agreement with radiologists labeling images.
- Score: 4.865330207715854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic extraction of medical conditions from free-text radiology reports
is critical for supervising computer vision models to interpret medical images.
In this work, we show that radiologists labeling reports significantly disagree
with radiologists labeling corresponding chest X-ray images, which reduces the
quality of report labels as proxies for image labels. We develop and evaluate
methods to produce labels from radiology reports that have better agreement
with radiologists labeling images. Our best performing method, called
VisualCheXbert, uses a biomedically-pretrained BERT model to directly map from
a radiology report to the image labels, with a supervisory signal determined by
a computer vision model trained to detect medical conditions from chest X-ray
images. We find that VisualCheXbert outperforms an approach using an existing
radiology report labeler by an average F1 score of 0.14 (95% CI 0.12, 0.17). We
also find that VisualCheXbert better agrees with radiologists labeling chest
X-ray images than do radiologists labeling the corresponding radiology reports
by an average F1 score across several medical conditions of between 0.12 (95%
CI 0.09, 0.15) and 0.21 (95% CI 0.18, 0.24).
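The training setup the abstract describes can be illustrated with a short, hedged sketch: a biomedically-pretrained BERT encodes the report, and a multi-label head is trained against soft targets produced by a frozen chest X-ray classifier. The checkpoint name, the condition count, and the image-model interface below are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of the VisualCheXbert-style setup: a report labeler is
# supervised by a frozen image classifier rather than by report labels.
# Checkpoint name, NUM_CONDITIONS, and image_model are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUM_CONDITIONS = 14  # e.g. a CheXpert-style condition set (assumed)

class ReportLabeler(nn.Module):
    def __init__(self, checkpoint="emilyalsentzer/Bio_ClinicalBERT"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(checkpoint)
        self.head = nn.Linear(self.bert.config.hidden_size, NUM_CONDITIONS)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(out.last_hidden_state[:, 0])  # [CLS] -> condition logits

def training_step(model, tokenizer, reports, images, image_model, optimizer):
    with torch.no_grad():
        # The frozen vision model supplies per-condition probabilities,
        # which act as soft labels for the text model.
        targets = torch.sigmoid(image_model(images))
    batch = tokenizer(reports, padding=True, truncation=True, return_tensors="pt")
    logits = model(batch["input_ids"], batch["attention_mask"])
    loss = nn.functional.binary_cross_entropy_with_logits(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```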
Related papers
- Mining Gaze for Contrastive Learning toward Computer-Assisted Diagnosis [61.089776864520594]
We propose radiologists' eye-tracking data as an alternative to text reports for supervising models on medical images.
By tracking the gaze of radiologists as they read and diagnose medical images, we can understand their visual attention and clinical reasoning.
We introduce Medical contrastive Gaze Image Pre-training (McGIP), a plug-and-play module for contrastive learning frameworks.
arXiv Detail & Related papers (2023-12-11T02:27:45Z)
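A hedged sketch of the gaze-guided idea summarized above, assuming McGIP treats images with similar radiologist gaze heatmaps as positive pairs for a contrastive objective; the similarity measure, threshold, and loss form are illustrative choices, not the paper's exact formulation.

```python
# Hedged sketch: images whose recorded gaze heatmaps are similar are
# treated as positive pairs in a contrastive loss. Threshold and
# similarity measure are assumptions. Assumes batch size > 1.
import torch
import torch.nn.functional as F

def gaze_positive_mask(gaze_maps, threshold=0.8):
    # gaze_maps: (B, H*W) flattened, L2-normalised gaze heatmaps
    sim = gaze_maps @ gaze_maps.t()           # cosine similarity
    return (sim > threshold).float()          # 1 where gaze patterns agree

def gaze_contrastive_loss(features, gaze_maps, temperature=0.1):
    z = F.normalize(features, dim=1)          # image embeddings (B, D)
    logits = z @ z.t() / temperature
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(eye, float("-inf"))      # exclude self-pairs
    pos = gaze_positive_mask(gaze_maps).masked_fill(eye, 0)
    log_prob = F.log_softmax(logits, dim=1).masked_fill(eye, 0.0)
    # Average log-likelihood over each sample's gaze-defined positives.
    pos_count = pos.sum(dim=1).clamp(min=1)
    return -(pos * log_prob).sum(dim=1).div(pos_count).mean()
```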
- Replace and Report: NLP Assisted Radiology Report Generation [31.309987297324845]
We propose a template-based approach to generate radiology reports from radiographs.
To our knowledge, this is the first attempt to generate chest X-ray radiology reports by first creating short sentences for abnormal findings and then substituting them into a normal report template.
arXiv Detail & Related papers (2023-06-19T10:04:42Z)
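The template-replacement step described above can be illustrated with a minimal sketch; the template sentences and finding keys are invented for illustration.

```python
# Hedged sketch of template replacement: start from a fixed normal-report
# template and swap in generated sentences only for the findings flagged
# as abnormal. All sentences and keys below are illustrative.
NORMAL_TEMPLATE = {
    "heart": "The cardiac silhouette is within normal limits.",
    "lungs": "The lungs are clear without focal consolidation.",
    "pleura": "No pleural effusion or pneumothorax.",
}

def assemble_report(abnormal_sentences):
    # abnormal_sentences: dict mapping a finding region to a generated
    # short sentence, e.g. {"lungs": "Patchy right lower lobe opacity."}
    report = dict(NORMAL_TEMPLATE)
    report.update(abnormal_sentences)
    return " ".join(report[key] for key in NORMAL_TEMPLATE)

print(assemble_report({"lungs": "Patchy opacity in the right lower lobe."}))
```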
- DeltaNet: Conditional Medical Report Generation for COVID-19 Diagnosis [54.93879264615525]
We propose DeltaNet to generate medical reports automatically.
DeltaNet employs three steps to generate a report.
We evaluate DeltaNet on a COVID-19 dataset, where it outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-12T07:41:03Z)
- Medical Image Captioning via Generative Pretrained Transformers [57.308920993032274]
We combine two language models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records.
The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, as well as on the general-purpose MS-COCO.
arXiv Detail & Related papers (2022-09-28T10:27:10Z)
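A minimal sketch of the two-stage pipeline summarized above, with both models as opaque callables; the prompt wording and interfaces are assumptions rather than the paper's implementation.

```python
# Hedged sketch: a visual captioner drafts findings from the image, and a
# generative language model rewrites the draft into a fluent report.
# Both callables are stand-ins for the models the summary names.
def generate_report(image, captioner, language_model):
    draft = captioner(image)  # e.g. "cardiomegaly. small left effusion."
    prompt = (
        "Rewrite these chest X-ray findings as a full radiology report:\n"
        f"{draft}\nReport:"
    )
    return language_model(prompt)
```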
- Radiomics-Guided Global-Local Transformer for Weakly Supervised Pathology Localization in Chest X-Rays [65.88435151891369]
Radiomics-Guided Transformer (RGT) fuses global image information with local knowledge-guided radiomics information.
RGT consists of an image Transformer branch, a radiomics Transformer branch, and fusion layers that aggregate image and radiomic information.
arXiv Detail & Related papers (2022-07-10T06:32:56Z)
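A hedged sketch of the two-branch design summarized above, assuming cross-attention as the fusion mechanism; dimensions, depths, and pooling are illustrative choices, not the paper's architecture.

```python
# Hedged sketch: one Transformer encodes image patch tokens, another
# encodes radiomics feature tokens, and a cross-attention layer lets the
# image tokens attend to the radiomics tokens before classification.
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.image_branch = nn.TransformerEncoder(layer, num_layers=2)
        self.radiomics_branch = nn.TransformerEncoder(layer, num_layers=2)
        self.fusion = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 1)  # pathology present / absent

    def forward(self, image_tokens, radiomics_tokens):
        img = self.image_branch(image_tokens)          # (B, N, dim)
        rad = self.radiomics_branch(radiomics_tokens)  # (B, M, dim)
        fused, _ = self.fusion(query=img, key=rad, value=rad)
        return self.classifier(fused.mean(dim=1))      # pooled logit
```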
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
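The augmentation loop described above might look roughly like the following, with the conditional generator treated as a black box; tensor shapes and the conditioning interface are assumptions.

```python
# Hedged sketch: a conditional generator synthesises an X-ray carrying a
# target disease label while conditioned on a real patient image, and the
# synthetic images join the training batch. Interfaces are assumptions.
import torch

def augment_batch(images, labels, generator, target_label):
    # target_label: (1, num_classes) label vector for the target domain.
    # Conditioning on the real image is meant to preserve patient identity.
    with torch.no_grad():
        fake = generator(images, target_label)
    aug_images = torch.cat([images, fake], dim=0)
    aug_labels = torch.cat([labels, target_label.expand(len(fake), -1)], dim=0)
    return aug_images, aug_labels
```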
- Automated Knee X-ray Report Generation [12.732469371097347]
We propose to take advantage of past radiological exams and formulate a framework capable of learning the correspondence between the images and reports.
We demonstrate how aggregating the image features of individual exams and using them as conditional inputs when training a language generation model results in auto-generated exam reports.
arXiv Detail & Related papers (2021-05-22T11:59:42Z)
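A minimal sketch of the aggregation-plus-conditioning idea summarized above, assuming mean pooling over an exam's views and a GRU decoder; all component choices are illustrative.

```python
# Hedged sketch: CNN features from each image in an exam are pooled into a
# single exam-level vector, which initialises a text decoder that
# generates the report. Components and dimensions are assumptions.
import torch
import torch.nn as nn

class ExamConditionedDecoder(nn.Module):
    def __init__(self, cnn, vocab_size, feat_dim=512, hidden=512):
        super().__init__()
        self.cnn = cnn                              # image -> (feat_dim,)
        self.init_h = nn.Linear(feat_dim, hidden)   # exam vector -> h0
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, exam_images, report_tokens):
        # exam_images: (B, n_views, C, H, W); mean-pool features over views.
        b, v = exam_images.shape[:2]
        feats = self.cnn(exam_images.flatten(0, 1)).view(b, v, -1).mean(dim=1)
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)   # (1, B, hidden)
        emb = self.embed(report_tokens)                    # (B, T, hidden)
        out, _ = self.rnn(emb, h0)
        return self.out(out)                               # next-token logits
```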
- Effect of Radiology Report Labeler Quality on Deep Learning Models for Chest X-Ray Interpretation [6.360030720258042]
This study investigates the impact of improvements in radiology report labeling on the performance of chest X-ray classification models.
We compare the CheXpert, CheXbert, and VisualCheXbert labelers on the task of extracting accurate chest X-ray image labels from radiology reports.
We show that an image classification model trained on labels from the VisualCheXbert labeler outperforms image classification models trained on labels from the CheXpert and CheXbert labelers.
arXiv Detail & Related papers (2021-04-01T22:37:29Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
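One plausible reading of the probabilistic framework summarized above, sketched as a loss that distills an EHR-derived latent prior into the X-ray encoder's posterior via a KL term; the Gaussian parameterization and weighting are assumptions, not the paper's exact objective.

```python
# Hedged sketch: the image encoder predicts a latent Gaussian, the EHR
# text encoder predicts a prior over the same latent, and a KL term pulls
# the image posterior toward the text-informed prior during training.
import torch.distributions as dist
import torch.nn.functional as F

def vkd_loss(img_mu, img_logvar, txt_mu, txt_logvar, logits, labels, beta=0.1):
    q = dist.Normal(img_mu, (0.5 * img_logvar).exp())   # posterior from X-ray
    p = dist.Normal(txt_mu, (0.5 * txt_logvar).exp())   # prior from EHR text
    kl = dist.kl_divergence(q, p).sum(dim=1).mean()     # distillation term
    ce = F.cross_entropy(logits, labels)                # disease classification
    return ce + beta * kl
```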
- Paying Per-label Attention for Multi-label Extraction from Radiology Reports [1.9601378412924186]
We tackle the automated extraction of structured labels from head CT reports for imaging of suspected stroke patients.
We propose a set of 31 labels which correspond to radiographic findings and clinical impressions related to neurological abnormalities.
We are able to robustly extract many labels with a single model, classifying each according to the radiologist's reporting.
arXiv Detail & Related papers (2020-07-31T16:11:09Z)
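A hedged sketch of per-label attention as the title suggests: each label owns a learned query that attends over the report's token embeddings and feeds a label-specific classifier. The label count of 31 comes from the summary; the encoder, dimensions, and classifier form are illustrative.

```python
# Hedged sketch: each of the 31 labels owns a learned query vector that
# attends over token embeddings, producing a label-specific summary that
# feeds that label's own linear classifier.
import torch
import torch.nn as nn

class PerLabelAttention(nn.Module):
    def __init__(self, num_labels=31, dim=256):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_labels, dim))
        self.weights = nn.Parameter(torch.randn(num_labels, dim))
        self.bias = nn.Parameter(torch.zeros(num_labels))

    def forward(self, token_embeddings):
        # token_embeddings: (B, T, dim) from any sentence/report encoder.
        scores = token_embeddings @ self.queries.t()       # (B, T, L)
        attn = scores.softmax(dim=1)                       # over tokens
        context = attn.transpose(1, 2) @ token_embeddings  # (B, L, dim)
        # A separate linear classifier per label on its own context vector.
        return (context * self.weights).sum(-1) + self.bias  # (B, L) logits
```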
- Automated Radiological Report Generation For Chest X-Rays With Weakly-Supervised End-to-End Deep Learning [17.315387269810426]
We built a database containing more than 12,000 CXR scans and radiological reports.
We developed a model based on a deep convolutional neural network and a recurrent network with an attention mechanism.
The model automatically recognizes the findings in a given scan and generates the corresponding report.
arXiv Detail & Related papers (2020-06-18T08:12:54Z)
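A minimal sketch of the CNN-plus-attentional-RNN pattern the summary describes, showing a single decoding step; dimensions and the attention form are illustrative assumptions.

```python
# Hedged sketch: spatial CNN features are attended over at each decoding
# step, and the attended context is fed to a GRU cell that emits the next
# report token. Dimensions and attention form are assumptions.
import torch
import torch.nn as nn

class AttnReportDecoder(nn.Module):
    def __init__(self, vocab_size, feat_dim=512, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.attn = nn.Linear(hidden + feat_dim, 1)
        self.cell = nn.GRUCell(hidden + feat_dim, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def step(self, token, h, feats):
        # feats: (B, R, feat_dim) spatial CNN features; h: (B, hidden)
        e = self.embed(token)                                  # (B, hidden)
        scores = self.attn(torch.cat(
            [h.unsqueeze(1).expand(-1, feats.size(1), -1), feats], dim=-1))
        ctx = (scores.softmax(dim=1) * feats).sum(dim=1)       # (B, feat_dim)
        h = self.cell(torch.cat([e, ctx], dim=-1), h)
        return self.out(h), h                                  # next-token logits
```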
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.