CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation
- URL: http://arxiv.org/abs/2401.12208v1
- Date: Mon, 22 Jan 2024 18:51:07 GMT
- Title: CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation
- Authors: Zhihong Chen, Maya Varma, Jean-Benoit Delbrouck, Magdalini Paschali,
Louis Blankemeier, Dave Van Veen, Jeya Maria Jose Valanarasu, Alaa Youssef,
Joseph Paul Cohen, Eduardo Pontes Reis, Emily B. Tsai, Andrew Johnston,
Cameron Olsen, Tanishq Mathew Abraham, Sergios Gatidis, Akshay S. Chaudhari,
Curtis Langlotz
- Abstract summary: Chest X-rays (CXRs) are the most frequently performed imaging test in clinical practice.
Recent advances in the development of vision-language foundation models (FMs) give rise to the possibility of performing automated CXR interpretation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chest X-rays (CXRs) are the most frequently performed imaging test in
clinical practice. Recent advances in the development of vision-language
foundation models (FMs) give rise to the possibility of performing automated
CXR interpretation, which can assist physicians with clinical decision-making
and improve patient outcomes. However, developing FMs that can accurately
interpret CXRs is challenging due to the (1) limited availability of
large-scale vision-language datasets in the medical image domain, (2) lack of
vision and language encoders that can capture the complexities of medical data,
and (3) absence of evaluation frameworks for benchmarking the abilities of FMs
on CXR interpretation. In this work, we address these challenges by first
introducing \emph{CheXinstruct} - a large-scale instruction-tuning dataset
curated from 28 publicly-available datasets. We then present \emph{CheXagent} -
an instruction-tuned FM capable of analyzing and summarizing CXRs. To build
CheXagent, we design a clinical large language model (LLM) for parsing
radiology reports, a vision encoder for representing CXR images, and a network
to bridge the vision and language modalities. Finally, we introduce
\emph{CheXbench} - a novel benchmark designed to systematically evaluate FMs
across 8 clinically-relevant CXR interpretation tasks. Extensive quantitative
evaluations and qualitative reviews with five expert radiologists demonstrate
that CheXagent outperforms previously-developed general- and medical-domain FMs
on CheXbench tasks. Furthermore, in an effort to improve model transparency, we
perform a fairness evaluation across factors of sex, race and age to highlight
potential performance disparities. Our project is at
\url{https://stanford-aimi.github.io/chexagent.html}.
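The abstract describes three components: a clinical LLM, a vision encoder for CXR images, and a network bridging the two modalities. The following is a minimal, hypothetical numpy sketch of that bridging pattern (learned queries cross-attending into vision features, then projected into the LLM's embedding space); all dimensions and function names are illustrative assumptions, not the actual CheXagent implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the real CheXagent components differ.
VIS_DIM, LLM_DIM, N_PATCHES, N_QUERIES = 64, 128, 49, 8

def vision_encoder(image: np.ndarray) -> np.ndarray:
    """Stand-in for a CXR vision encoder: one embedding per image patch."""
    patches = image.reshape(N_PATCHES, -1)          # flatten into patches
    W = rng.standard_normal((patches.shape[1], VIS_DIM)) * 0.02
    return patches @ W                              # (N_PATCHES, VIS_DIM)

def bridge(vis_feats: np.ndarray) -> np.ndarray:
    """Learned queries cross-attend into vision features, then are
    projected into the LLM's token-embedding space."""
    queries = rng.standard_normal((N_QUERIES, VIS_DIM))
    scores = queries @ vis_feats.T / np.sqrt(VIS_DIM)
    scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = scores / scores.sum(axis=-1, keepdims=True)  # softmax rows
    pooled = attn @ vis_feats                       # (N_QUERIES, VIS_DIM)
    W_proj = rng.standard_normal((VIS_DIM, LLM_DIM)) * 0.02
    return pooled @ W_proj                          # (N_QUERIES, LLM_DIM)

image = rng.standard_normal((56, 56))               # toy stand-in for a CXR
visual_tokens = bridge(vision_encoder(image))
print(visual_tokens.shape)                          # (8, 128)
```

In this pattern, the fixed number of query outputs becomes a short sequence of "visual tokens" prepended to the text prompt, so the LLM can condition its report generation on the image.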
Related papers
- Chest X-ray Foundation Model with Global and Local Representations Integration [13.736829173377355]
CheXFound is a vision foundation model that learns robust CXR representations and generalizes effectively across a wide range of downstream tasks.
We pretrain CheXFound on a curated CXR-1M dataset, comprising over one million unique CXRs from publicly available sources.
Our experimental results show that CheXFound outperforms state-of-the-art models in classifying 40 disease findings across different prevalence levels.
arXiv Detail & Related papers (2025-02-07T18:16:15Z)
- Can Modern LLMs Act as Agent Cores in Radiology Environments? [54.36730060680139]
Large language models (LLMs) offer enhanced accuracy and interpretability across various domains.
This paper investigates the prerequisite question for building concrete radiology agents.
First, we present RadABench-Data, a comprehensive synthetic evaluation dataset for LLM-based agents.
Second, we propose RadABench-EvalPlat, a novel evaluation platform for agents featuring a prompt-driven workflow.
arXiv Detail & Related papers (2024-12-12T18:20:16Z)
- ReXrank: A Public Leaderboard for AI-Powered Radiology Report Generation [16.687723916901728]
We present ReXrank, a leaderboard and challenge for assessing AI-powered radiology report generation.
Our framework incorporates ReXGradient, the largest test dataset consisting of 10,000 studies.
By providing this standardized evaluation framework, ReXrank enables meaningful comparisons of model performance.
arXiv Detail & Related papers (2024-11-22T18:40:02Z)
- ChatRadio-Valuer: A Chat Large Language Model for Generalizable Radiology Report Generation Based on Multi-institution and Multi-system Data [115.0747462486285]
ChatRadio-Valuer is a tailored model for automatic radiology report generation that learns generalizable representations.
The clinical dataset utilized in this study encompasses a total of 332,673 observations.
ChatRadio-Valuer consistently outperforms state-of-the-art models, including ChatGPT (GPT-3.5-Turbo) and GPT-4.
arXiv Detail & Related papers (2023-10-08T17:23:17Z)
- Longitudinal Data and a Semantic Similarity Reward for Chest X-Ray Report Generation [7.586632627817609]
Radiologists face high burnout rates, partly due to the increasing volume of Chest X-rays (CXRs) requiring interpretation and reporting.
Our proposed CXR report generator integrates elements of the workflow and introduces a novel reward for reinforcement learning.
Results from our study demonstrate that the proposed model generates reports that are more aligned with radiologists' reports than state-of-the-art models.
arXiv Detail & Related papers (2023-07-19T05:41:14Z)
- Revisiting Computer-Aided Tuberculosis Diagnosis [56.80999479735375]
Tuberculosis (TB) is a major global health threat, causing millions of deaths annually.
Computer-aided tuberculosis diagnosis (CTD) using deep learning has shown promise, but progress is hindered by limited training data.
We establish a large-scale dataset, namely the Tuberculosis X-ray (TBX11K) dataset, which contains 11,200 chest X-ray (CXR) images with corresponding bounding box annotations for TB areas.
This dataset enables the training of sophisticated detectors for high-quality CTD.
arXiv Detail & Related papers (2023-07-06T08:27:48Z)
- Act Like a Radiologist: Radiology Report Generation across Anatomical Regions [50.13206214694885]
X-RGen is a radiologist-minded report generation framework across six anatomical regions.
In X-RGen, we seek to mimic the behaviour of human radiologists, breaking their workflow down into four principal phases.
We enhance the recognition capacity of the image encoder by analysing images and reports across various regions.
arXiv Detail & Related papers (2023-05-26T07:12:35Z)
- COVID-Net CXR-2: An Enhanced Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest X-ray Images [58.35627258364233]
Use of chest X-ray (CXR) imaging as a complementary screening strategy to RT-PCR testing continues to grow.
We introduce COVID-Net CXR-2, an enhanced deep convolutional neural network design for COVID-19 detection from CXR images.
The benchmark dataset comprises 19,203 CXR images from a multinational cohort of 16,656 patients across at least 51 countries.
arXiv Detail & Related papers (2021-05-14T04:29:21Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Automated Radiological Report Generation For Chest X-Rays With Weakly-Supervised End-to-End Deep Learning [17.315387269810426]
We built a database containing more than 12,000 CXR scans and radiological reports.
We developed a model based on deep convolutional neural network and recurrent network with attention mechanism.
The model provides automated recognition of given scans and generation of reports.
arXiv Detail & Related papers (2020-06-18T08:12:54Z)
- Interpreting Chest X-rays via CNNs that Exploit Hierarchical Disease Dependencies and Uncertainty Labels [0.33598755777055367]
We present a framework based on deep convolutional neural networks (CNNs) for diagnosing the presence of 14 common thoracic diseases and observations.
The proposed method was also evaluated on an independent test set of the CheXpert competition, containing 500 CXR studies annotated by a panel of 5 experienced radiologists.
arXiv Detail & Related papers (2020-05-25T11:07:53Z)