A Multisite, Report-Based, Centralized Infrastructure for Feedback and
Monitoring of Radiology AI/ML Development and Clinical Deployment
- URL: http://arxiv.org/abs/2008.13781v1
- Date: Mon, 31 Aug 2020 17:59:04 GMT
- Authors: Menashe Benjamin, Guy Engelhard, Alex Aisen, Yinon Aradi, Elad
Benjamin
- Abstract summary: An interactive radiology reporting approach integrates image viewing, dictation, natural language processing (NLP) and creation of hyperlinks between image findings and the report.
These images and labels can be captured and centralized in a cloud-based system.
The method addresses proposed regulatory requirements for post-marketing surveillance and external data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An infrastructure for multisite, geographically-distributed creation and
collection of diverse, high-quality, curated and labeled radiology image data
is crucial for the successful automated development, deployment, monitoring and
continuous improvement of Artificial Intelligence (AI)/Machine Learning (ML)
solutions in the real world. An interactive radiology reporting approach that
integrates image viewing, dictation, natural language processing (NLP), and
the creation of hyperlinks between image findings and the report provides
localized labels during routine interpretation. These images and labels can be
captured and centralized in a cloud-based system. This method provides a
practical and efficient mechanism with which to monitor algorithm performance.
It also supplies feedback for iterative development and quality improvement of
new and existing algorithmic models. Both feedback and monitoring are achieved
without burdening the radiologist. The method addresses proposed regulatory
requirements for post-marketing surveillance and external data. Comprehensive
multi-site data collection assists in reducing bias. Resource requirements are
greatly reduced compared to dedicated retrospective expert labeling.
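The mechanism described above hinges on capturing, during routine dictation, a link between a phrase in the report and a localized image finding, then shipping that pair to a central store. As a minimal sketch of what such a captured label might look like, the record below uses a hypothetical schema (field names, identifiers, and the JSON payload format are assumptions for illustration, not the authors' actual data model):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FindingLabel:
    """One localized label: a hyperlink between a report phrase
    and a region of the image the radiologist was viewing."""
    study_uid: str      # imaging study identifier (e.g. a DICOM StudyInstanceUID)
    series_uid: str     # series containing the key image
    image_index: int    # slice/frame index at dictation time
    bbox: tuple         # (x, y, width, height) of the finding, in pixels
    report_span: tuple  # (start, end) character offsets into the report text
    label_text: str     # the dictated phrase tied to the region

def to_upload_payload(label: FindingLabel) -> str:
    """Serialize a label for transmission to a centralized cloud store."""
    return json.dumps(asdict(label))

label = FindingLabel(
    study_uid="1.2.840.113619.2.55.0001",
    series_uid="1.2.840.113619.2.55.0002",
    image_index=42,
    bbox=(120, 200, 35, 35),
    report_span=(310, 345),
    label_text="8 mm nodule, right upper lobe",
)
payload = to_upload_payload(label)
print(payload)
```

Because each record pairs a pixel-level region with the radiologist's own wording, a downstream monitoring service could compare deployed-model detections against these labels without any dedicated retrospective annotation effort.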
Related papers
- Coupling AI and Citizen Science in Creation of Enhanced Training Dataset for Medical Image Segmentation [3.7274206780843477]
We introduce a robust and versatile framework that combines AI and crowdsourcing to improve the quality and quantity of medical image datasets.
Our approach utilises a user-friendly online platform that enables a diverse group of crowd annotators to label medical images efficiently.
We employ pix2pixGAN, a generative AI model, to expand the training dataset with synthetic images that capture realistic morphological features.
arXiv Detail & Related papers (2024-09-04T21:22:54Z)
- Diffuse-UDA: Addressing Unsupervised Domain Adaptation in Medical Image Segmentation with Appearance and Structure Aligned Diffusion Models [31.006056670998852]
The scarcity and complexity of voxel-level annotations in 3D medical imaging present significant challenges.
This disparity affects the fairness of artificial intelligence algorithms in healthcare.
We introduce Diffuse-UDA, a novel method leveraging diffusion models to tackle Unsupervised Domain Adaptation (UDA) in medical image segmentation.
arXiv Detail & Related papers (2024-08-12T08:21:04Z)
- Ultrasound Report Generation with Cross-Modality Feature Alignment via Unsupervised Guidance [37.37279393074854]
We propose a novel framework for automatic ultrasound report generation, leveraging a combination of unsupervised and supervised learning methods.
Our framework incorporates unsupervised learning methods to extract potential knowledge from ultrasound text reports.
We design a global semantic comparison mechanism to enhance the performance of generating more comprehensive and accurate medical reports.
arXiv Detail & Related papers (2024-06-02T07:16:58Z)
- Multi-modality Regional Alignment Network for Covid X-Ray Survival Prediction and Report Generation [36.343753593390254]
This study proposes Multi-modality Regional Alignment Network (MRANet), an explainable model for radiology report generation and survival prediction.
MRANet visually grounds region-specific descriptions, providing robust anatomical regions with a completion strategy.
A cross-LLM alignment is employed to enhance the image-to-text transfer process, resulting in sentences rich with clinical detail and improved explainability for radiologists.
arXiv Detail & Related papers (2024-05-23T02:41:08Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using a conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We propose a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.