RAISE -- Radiology AI Safety, an End-to-end lifecycle approach
- URL: http://arxiv.org/abs/2311.14570v1
- Date: Fri, 24 Nov 2023 15:59:14 GMT
- Title: RAISE -- Radiology AI Safety, an End-to-end lifecycle approach
- Authors: M. Jorge Cardoso, Julia Moosbauer, Tessa S. Cook, B. Selnur Erdal,
Brad Genereaux, Vikash Gupta, Bennett A. Landman, Tiarna Lee, Parashkev
Nachev, Elanchezhian Somasundaram, Ronald M. Summers, Khaled Younis,
Sebastien Ourselin, Franz MJ Pfister
- Abstract summary: The integration of AI into radiology introduces opportunities for improved clinical care provision and efficiency.
The focus should be on ensuring models meet the highest standards of safety, effectiveness and efficacy.
The roadmap presented herein aims to expedite the achievement of deployable, reliable, and safe AI in radiology.
- Score: 5.829180249228172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of AI into radiology introduces opportunities for improved
clinical care provision and efficiency, but it demands a meticulous approach to
mitigate potential risks as with any other new technology. Beginning with
rigorous pre-deployment evaluation and validation, the focus should be on
ensuring models meet the highest standards of safety, effectiveness and
efficacy for their intended applications. Input and output guardrails
implemented during production usage act as an additional layer of protection,
identifying and addressing individual failures as they occur. Continuous
post-deployment monitoring allows for tracking population-level performance
(data drift), fairness, and value delivery over time. Scheduling reviews of
post-deployment model performance and educating radiologists about new
algorithm-driven findings are critical for AI to be effective in clinical
practice. Recognizing that no single AI solution can provide absolute assurance
even when limited to its intended use, the synergistic application of quality
assurance at multiple levels - regulatory, clinical, technical, and ethical -
is emphasized. Collaborative efforts between stakeholders spanning healthcare
systems, industry, academia, and government are imperative to address the
multifaceted challenges involved. Trust in AI is an earned privilege,
contingent on a broad set of goals, among them transparently demonstrating that
the AI adheres to the same rigorous safety, effectiveness and efficacy
standards as other established medical technologies. By doing so, developers
can instil confidence among providers and patients alike, enabling the
responsible scaling of AI and the realization of its potential benefits. The
roadmap presented herein aims to expedite the achievement of deployable,
reliable, and safe AI in radiology.
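The continuous post-deployment monitoring stage described above (tracking population-level performance and data drift) can be sketched with a simple drift statistic. The Population Stability Index (PSI) below is one common choice; the function, the toy data, and the 0.2 alert threshold are illustrative assumptions, not part of the RAISE roadmap itself.

```python
# Minimal sketch of post-deployment data-drift monitoring using the
# Population Stability Index (PSI). All names, data, and thresholds
# here are illustrative assumptions.
import math

def psi(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of a scalar
    model input or output (e.g. a predicted probability)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Laplace smoothing avoids log(0) for empty bins.
        return [(c + 1) / (len(xs) + bins) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.9]
live     = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 1.0, 1.0]
drift = psi(baseline, live)
# A common rule of thumb: PSI > 0.2 flags a material distribution shift.
print("PSI:", round(drift, 3), "-> review model" if drift > 0.2 else "-> ok")
```

In practice such a statistic would be computed per input feature and per output score on scheduled review intervals, feeding the population-level reviews the roadmap calls for.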
Related papers
- TrialBench: Multi-Modal Artificial Intelligence-Ready Clinical Trial Datasets (arXiv, 2024-06-30)
  This paper presents meticulously curated AI-ready datasets covering multi-modal data (e.g., drug molecule, disease code, text, categorical/numerical features) and 8 crucial prediction challenges in clinical trial design. We provide basic validation methods for each task to ensure the datasets' usability and reliability. We anticipate that the availability of such open-access datasets will catalyze the development of advanced AI approaches for clinical trial design.
- Fair by design: A sociotechnical approach to justifying the fairness of AI-enabled systems across the lifecycle (arXiv, 2024-06-13)
  Fairness is one of the most commonly identified ethical principles in existing AI guidelines. The development of fair AI-enabled systems is required by new and emerging AI regulation.
- Integrating ChatGPT into Secure Hospital Networks: A Case Study on Improving Radiology Report Analysis (arXiv, 2024-02-14)
  This study demonstrates the first in-hospital adaptation of a cloud-based AI, similar to ChatGPT, into a secure model for analyzing radiology reports. By employing a unique sentence-level knowledge distillation method through contrastive learning, we achieve over 95% accuracy in detecting anomalies.
- Functional requirements to mitigate the Risk of Harm to Patients from Artificial Intelligence in Healthcare (arXiv, 2023-09-19)
  This study proposes 14 functional requirements that AI systems may implement to reduce the risks associated with their medical purpose. Our intention here is to provide specific high-level specifications of technical solutions to ensure continuous good performance and use of AI systems to benefit patients in compliance with the future EU regulatory framework.
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare (arXiv, 2023-08-11)
  Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
- Pruning the Way to Reliable Policies: A Multi-Objective Deep Q-Learning Approach to Critical Care (arXiv, 2023-06-13)
  We introduce a deep Q-learning approach able to obtain more reliable critical care policies. We achieve this by first pruning the action set based on all available rewards, and second training a final model based on the sparse main reward but with a restricted action set.
- U-PASS: an Uncertainty-guided deep learning Pipeline for Automated Sleep Staging (arXiv, 2023-06-07)
  We propose a machine learning pipeline called U-PASS tailored for clinical applications that incorporates uncertainty estimation at every stage of the process. We apply our uncertainty-guided deep learning pipeline to the challenging problem of sleep staging and demonstrate that it systematically improves performance at every stage.
- Robust and Efficient Medical Imaging with Self-Supervision (arXiv, 2022-05-19)
  We present REMEDIS, a unified representation learning strategy to improve robustness and data-efficiency of medical imaging AI. We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data.
- FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging (arXiv, 2021-09-20)
  The FUTURE-AI framework comprises guiding principles for increased trust, safety, and adoption of AI in healthcare. We transform the general FUTURE-AI healthcare principles into a concise and specific AI implementation guide tailored to the needs of the medical imaging community.
- Towards a framework for evaluating the safety, acceptability and efficacy of AI systems for health: an initial synthesis (arXiv, 2021-04-14)
  We aim to set out a minimally viable framework for evaluating the safety, acceptability and efficacy of AI systems for healthcare. We do this by conducting a systematic search across Scopus, PubMed and Google Scholar. The result is a framework to guide AI system developers, policymakers, and regulators through a sufficient evaluation of an AI system designed for use in healthcare.
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration (arXiv, 2021-02-08)
  Outcome prediction from clinical text can prevent doctors from overlooking possible risks. Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets. We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
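The two-step recipe summarized in the "Pruning the Way to Reliable Policies" entry (prune the action set using auxiliary rewards, then train on the sparse main reward over the surviving actions) can be sketched as follows. Tabular Q-learning stands in for the paper's deep Q-learning, and the toy MDP, safety reward, and threshold are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of action-set pruning followed by Q-learning on a
# restricted action set. Tabular Q-learning stands in for deep
# Q-learning; the toy MDP and thresholds are illustrative assumptions.
import random

random.seed(0)
ACTIONS = [0, 1, 2, 3]

def aux_safety_reward(state, action):
    # Stand-in for "all available rewards": action 3 is flagged unsafe.
    return -1.0 if action == 3 else 0.0

# Step 1: prune actions whose auxiliary reward falls below a threshold.
def prune(states, threshold=-0.5):
    return {s: [a for a in ACTIONS if aux_safety_reward(s, a) > threshold]
            for s in states}

# Step 2: Q-learning on the sparse main reward, restricted to allowed actions.
def train(allowed, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    q = {(s, a): 0.0 for s in allowed for a in allowed[s]}
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            acts = allowed[s]
            a = (random.choice(acts) if random.random() < eps
                 else max(acts, key=lambda a: q[(s, a)]))
            s2 = (s + a) % 3             # toy deterministic transition
            r = 1.0 if s2 == 2 else 0.0  # sparse main reward
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in allowed[s2])
                                  - q[(s, a)])
            s = s2
    return q

allowed = prune(states=[0, 1, 2])
q = train(allowed)
policy = {s: max(allowed[s], key=lambda a: q[(s, a)]) for s in allowed}
```

The pruning step guarantees the learned policy can never select the flagged action, which is the reliability property the entry describes.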
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.