Training-Aware Risk Control for Intensity Modulated Radiation Therapies Quality Assurance with Conformal Prediction
- URL: http://arxiv.org/abs/2501.08963v1
- Date: Wed, 15 Jan 2025 17:19:51 GMT
- Title: Training-Aware Risk Control for Intensity Modulated Radiation Therapies Quality Assurance with Conformal Prediction
- Authors: Kevin He, David Adam, Sarah Han-Oh, Anqi Liu
- Abstract summary: Measurement quality assurance practices play a key role in the safe use of Intensity Modulated Radiation Therapies (IMRT) for cancer treatment. These practices have reduced measurement-based IMRT QA failure rates below 1%. We propose a new training-aware conformal risk control method that combines the benefits of conformal risk control and conformal training.
- Score: 7.227232362460348
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Measurement quality assurance (QA) practices play a key role in the safe use of Intensity Modulated Radiation Therapies (IMRT) for cancer treatment. These practices have reduced measurement-based IMRT QA failure rates below 1%. However, they are time- and labor-intensive, which can lead to delays in patient care. In this study, we examine how conformal prediction methodologies can be used to robustly triage plans. We propose a new training-aware conformal risk control method that combines the benefits of conformal risk control and conformal training. We incorporate decision-making thresholds based on the gamma passing rate, along with the risk functions used in clinical evaluation, into the design of the risk control framework. Our method achieves high sensitivity and specificity and significantly reduces the number of plans requiring measurement without producing excessively wide confidence intervals. Our results demonstrate the validity and applicability of conformal prediction methods for improving efficiency and reducing the workload of the IMRT QA process.
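To make the triage mechanism concrete, below is a minimal sketch of split conformal risk control applied to QA triage. It assumes a pre-trained regressor that predicts each plan's gamma passing rate; the 95% clinical action cutoff, the risk level alpha, and all function and variable names are illustrative assumptions, not the paper's implementation (whose training-aware variant additionally folds the thresholds and clinical risk functions into conformal training).

```python
import numpy as np

def calibrate_threshold(pred_gpr, true_gpr, clinical_cutoff=0.95, alpha=0.05):
    """Return the smallest triage threshold lam such that the conformal
    risk control (CRC) bound on the false-triage risk -- auto-passing a
    plan whose measured gamma passing rate falls below the clinical
    cutoff -- is at most alpha."""
    n = len(pred_gpr)
    truly_fails = true_gpr < clinical_cutoff
    # The loss 1{auto-passed and truly fails} is non-increasing in lam,
    # so scanning lam upward finds the most permissive valid threshold.
    for lam in np.linspace(0.0, 1.0, 1001):
        auto_passed = pred_gpr >= lam
        emp_risk = np.mean(auto_passed & truly_fails)
        # Finite-sample CRC condition with loss bounded by B = 1:
        if (n * emp_risk + 1.0) / (n + 1.0) <= alpha:
            return lam
    return 1.0  # no valid threshold: measure every plan

def triage(pred_gpr_new, lam):
    """True -> skip measurement; False -> send the plan for measurement QA."""
    return pred_gpr_new >= lam

# Usage on synthetic calibration data (illustrative only):
rng = np.random.default_rng(0)
true_gpr = np.clip(rng.normal(0.97, 0.02, size=500), 0.0, 1.0)
pred_gpr = np.clip(true_gpr + rng.normal(0.0, 0.01, size=500), 0.0, 1.0)
lam_hat = calibrate_threshold(pred_gpr, true_gpr)
print(f"triage threshold: {lam_hat:.3f}")
```

Because the calibrated threshold directly bounds the expected rate of auto-passing a failing plan, every plan scoring below it is routed to conventional measurement, which is how sensitivity is preserved while the measurement workload shrinks.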
Related papers
- Correctness Coverage Evaluation for Medical Multiple-Choice Question Answering Based on the Enhanced Conformal Prediction Framework [2.9599960287815144]
Large language models (LLMs) are increasingly adopted in medical question-answering (QA) scenarios.
LLMs can generate hallucinations and nonfactual information, undermining their trustworthiness in high-stakes medical tasks.
This paper proposes an enhanced Conformal Prediction framework for medical multiple-choice question-answering tasks.
arXiv Detail & Related papers (2025-03-07T15:22:10Z)
- Conformal Risk Control for Semantic Uncertainty Quantification in Computed Tomography [8.992691662614206]
We present a conformal risk control (CRC) procedure for organ-dependent uncertainty estimation.
We make this procedure semantically adaptive to each patient's anatomy and positioning of organs.
Our method, sem-CRC, provides tighter uncertainty intervals with valid coverage on real-world computed tomography (CT) data.
arXiv Detail & Related papers (2025-02-28T19:27:07Z)
- Distribution-Free Uncertainty Quantification in Mechanical Ventilation Treatment: A Conformal Deep Q-Learning Framework [2.5070297884580874]
This study introduces ConformalDQN, a distribution-free conformal deep Q-learning approach for optimizing mechanical ventilation in intensive care units.
We trained and evaluated our model using ICU patient records from the MIMIC-IV database.
arXiv Detail & Related papers (2024-12-17T06:55:20Z)
- Towards Integrating Epistemic Uncertainty Estimation into the Radiotherapy Workflow [40.07325268305058]
The precision of contouring target structures and organs-at-risk (OAR) in radiotherapy planning is crucial for ensuring treatment efficacy and patient safety.
Recent advancements in deep learning (DL) have significantly improved OAR contouring performance.
However, the reliability of these models, especially in the presence of out-of-distribution (OOD) scenarios, remains a concern in clinical settings.
arXiv Detail & Related papers (2024-09-27T10:55:58Z)
- Improving Deep Learning Model Calibration for Cardiac Applications using Deterministic Uncertainty Networks and Uncertainty-aware Training [2.0006125576503617]
We evaluate the impact on accuracy and calibration of two types of approaches that aim to improve deep learning (DL) classification model calibration.
Specifically, we test the performance of three deterministic uncertainty methods (DUMs) and two uncertainty-aware training approaches, as well as their combinations.
Our results indicate that both DUMs and uncertainty-aware training can improve both accuracy and calibration in both of our applications.
arXiv Detail & Related papers (2024-05-10T14:07:58Z)
- Provable Risk-Sensitive Distributional Reinforcement Learning with General Function Approximation [54.61816424792866]
We introduce a general framework on Risk-Sensitive Distributional Reinforcement Learning (RS-DisRL), with static Lipschitz Risk Measures (LRM) and general function approximation.
We design two innovative meta-algorithms: RS-DisRL-M, a model-based strategy for model-based function approximation, and RS-DisRL-V, a model-free approach for general value function approximation.
arXiv Detail & Related papers (2024-02-28T08:43:18Z)
- Uncertainty Aware Training to Improve Deep Learning Model Calibration for Classification of Cardiac MR Images [3.9402047771122812]
Quantifying uncertainty of predictions has been identified as one way to develop more trustworthy AI models.
We evaluate three novel uncertainty-aware training strategies comparing against two state-of-the-art approaches.
arXiv Detail & Related papers (2023-08-29T09:19:49Z)
- Is Risk-Sensitive Reinforcement Learning Properly Resolved? [32.42976780682353]
We propose a novel algorithm, namely Trajectory Q-Learning (TQL), for risk-sensitive reinforcement learning (RSRL) problems with provable convergence to the optimal policy.
Based on our new learning architecture, we are free to introduce a general and practical implementation for different risk measures to learn disparate risk-sensitive policies.
arXiv Detail & Related papers (2023-07-02T11:47:21Z)
- Pruning the Way to Reliable Policies: A Multi-Objective Deep Q-Learning Approach to Critical Care [46.2482873419289]
We introduce a deep Q-learning approach to obtain more reliable critical care policies.
We evaluate our method in off-policy and offline settings using simulated environments and real health records from intensive care units.
arXiv Detail & Related papers (2023-06-13T18:02:57Z)
- U-PASS: an Uncertainty-guided deep learning Pipeline for Automated Sleep Staging [61.6346401960268]
We propose a machine learning pipeline called U-PASS tailored for clinical applications that incorporates uncertainty estimation at every stage of the process.
We apply our uncertainty-guided deep learning pipeline to the challenging problem of sleep staging and demonstrate that it systematically improves performance at every stage.
arXiv Detail & Related papers (2023-06-07T08:27:36Z)
- Safe Deployment for Counterfactual Learning to Rank with Exposure-Based Risk Minimization [63.93275508300137]
We introduce a novel risk-aware Counterfactual Learning To Rank method with theoretical guarantees for safe deployment.
Our experimental results demonstrate the efficacy of our proposed method, which is effective at avoiding initial periods of bad performance when little data is available.
arXiv Detail & Related papers (2023-04-26T15:54:23Z)
- Assessment of Treatment Effect Estimators for Heavy-Tailed Data [70.72363097550483]
A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized control trials (RCTs) is the lack of ground truth (or validation set) to test their performance.
We provide a novel cross-validation-like methodology to address this challenge.
We evaluate our methodology across 709 RCTs implemented in the Amazon supply chain.
arXiv Detail & Related papers (2021-12-14T17:53:01Z)
- Incorporating Expert Guidance in Epidemic Forecasting [79.91855362871496]
We propose a new approach leveraging the Seldonian optimization framework from AI safety.
We study two types of guidance: smoothness and regional consistency of errors.
We show that by its successful incorporation, we are able not only to bound the probability of undesirable behavior, but also to reduce RMSE on test data by up to 17%.
arXiv Detail & Related papers (2020-12-24T06:21:53Z)
- Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions [48.91284724066349]
Off-policy evaluation in reinforcement learning offers the chance of using observational data to improve future outcomes in domains such as healthcare and education.
Traditional measures such as confidence intervals may be insufficient due to noise, limited data and confounding.
We develop a method that could serve as a hybrid human-AI system, to enable human experts to analyze the validity of policy evaluation estimates.
arXiv Detail & Related papers (2020-02-10T00:26:43Z)