Oralytics Reinforcement Learning Algorithm
- URL: http://arxiv.org/abs/2406.13127v2
- Date: Thu, 12 Sep 2024 19:16:10 GMT
- Title: Oralytics Reinforcement Learning Algorithm
- Authors: Anna L. Trella, Kelly W. Zhang, Stephanie M. Carpenter, David Elashoff, Zara M. Greer, Inbal Nahum-Shani, Dennis Ruenger, Vivek Shetty, Susan A. Murphy
- Abstract summary: Dental disease is one of the most common chronic diseases in the United States.
We have developed Oralytics, an online, reinforcement learning (RL) algorithm that optimizes the delivery of personalized intervention prompts to improve oral self-care behaviors (OSCB).
The finalized RL algorithm was deployed in the Oralytics clinical trial, conducted from fall 2023 to summer 2024.
- Score: 5.54328512723076
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Dental disease is still one of the most common chronic diseases in the United States. While dental disease is preventable through healthy oral self-care behaviors (OSCB), this basic behavior is not consistently practiced. We have developed Oralytics, an online, reinforcement learning (RL) algorithm that optimizes the delivery of personalized intervention prompts to improve OSCB. In this paper, we offer a full overview of algorithm design decisions made using prior data, domain expertise, and experiments in a simulation test bed. The finalized RL algorithm was deployed in the Oralytics clinical trial, conducted from fall 2023 to summer 2024.
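The abstract describes an online RL algorithm that decides whether to deliver an intervention prompt, learning from each user's responses. As a rough illustration of this class of algorithm, here is a minimal Beta-Bernoulli Thompson sampling sketch for the send/no-send prompt decision. This is an assumption-laden sketch, not the Oralytics implementation: all class, method, and variable names are hypothetical, and the real algorithm's state, reward, and model are described in the paper itself.

```python
# Illustrative sketch only: a Beta-Bernoulli Thompson sampling bandit for a
# binary "send prompt" decision. Not the actual Oralytics algorithm.
import random


class PromptPolicy:
    """Each decision time, choose arm 1 (send a prompt) or arm 0 (do not)."""

    def __init__(self):
        # Beta(1, 1) priors over the probability of a good brushing outcome
        # under each action.
        self.alpha = [1.0, 1.0]
        self.beta = [1.0, 1.0]

    def choose_action(self):
        # Sample a success probability for each arm from its posterior,
        # then act greedily on the sampled values (Thompson sampling).
        draws = [random.betavariate(self.alpha[a], self.beta[a]) for a in (0, 1)]
        return max((0, 1), key=lambda a: draws[a])

    def update(self, action, reward):
        # reward = 1 if the user brushed adequately after this decision, else 0.
        if reward:
            self.alpha[action] += 1.0
        else:
            self.beta[action] += 1.0
```

A deployed system would replace the Bernoulli model with a richer reward model over user context, but the posterior-sample-then-act loop above captures the online learning structure the abstract refers to.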
Related papers
- A Deployed Online Reinforcement Learning Algorithm In An Oral Health Clinical Trial [20.944037982124037]
Dental disease is a chronic condition associated with substantial financial burden, personal suffering, and increased risk of systemic diseases.
Despite widespread recommendations for twice-daily tooth brushing, adherence to recommended oral self-care behaviors remains sub-optimal due to factors such as forgetfulness and disengagement.
We developed Oralytics, an mHealth intervention system designed to complement clinician-delivered preventative care for marginalized individuals at risk for dental disease.
arXiv Detail & Related papers (2024-09-03T17:16:01Z) - Monitoring Fidelity of Online Reinforcement Learning Algorithms in Clinical Trials [20.944037982124037]
This paper proposes algorithm fidelity as a critical requirement for deploying online RL algorithms in clinical trials.
We present a framework for pre-deployment planning and real-time monitoring to help algorithm developers and clinical researchers ensure algorithm fidelity.
arXiv Detail & Related papers (2024-02-26T20:19:14Z) - Measurement Scheduling for ICU Patients with Offline Reinforcement Learning [16.07235754244993]
Studies show that 20-40% of lab tests ordered in the ICU are redundant and could be eliminated without compromising patient safety.
Prior work has leveraged offline reinforcement learning (Offline-RL) to find optimal policies for ordering lab tests based on patient information.
New ICU patient datasets have since been released, and various advancements have been made in Offline-RL methods.
arXiv Detail & Related papers (2024-02-12T00:22:47Z) - TREEMENT: Interpretable Patient-Trial Matching via Personalized Dynamic Tree-Based Memory Network [54.332862955411656]
Clinical trials are critical for drug development but often suffer from expensive and inefficient patient recruitment.
In recent years, machine learning models have been proposed for speeding up patient recruitment via automatically matching patients with clinical trials.
We introduce a dynamic tree-based memory network model named TREEMENT to provide accurate and interpretable patient trial matching.
arXiv Detail & Related papers (2023-07-19T12:35:09Z) - Did we personalize? Assessing personalization by an online reinforcement learning algorithm using resampling [9.745543921550748]
Reinforcement learning (RL) can be used to personalize sequences of treatments in digital health to support users in adopting healthier behaviors.
Online RL is a promising data-driven approach for this problem as it learns based on each user's historical responses.
We assess whether the RL algorithm should be included in an "optimized" intervention for real-world deployment.
arXiv Detail & Related papers (2023-04-11T17:20:37Z) - Automated Fidelity Assessment for Strategy Training in Inpatient Rehabilitation using Natural Language Processing [53.096237570992294]
Strategy training is a rehabilitation approach that teaches skills to reduce disability among those with cognitive impairments following a stroke.
Standardized fidelity assessment is used to measure adherence to treatment principles.
We developed a rule-based NLP algorithm, a long short-term memory (LSTM) model, and a Bidirectional Encoder Representations from Transformers (BERT) model for this task.
arXiv Detail & Related papers (2022-09-14T15:33:30Z) - Reward Design For An Online Reinforcement Learning Algorithm Supporting Oral Self-Care [24.283342018185028]
Dental disease is one of the most common chronic diseases despite being largely preventable.
We develop an online reinforcement learning (RL) algorithm for use in optimizing the delivery of mobile-based prompts to encourage oral hygiene behaviors.
The RL algorithm discussed in this paper will be deployed in Oralytics, an oral self-care app that provides behavioral strategies to boost patient engagement in oral hygiene practices.
arXiv Detail & Related papers (2022-08-15T18:47:09Z) - Federated Offline Reinforcement Learning [55.326673977320574]
We propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites.
We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees.
We give a theoretical guarantee for the proposed algorithm: the suboptimality of the learned policies is comparable to the rate achieved as if the data were not distributed.
arXiv Detail & Related papers (2022-06-11T18:03:26Z) - Resource Planning for Hospitals Under Special Consideration of the COVID-19 Pandemic: Optimization and Sensitivity Analysis [87.31348761201716]
Crises like the COVID-19 pandemic pose a serious challenge to health-care institutions.
BaBSim.Hospital is a tool for capacity planning based on discrete event simulation.
We aim to investigate and optimize these parameters to improve BaBSim.Hospital.
arXiv Detail & Related papers (2021-05-16T12:38:35Z) - DTR Bandit: Learning to Make Response-Adaptive Decisions With Low Regret [59.81290762273153]
Dynamic treatment regimes (DTRs) are personalized, adaptive, multi-stage treatment plans that adapt treatment decisions to an individual's initial features and to intermediate outcomes and features at each subsequent stage.
We propose a novel algorithm that, by carefully balancing exploration and exploitation, is guaranteed to achieve rate-optimal regret when the transition and reward models are linear.
arXiv Detail & Related papers (2020-05-06T13:03:42Z) - CAT: Customized Adversarial Training for Improved Robustness [142.3480998034692]
We propose a new algorithm, named Customized Adversarial Training (CAT), which adaptively customizes the perturbation level and the corresponding label for each training sample in adversarial training.
We show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods through extensive experiments.
arXiv Detail & Related papers (2020-02-17T06:13:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.