Offline Contextual Multi-armed Bandits for Mobile Health Interventions:
A Case Study on Emotion Regulation
- URL: http://arxiv.org/abs/2008.09472v1
- Date: Fri, 21 Aug 2020 13:41:24 GMT
- Title: Offline Contextual Multi-armed Bandits for Mobile Health Interventions:
A Case Study on Emotion Regulation
- Authors: Mawulolo K. Ameko, Miranda L. Beltzer, Lihua Cai, Mehdi Boukhechba,
Bethany A. Teachman, Laura E. Barnes
- Abstract summary: We present the first development of a treatment recommender system for emotion regulation using real-world historical mobile digital data.
Our experimentation shows that the proposed doubly robust offline learning algorithms performed significantly better than baseline approaches.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Delivering treatment recommendations via pervasive electronic devices such as
mobile phones has the potential to be a viable and scalable treatment medium
for long-term health behavior management. But active experimentation of
treatment options can be time-consuming, expensive and altogether unethical in
some cases. There is a growing interest in methodological approaches that allow
an experimenter to learn and evaluate the usefulness of a new treatment
strategy before deployment. We present the first development of a treatment
recommender system for emotion regulation using real-world historical mobile
digital data from n = 114 high socially anxious participants to test the
usefulness of new emotion regulation strategies. We explore a number of offline
contextual bandits estimators for learning and propose a general framework for
learning algorithms. Our experimentation shows that the proposed doubly robust
offline learning algorithms performed significantly better than baseline
approaches, suggesting that this type of recommender algorithm could improve
emotion regulation. Given that emotion regulation is impaired across many
mental illnesses and such a recommender algorithm could be scaled up easily,
this approach holds potential to increase access to treatment for many people.
We also share some insights that allow us to translate contextual bandit models
to this complex real-world data, including which contextual features appear to
be most important for predicting emotion regulation strategy effectiveness.
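The doubly robust estimators explored in the paper combine a model-based (direct method) value estimate with an importance-weighted correction on the logged action, so the estimate stays consistent if either the reward model or the logging propensities are accurate. A minimal sketch of doubly robust off-policy evaluation for a contextual bandit follows; all names and the simple looped implementation are illustrative assumptions, not the authors' code:

```python
import numpy as np

def doubly_robust_value(contexts, actions, rewards, logging_probs,
                        target_policy, reward_model):
    """Doubly robust off-policy value estimate for a contextual bandit.

    contexts:      (n, d) array of context features
    actions:       (n,) actions taken by the logging policy
    rewards:       (n,) rewards observed for those actions
    logging_probs: (n,) probability the logging policy assigned to each logged action
    target_policy: function context -> probability vector over actions
    reward_model:  function (context, action) -> predicted reward
    """
    n = len(rewards)
    values = np.empty(n)
    for i in range(n):
        pi = target_policy(contexts[i])
        # Direct-method term: model-based value under the target policy.
        dm = sum(pi[a] * reward_model(contexts[i], a) for a in range(len(pi)))
        # Importance-weighted correction using only the logged action.
        w = pi[actions[i]] / logging_probs[i]
        values[i] = dm + w * (rewards[i] - reward_model(contexts[i], actions[i]))
    return values.mean()
```

When the reward model is exact, the correction term vanishes and the estimate reduces to the direct method; when the model is biased, the importance-weighted residual corrects it in expectation.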
Related papers
- Tutorial on Using Machine Learning and Deep Learning Models for Mental Illness Detection
This tutorial provides guidance to address common challenges in applying machine learning and deep learning methods for mental health detection on social media.
Real-world examples and step-by-step instructions demonstrate how to apply these techniques effectively.
By sharing these approaches, this tutorial aims to help researchers build more reliable and widely applicable models for mental health research.
arXiv Detail & Related papers (2025-02-03T06:43:12Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over the potential suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Using Adaptive Bandit Experiments to Increase and Investigate Engagement in Mental Health
This paper presents a software system that allows text-messaging intervention components to be adapted using bandit and other algorithms.
We evaluate the system by deploying a text-message-based DMH intervention to 1100 users, recruited through a large mental health non-profit organization.
arXiv Detail & Related papers (2023-10-13T22:59:56Z)
- Taxonomy Adaptive Cross-Domain Adaptation in Medical Imaging via Optimization Trajectory Distillation
The success of automated medical image analysis depends on large-scale and expert-annotated training sets.
Unsupervised domain adaptation (UDA) has been raised as a promising approach to alleviate the burden of labeled data collection.
We propose optimization trajectory distillation, a unified approach to address the two technical challenges from a new perspective.
arXiv Detail & Related papers (2023-07-27T08:58:05Z)
- Contextual Bandits with Budgeted Information Reveal
Contextual bandit algorithms are commonly used in digital health to recommend personalized treatments.
To ensure the effectiveness of the treatments, patients are often requested to take actions that have no immediate benefit to them.
We introduce a novel optimization and learning algorithm to address this problem.
arXiv Detail & Related papers (2023-05-29T16:18:28Z)
- Automated Fidelity Assessment for Strategy Training in Inpatient Rehabilitation using Natural Language Processing
Strategy training is a rehabilitation approach that teaches skills to reduce disability among those with cognitive impairments following a stroke.
Standardized fidelity assessment is used to measure adherence to treatment principles.
We developed a rule-based NLP algorithm, a long-short term memory (LSTM) model, and a bidirectional encoder representation from transformers (BERT) model for this task.
arXiv Detail & Related papers (2022-09-14T15:33:30Z)
- Offline Policy Optimization with Eligible Actions
Offline policy optimization could have a large impact on many real-world decision-making problems.
Importance sampling and its variants are a commonly used type of estimator in offline policy evaluation.
We propose an algorithm to avoid this overfitting through a new per-state-neighborhood normalization constraint.
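The importance sampling estimators mentioned here weight each logged reward by the ratio of the target policy's action probability to the logging policy's. A minimal inverse propensity scoring (IPS) sketch, with an optional weight clip of the kind common variants use to trade bias for variance; the function name and interface are illustrative, not the paper's implementation:

```python
import numpy as np

def ips_value(rewards, logging_probs, target_probs, clip=None):
    """Inverse propensity scoring (IPS) estimate of a target policy's value.

    rewards:       rewards observed under the logging policy
    logging_probs: logging-policy probabilities of the logged actions
    target_probs:  target-policy probabilities of the same actions
    clip:          optional cap on importance weights to reduce variance
    """
    w = np.asarray(target_probs) / np.asarray(logging_probs)
    if clip is not None:
        # Clipping bounds the variance at the cost of some bias.
        w = np.minimum(w, clip)
    return float(np.mean(w * np.asarray(rewards)))
```

IPS is unbiased when the logging probabilities are correct and no clipping is applied, but its variance grows with the mismatch between the two policies, which motivates the per-state normalization constraints this paper proposes.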
arXiv Detail & Related papers (2022-07-01T19:18:15Z)
- MET: Multimodal Perception of Engagement for Telehealth
We present MET, a learning-based algorithm for perceiving a human's level of engagement from videos.
We release a new dataset, MEDICA, for mental health patient engagement detection.
arXiv Detail & Related papers (2020-11-17T15:18:38Z)
- Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z)
- Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Offline reinforcement learning algorithms hold tremendous promise for turning large datasets into powerful decision-making engines.
We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods.
arXiv Detail & Related papers (2020-05-04T17:00:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.