Field Study in Deploying Restless Multi-Armed Bandits: Assisting
Non-Profits in Improving Maternal and Child Health
- URL: http://arxiv.org/abs/2109.08075v1
- Date: Thu, 16 Sep 2021 16:04:48 GMT
- Title: Field Study in Deploying Restless Multi-Armed Bandits: Assisting
Non-Profits in Improving Maternal and Child Health
- Authors: Aditya Mate, Lovish Madaan, Aparna Taneja, Neha Madhiwalla, Shresth
Verma, Gargi Singh, Aparna Hegde, Pradeep Varakantham, Milind Tambe
- Abstract summary: Cell phones have enabled non-profits to deliver critical health information to their beneficiaries in a timely manner.
A key challenge in such information delivery programs is that a significant fraction of beneficiaries drop out of the program.
We developed a Restless Multi-Armed Bandits system to help non-profits place crucial service calls for live interaction with beneficiaries to prevent such engagement drops.
- Score: 28.43878945119807
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread availability of cell phones has enabled non-profits to deliver
critical health information to their beneficiaries in a timely manner. This
paper describes our work to assist non-profits that employ automated messaging
programs to deliver timely preventive care information to beneficiaries (new
and expecting mothers) during pregnancy and after delivery. Unfortunately, a
key challenge in such information delivery programs is that a significant
fraction of beneficiaries drop out of the program. Yet, non-profits often have
limited health-worker resources (time) to place crucial service calls for live
interaction with beneficiaries to prevent such engagement drops. To assist
non-profits in optimizing this limited resource, we developed a Restless
Multi-Armed Bandits (RMABs) system. One key technical contribution in this
system is a novel clustering method of offline historical data to infer unknown
RMAB parameters. Our second major contribution is evaluation of our RMAB system
in collaboration with an NGO, via a real-world service quality improvement
study. The study compared strategies for optimizing service calls to 23003
participants over a period of 7 weeks to reduce engagement drops. We show that
the RMAB group yields a statistically significant improvement over the other
comparison groups, reducing engagement drops by roughly 30%. To the best of our
knowledge, this is the first study demonstrating the utility of RMABs in real
world public health settings. We are transitioning our RMAB system to the NGO
for real-world use.
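The abstract does not include an implementation, but the two contributions it highlights (clustering offline histories to infer RMAB parameters, and planning service calls with an RMAB) can be illustrated with a minimal sketch. The sketch below assumes a two-state arm per beneficiary (0 = disengaged, 1 = engaging), KMeans clustering to pool transition estimates within clusters, a discount factor of 0.95, and Whittle-index-based selection under a weekly call budget; all names, features, and constants are illustrative assumptions rather than the authors' code.

```python
"""Minimal sketch (not the authors' code) of the pipeline described in the
abstract: (1) cluster offline engagement histories and pool transition counts
within each cluster to estimate a two-state RMAB for every beneficiary, and
(2) rank beneficiaries by a Whittle index to choose who receives a live
service call under a weekly budget. All names and constants are assumptions."""
import numpy as np
from sklearn.cluster import KMeans

GAMMA = 0.95  # assumed discount factor


def cluster_transition_estimates(histories, n_clusters=20, seed=0):
    """Estimate P[cluster, action, state, next_state] from offline histories.

    `histories` is a list of dicts with equal-length binary per-week
    sequences "engaged" (state) and "called" (action)."""
    feats = np.array([[np.mean(h["engaged"]), np.mean(h["called"])]
                      for h in histories])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(feats)
    counts = np.ones((n_clusters, 2, 2, 2))          # Laplace smoothing
    for h, c in zip(histories, labels):
        for s, a, s_next in zip(h["engaged"][:-1], h["called"][:-1],
                                h["engaged"][1:]):
            counts[c, a, s, s_next] += 1.0
    return counts / counts.sum(axis=-1, keepdims=True), labels


def whittle_index(P_arm, state, tol=1e-4):
    """Binary-search the passive subsidy at which 'do not call' becomes as
    good as 'call' in `state`; reward is 1 while a beneficiary is engaging."""
    lo, hi = -1.0 / (1 - GAMMA), 1.0 / (1 - GAMMA)
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        V = np.zeros(2)
        for _ in range(300):                          # value iteration
            q_passive = np.array([m + s + GAMMA * P_arm[0, s] @ V for s in (0, 1)])
            q_active = np.array([s + GAMMA * P_arm[1, s] @ V for s in (0, 1)])
            V = np.maximum(q_passive, q_active)
        if q_active[state] > q_passive[state]:
            lo = m                                    # subsidy too small: still worth calling
        else:
            hi = m
    return 0.5 * (lo + hi)


def plan_service_calls(P, labels, current_states, budget):
    """Call the `budget` beneficiaries whose (cluster, state) pair has the
    largest Whittle index, i.e. the largest expected benefit from a call now."""
    W = np.array([[whittle_index(P[c], s) for s in (0, 1)]
                  for c in range(P.shape[0])])
    return np.argsort(-W[labels, current_states])[:budget]
```

In this sketch the index is computed once per (cluster, state) pair, reflecting the role of clustering: pooling sparse per-beneficiary histories yields usable transition estimates for the planner.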
Related papers
- Bayesian Collaborative Bandits with Thompson Sampling for Improved Outreach in Maternal Health Program [36.10003434625494]
Mobile health (mHealth) programs face a critical challenge in optimizing the timing of automated health information calls to beneficiaries.
We propose a principled approach using Thompson Sampling for this collaborative bandit problem.
We demonstrate significant improvements over state-of-the-art baselines on a real-world dataset from the world's largest maternal mHealth program.
arXiv Detail & Related papers (2024-10-28T18:08:18Z)
- Improving Health Information Access in the World's Largest Maternal Mobile Health Program via Bandit Algorithms [24.4450506603579]
This paper focuses on Kilkari, the world's largest mHealth program for maternal and child care.
We present a system called CHAHAK that aims to reduce automated dropouts as well as boost engagement with the program.
arXiv Detail & Related papers (2024-05-14T07:21:49Z)
- Efficient Public Health Intervention Planning Using Decomposition-Based Decision-Focused Learning [33.14258196945301]
We show how to exploit the structure of Restless Multi-Armed Bandits (RMABs) to speed up intervention planning.
We use real-world data from an Indian NGO, ARMMAN, to show that our approach is up to two orders of magnitude faster than the state-of-the-art approach.
arXiv Detail & Related papers (2024-03-08T21:31:00Z)
- Retrieval Augmented Thought Process for Private Data Handling in Healthcare [53.89406286212502]
We introduce the Retrieval-Augmented Thought Process (RATP), which formulates the thought generation of Large Language Models (LLMs) as a multi-step decision process.
On a private dataset of electronic medical records, RATP achieves 35% additional accuracy compared to in-context retrieval-augmented generation for the question-answering task.
arXiv Detail & Related papers (2024-02-12T17:17:50Z)
- Analyzing and Predicting Low-Listenership Trends in a Large-Scale Mobile Health Program: A Preliminary Investigation [25.831299045335125]
Kilkari is one of the world's largest mobile health programs, delivering time-sensitive audio messages to pregnant women and new mothers.
We provide an initial analysis of the trajectories of beneficiaries' interaction with the mHealth program.
We examine elements of the program that can be potentially enhanced to boost its success.
arXiv Detail & Related papers (2023-11-13T08:11:09Z)
- Deep Reinforcement Learning for Efficient and Fair Allocation of Health Care Resources [47.57108369791273]
Scarcity of health care resources can make rationing unavoidable.
There is no universally accepted standard for health care resource allocation protocols.
We propose a transformer-based deep Q-network to integrate the disease progression of individual patients and the interaction effects among patients.
arXiv Detail & Related papers (2023-09-15T17:28:06Z)
- Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders [89.6319385008397]
We conducted a set of seven design workshops with 35 stakeholders who have been impacted by the child welfare system.
We found that participants worried current PRMs perpetuate or exacerbate existing problems in child welfare.
Participants suggested new ways to use data and data-driven tools to better support impacted communities.
arXiv Detail & Related papers (2022-05-18T13:49:55Z)
- Contingency-Aware Influence Maximization: A Reinforcement Learning Approach [52.109536198330126]
The influence maximization (IM) problem aims at finding a subset of seed nodes in a social network that maximizes the spread of influence.
In this study, we focus on a sub-class of IM problems, called contingency-aware IM, in which it is uncertain whether invited nodes are willing to act as seeds.
Despite the initial success, a major practical obstacle in promoting the solutions to more communities is the tremendous runtime of the greedy algorithms.
arXiv Detail & Related papers (2021-06-13T16:42:22Z)
- Learn to Intervene: An Adaptive Learning Policy for Restless Bandits in Application to Preventive Healthcare [39.41918282603752]
We propose a Whittle index based Q-Learning mechanism for restless multi-armed bandit (RMAB) problems (a generic sketch of this style of learner follows this list).
Our method improves over existing learning-based methods for RMABs on multiple benchmarks from the literature and on the maternal healthcare dataset.
arXiv Detail & Related papers (2021-05-17T15:44:55Z)
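The "Learn to Intervene" entry above names a Whittle index based Q-Learning mechanism. The sketch below illustrates the general idea behind that family of methods, not the paper's exact algorithm: each arm keeps tabular Q-values, and at every round the intervention budget goes to the arms with the largest estimated advantage of acting, Q(s, 1) - Q(s, 0). The class name, exploration scheme, and learning-rate schedule are illustrative assumptions.

```python
import numpy as np


class WhittleStyleQLearner:
    """Tabular Q-learning per arm; arms are ranked by Q(s, act) - Q(s, pass)."""

    def __init__(self, n_arms, n_states, eps=0.1, gamma=0.95, seed=0):
        self.Q = np.zeros((n_arms, n_states, 2))      # per-arm action values
        self.counts = np.ones((n_arms, n_states, 2))  # visit counts for step sizes
        self.eps, self.gamma = eps, gamma
        self.rng = np.random.default_rng(seed)

    def select_arms(self, states, budget):
        """Pick `budget` arms: usually those with the largest estimated
        advantage of intervening, occasionally a random set for exploration."""
        arms = np.arange(len(states))
        adv = self.Q[arms, states, 1] - self.Q[arms, states, 0]
        if self.rng.random() < self.eps:
            return self.rng.choice(arms, size=budget, replace=False)
        return np.argsort(-adv)[:budget]

    def update(self, arm, s, a, reward, s_next):
        """Standard tabular Q-learning update for one observed transition."""
        alpha = 1.0 / self.counts[arm, s, a]          # decaying step size
        self.counts[arm, s, a] += 1.0
        target = reward + self.gamma * self.Q[arm, s_next].max()
        self.Q[arm, s, a] += alpha * (target - self.Q[arm, s, a])
```

A typical loop would call select_arms with each arm's current state, apply the intervention to the chosen arms, and then call update once per arm with the observed transition and reward.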
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.