StudyMe: A New Mobile App for User-Centric N-of-1 Trials
- URL: http://arxiv.org/abs/2108.00320v1
- Date: Sat, 31 Jul 2021 20:43:36 GMT
- Title: StudyMe: A New Mobile App for User-Centric N-of-1 Trials
- Authors: Alexander M. Zenner, Erwin Böttinger, Stefan Konigorski
- Abstract summary: N-of-1 trials are multi-crossover self-experiments that allow individuals to systematically evaluate the effect of interventions on their personal health goals.
We present StudyMe, an open-source mobile application that is freely available from https://play.google.com/store/apps/details?id=health.studyu.me.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: N-of-1 trials are multi-crossover self-experiments that allow individuals to
systematically evaluate the effect of interventions on their personal health
goals. Although several tools for N-of-1 trials exist, none support non-experts
in conducting their own user-centric trials. In this study we present StudyMe,
an open-source mobile application that is freely available from
https://play.google.com/store/apps/details?id=health.studyu.me and offers users
flexibility and guidance in configuring every component of their trials. We
also present research that informed the development of StudyMe. Through an
initial survey with 272 participants, we learned that individuals are
interested in a variety of personal health aspects and have unique ideas on how
to improve them. In an iterative, user-centered development process with
intermediate user tests, we developed StudyMe, which also features an
educational part to communicate N-of-1 trial concepts. A final empirical evaluation of
StudyMe showed that all participants were able to create their own trials
successfully using StudyMe and the app achieved a very good usability rating.
Our findings suggest that StudyMe provides a significant step towards enabling
individuals to apply a systematic science-oriented approach to personalize
health-related interventions and behavior modifications in their everyday
lives.
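To make the core idea of a multi-crossover self-experiment concrete, the following Python sketch simulates a simple ABAB N-of-1 trial with a self-tracked daily outcome and compares the average outcome between baseline (A) and intervention (B) phases. The schedule length, outcome scale, and effect estimate are illustrative assumptions only, not the trial configuration or analysis implemented in StudyMe.

```python
import random
import statistics

# Illustrative ABAB multi-crossover schedule: alternating baseline (A) and
# intervention (B) phases, each lasting 7 days. These choices are assumptions
# for illustration, not StudyMe's actual trial design.
PHASES = ["A", "B", "A", "B"]
DAYS_PER_PHASE = 7

def simulate_outcome(phase: str) -> float:
    """Simulate a self-reported daily outcome (e.g., sleep quality on a 0-10 scale).
    The intervention is assumed to add a small benefit plus random noise."""
    baseline = 5.0
    effect = 1.0 if phase == "B" else 0.0
    return baseline + effect + random.gauss(0, 1)

# Collect one observation per day over the whole schedule.
records = [
    (phase, simulate_outcome(phase))
    for phase in PHASES
    for _ in range(DAYS_PER_PHASE)
]

# Naive effect estimate: mean outcome under intervention minus baseline.
a_values = [y for p, y in records if p == "A"]
b_values = [y for p, y in records if p == "B"]
print(f"Mean A (baseline):     {statistics.mean(a_values):.2f}")
print(f"Mean B (intervention): {statistics.mean(b_values):.2f}")
print(f"Estimated effect:      {statistics.mean(b_values) - statistics.mean(a_values):.2f}")
```

In practice, N-of-1 analyses also account for trends and carry-over effects; the point of the sketch is only the alternating-phase structure that lets a single person serve as their own control.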
Related papers
- Applying and Evaluating Large Language Models in Mental Health Care: A Scoping Review of Human-Assessed Generative Tasks [16.099253839889148]
Large language models (LLMs) are emerging as promising tools for mental health care, offering scalable support through their ability to generate human-like responses.
However, the effectiveness of these models in clinical settings remains unclear.
This scoping review focused on studies where these models were tested with human participants in real-world scenarios.
arXiv Detail & Related papers (2024-08-21T02:21:59Z)
- TrialBench: Multi-Modal Artificial Intelligence-Ready Clinical Trial Datasets [57.067409211231244]
This paper presents meticulously curated AI-ready datasets covering multi-modal data (e.g., drug molecule, disease code, text, categorical/numerical features) and 8 crucial prediction challenges in clinical trial design.
We provide basic validation methods for each task to ensure the datasets' usability and reliability.
We anticipate that the availability of such open-access datasets will catalyze the development of advanced AI approaches for clinical trial design.
arXiv Detail & Related papers (2024-06-30T09:13:10Z)
- Panacea: A foundation model for clinical trial search, summarization, design, and recruitment [29.099676641424384]
We propose a clinical trial foundation model named Panacea.
Panacea is designed to handle multiple tasks, including trial search, trial summarization, trial design, and patient-trial matching.
We also assemble a large-scale dataset, named TrialAlign, of 793,279 trial documents and 1,113,207 trial-related scientific papers.
arXiv Detail & Related papers (2024-06-25T21:29:25Z)
- On (Mis)perceptions of testing effectiveness: an empirical study [1.8026347864255505]
This research aims to discover how well the perceptions of the defect detection effectiveness of different techniques match their real effectiveness in the absence of prior experience.
In the original study, we conducted a controlled experiment with students applying two testing techniques and a code review technique.
At the end of the experiment, they took a survey to find out which technique they perceived to be most effective.
The results of the replicated study confirm the findings of the original study and suggest that participants' perceptions might be based not on their opinions about complexity or preferences for techniques but on how well they think that they have applied the techniques.
arXiv Detail & Related papers (2024-02-11T14:50:01Z)
- Designing and evaluating an online reinforcement learning agent for physical exercise recommendations in N-of-1 trials [0.9865722130817715]
We present an innovative N-of-1 trial study design testing whether implementing a personalized intervention by an online reinforcement learning agent is feasible and effective.
The results show that, first, implementing a personalized intervention by an online reinforcement learning agent is feasible.
Second, such adaptive interventions have the potential to improve patients' benefits even if only a few observations are available.
arXiv Detail & Related papers (2023-09-25T14:08:21Z)
- A Matter of Annotation: An Empirical Study on In Situ and Self-Recall Activity Annotations from Wearable Sensors [56.554277096170246]
We present an empirical study that evaluates and contrasts four commonly employed annotation methods in user studies focused on in-the-wild data collection.
For both the user-driven, in situ annotations, where participants annotate their activities during the actual recording process, and the recall methods, where participants retrospectively annotate their data at the end of each day, the participants had the flexibility to select their own set of activity classes and corresponding labels.
arXiv Detail & Related papers (2023-05-15T16:02:56Z)
- SPOT: Sequential Predictive Modeling of Clinical Trial Outcome with Meta-Learning [67.8195828626489]
Clinical trials are essential to drug development but time-consuming, costly, and prone to failure.
We propose Sequential Predictive mOdeling of clinical Trial outcome (SPOT), which first identifies trial topics and clusters the multi-sourced trial data into these topics.
With the consideration of each trial sequence as a task, it uses a meta-learning strategy to achieve a point where the model can rapidly adapt to new tasks with minimal updates.
arXiv Detail & Related papers (2023-04-07T23:04:27Z)
- Adaptive Identification of Populations with Treatment Benefit in Clinical Trials: Machine Learning Challenges and Solutions [78.31410227443102]
We study the problem of adaptively identifying patient subpopulations that benefit from a given treatment during a confirmatory clinical trial.
We propose AdaGGI and AdaGCPI, two meta-algorithms for subpopulation construction.
arXiv Detail & Related papers (2022-08-11T14:27:49Z)
- Clinical trial site matching with improved diversity using fair policy learning [56.01170456417214]
We learn a model that maps a clinical trial description to a ranked list of potential trial sites.
Unlike existing fairness frameworks, the group membership of each trial site is non-binary.
We propose fairness criteria based on demographic parity to address such a multi-group membership scenario.
arXiv Detail & Related papers (2022-04-13T16:35:28Z)
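As a rough illustration of a demographic-parity-style check for a ranking with non-binary (fractional) group membership, the Python sketch below compares each group's share of top-k exposure against a reference share. The site data, group fractions, and parity gap are hypothetical; this is a generic illustration, not the fairness criterion or policy-learning method proposed in the paper above.

```python
from typing import Dict, List

# Hypothetical trial sites, already ranked by some model, each with fractional
# membership across demographic groups (fractions sum to 1 per site).
# All numbers are made up for illustration.
ranked_sites: List[Dict[str, float]] = [
    {"group_1": 0.7, "group_2": 0.2, "group_3": 0.1},
    {"group_1": 0.5, "group_2": 0.3, "group_3": 0.2},
    {"group_1": 0.6, "group_2": 0.1, "group_3": 0.3},
    {"group_1": 0.4, "group_2": 0.4, "group_3": 0.2},
]

# Reference shares that exposure should roughly match under demographic parity
# (e.g., population proportions); also hypothetical.
reference_share = {"group_1": 0.5, "group_2": 0.3, "group_3": 0.2}

def exposure_share(sites: List[Dict[str, float]], k: int) -> Dict[str, float]:
    """Average fractional group membership among the top-k ranked sites."""
    top_k = sites[:k]
    return {g: sum(site[g] for site in top_k) / k for g in reference_share}

share = exposure_share(ranked_sites, k=3)
gaps = {g: share[g] - reference_share[g] for g in share}
print("Top-3 exposure share:  ", {g: round(v, 2) for g, v in share.items()})
print("Parity gap vs reference:", {g: round(v, 2) for g, v in gaps.items()})
```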
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.