Transformers for prompt-level EMA non-response prediction
- URL: http://arxiv.org/abs/2111.01193v1
- Date: Mon, 1 Nov 2021 18:38:47 GMT
- Title: Transformers for prompt-level EMA non-response prediction
- Authors: Supriya Nagesh, Alexander Moreno, Stephanie M. Carpenter, Jamie Yap,
Soujanya Chatterjee, Steven Lloyd Lizotte, Neng Wan, Santosh Kumar, Cho Lam,
David W. Wetter, Inbal Nahum-Shani, James M. Rehg
- Abstract summary: Ecological Momentary Assessments (EMAs) are an important psychological data source for measuring cognitive states, affect, behavior, and environmental factors.
Non-response, in which participants fail to respond to EMA prompts, is an endemic problem.
The ability to accurately predict non-response could be utilized to improve EMA delivery and develop compliance interventions.
- Score: 62.41658786277712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ecological Momentary Assessments (EMAs) are an important psychological data
source for measuring current cognitive states, affect, behavior, and
environmental factors from participants in mobile health (mHealth) studies and
treatment programs. Non-response, in which participants fail to respond to EMA
prompts, is an endemic problem. The ability to accurately predict non-response
could be utilized to improve EMA delivery and develop compliance interventions.
Prior work has explored classical machine learning models for predicting
non-response. However, as increasingly large EMA datasets become available,
there is the potential to leverage deep learning models that have been
effective in other fields. Recently, transformer models have shown
state-of-the-art performance in NLP and other domains. This work is the first
to explore the use of transformers for EMA data analysis. We address three key
questions in applying transformers to EMA data: (1) input representation, (2)
encoding temporal information, and (3) the utility of pre-training for
improving downstream prediction performance. The transformer model achieves a
non-response prediction AUC of 0.77 and is significantly better than classical
ML and LSTM-based deep learning models. We will make a predictive model
trained on a corpus of 40K EMA samples freely available to the research
community, in order to facilitate the development of future transformer-based
EMA analyses.
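The three questions above map directly onto the components of a sequence model. As a minimal sketch (not the authors' implementation; all layer sizes and feature dimensions are illustrative assumptions), a transformer for prompt-level non-response prediction could embed each EMA prompt's feature vector, add a learned temporal encoding, and classify the latest prompt from the encoder output:

```python
import torch
import torch.nn as nn

class EMANonResponsePredictor(nn.Module):
    """Hypothetical sketch of a transformer for EMA non-response prediction.

    (1) input representation: a linear embedding of per-prompt features
    (2) temporal information: a learned positional encoding
    (3) a classification head on the final prompt's representation
    """

    def __init__(self, n_features=16, d_model=64, n_heads=4, n_layers=2, max_len=128):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)            # (1) input representation
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))  # (2) temporal encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)                      # non-response logit

    def forward(self, x):
        # x: (batch, seq_len, n_features) — one row per past EMA prompt
        h = self.embed(x) + self.pos[:, : x.size(1)]
        h = self.encoder(h)
        # Predict non-response probability for the most recent prompt
        return torch.sigmoid(self.head(h[:, -1]))

model = EMANonResponsePredictor()
probs = model(torch.randn(8, 10, 16))   # batch of 8 participants, 10 prompts each
print(probs.shape)                       # torch.Size([8, 1])
```

The pre-training stage described in the paper could reuse the same encoder with a reconstruction head before attaching the classification head; the feature count, sequence length, and positional-encoding scheme here are placeholders.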
Related papers
- Development and Comparative Analysis of Machine Learning Models for Hypoxemia Severity Triage in CBRNE Emergency Scenarios Using Physiological and Demographic Data from Medical-Grade Devices [0.0]
Gradient Boosting Models (GBMs) outperformed sequential models in terms of training speed, interpretability, and reliability.
A 5-minute prediction window was chosen for timely intervention, with data standardized at minute-level resolution.
This study highlights ML's potential to improve triage and reduce alarm fatigue.
arXiv Detail & Related papers (2024-10-30T23:24:28Z) - FaultFormer: Pretraining Transformers for Adaptable Bearing Fault Classification [7.136205674624813]
We present a novel self-supervised pretraining and fine-tuning framework based on transformer models.
In particular, we investigate different tokenization and data augmentation strategies to reach state-of-the-art accuracies.
This introduces a new paradigm where models can be pretrained on unlabeled data from different bearings, faults, and machinery and quickly deployed to new, data-scarce applications.
arXiv Detail & Related papers (2023-12-04T22:51:02Z) - Transfer Learning on Electromyography (EMG) Tasks: Approaches and Beyond [8.167024471353]
This survey aims to provide an insight into the biological foundations of existing transfer learning methods on EMG-related analysis.
We first introduce the physiological structure of the muscles and the EMG generating mechanism.
We categorize existing research endeavors into data-based, model-based, training-scheme-based, and adversarial-based approaches.
arXiv Detail & Related papers (2022-10-03T11:57:48Z) - Differentiable Agent-based Epidemiology [71.81552021144589]
We introduce GradABM: a scalable, differentiable design for agent-based modeling that is amenable to gradient-based learning with automatic differentiation.
GradABM can simulate million-size populations in a few seconds on commodity hardware, integrate with deep neural networks, and ingest heterogeneous data sources.
arXiv Detail & Related papers (2022-07-20T07:32:02Z) - A Brief Survey of Machine Learning Methods for Emotion Prediction using
Physiological Data [0.974672460306765]
This paper surveys machine learning methods that deploy smartphone and physiological data to predict emotions in real-time.
We showcase the variability of machine learning methods employed to achieve accurate emotion prediction.
The survey also identifies open issues whose resolution could improve prediction performance in future work.
arXiv Detail & Related papers (2022-01-17T19:46:12Z) - Pre-training and Fine-tuning Transformers for fMRI Prediction Tasks [69.85819388753579]
TFF employs a transformer-based architecture and a two-phase training approach.
Self-supervised training is applied to a collection of fMRI scans, where the model is trained for the reconstruction of 3D volume data.
Results show state-of-the-art performance on a variety of fMRI tasks, including age and gender prediction, as well as schizophrenia recognition.
arXiv Detail & Related papers (2021-12-10T18:04:26Z) - Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results but in the presence of concept drift, detection or adaptation techniques have to be applied to maintain the predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z) - One to Many: Adaptive Instrument Segmentation via Meta Learning and
Dynamic Online Adaptation in Robotic Surgical Video [71.43912903508765]
MDAL is a dynamic online adaptive learning scheme for instrument segmentation in robot-assisted surgery.
It learns the general knowledge of instruments and the fast adaptation ability through the video-specific meta-learning paradigm.
It outperforms other state-of-the-art methods on two datasets.
arXiv Detail & Related papers (2021-03-24T05:02:18Z) - Neural Transfer Learning with Transformers for Social Science Text
Analysis [0.0]
Transformer-based models for transfer learning have the potential to achieve higher prediction accuracies with relatively few training data instances.
This paper explains how these methods work, why they might be advantageous, and what their limitations are.
arXiv Detail & Related papers (2021-02-03T15:41:20Z) - Ensemble Transfer Learning for the Prediction of Anti-Cancer Drug
Response [49.86828302591469]
In this paper, we apply transfer learning to the prediction of anti-cancer drug response.
We apply the classic transfer learning framework that trains a prediction model on the source dataset and refines it on the target dataset.
The ensemble transfer learning pipeline is implemented using LightGBM and two deep neural network (DNN) models with different architectures.
arXiv Detail & Related papers (2020-05-13T20:29:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.