Self-supervised Pretraining and Transfer Learning Enable Flu and
COVID-19 Predictions in Small Mobile Sensing Datasets
- URL: http://arxiv.org/abs/2205.13607v1
- Date: Thu, 26 May 2022 20:23:55 GMT
- Title: Self-supervised Pretraining and Transfer Learning Enable Flu and
COVID-19 Predictions in Small Mobile Sensing Datasets
- Authors: Mike A. Merrill and Tim Althoff
- Abstract summary: Mobile sensing data offer an unparalleled opportunity to quantify and act upon previously unmeasurable behavioral changes.
Unlike in natural language processing and computer vision, deep representation learning has yet to broadly impact this domain.
This is due to unique challenges in the behavioral health domain, including very small datasets.
- Score: 10.50818746268231
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detailed mobile sensing data from phones, watches, and fitness trackers offer
an unparalleled opportunity to quantify and act upon previously unmeasurable
behavioral changes in order to improve individual health and accelerate
responses to emerging diseases. Unlike in natural language processing and
computer vision, deep representation learning has yet to broadly impact this
domain, in which the vast majority of research and clinical applications still
rely on manually defined features and boosted tree models or even forgo
predictive modeling altogether due to insufficient accuracy. This is due to
unique challenges in the behavioral health domain, including very small
datasets (~10^1 participants), which frequently contain missing data, consist
of long time series with critical long-range dependencies (length>10^4), and
extreme class imbalances (>10^3:1).
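The abstract above names the general recipe (self-supervised pretraining on unlabeled sensor time series, then transfer to a small labeled influenza/COVID-19 prediction task) without spelling out an architecture. The sketch below is only a minimal illustration of that recipe, not the authors' implementation: the masked-reconstruction objective, module names, and hyperparameters are all assumptions.

```python
# Illustrative sketch (not the paper's code): self-supervised masked-reconstruction
# pretraining of a small Transformer encoder over daily sensor features, followed by
# transfer to a small labeled illness-prediction task. All names and sizes are
# hypothetical placeholders.
import torch
import torch.nn as nn

class SensorEncoder(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.project = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        return self.encoder(self.project(x))   # (batch, seq_len, d_model)

def pretrain_step(encoder, recon_head, x, mask_prob=0.15):
    """Self-supervised step: hide random timesteps and reconstruct them."""
    mask = torch.rand(x.shape[:2], device=x.device) < mask_prob
    x_masked = x.clone()
    x_masked[mask] = 0.0                       # zero out masked timesteps
    recon = recon_head(encoder(x_masked))      # predict the original features
    return nn.functional.mse_loss(recon[mask], x[mask])

def finetune_step(encoder, clf_head, x, y):
    """Transfer step: mean-pool pretrained representations, classify illness."""
    logits = clf_head(encoder(x).mean(dim=1)).squeeze(-1)
    return nn.functional.binary_cross_entropy_with_logits(logits, y)

if __name__ == "__main__":
    enc = SensorEncoder()
    recon_head = nn.Linear(64, 8)              # reconstruction head (pretraining)
    clf_head = nn.Linear(64, 1)                # illness classifier (fine-tuning)
    x_unlabeled = torch.randn(32, 100, 8)      # plentiful unlabeled sensing data
    x_small = torch.randn(8, 100, 8)           # small labeled dataset (~10^1 scale)
    y_small = torch.randint(0, 2, (8,)).float()
    print(pretrain_step(enc, recon_head, x_unlabeled).item())
    print(finetune_step(enc, clf_head, x_small, y_small).item())
```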
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
- Temporally Multi-Scale Sparse Self-Attention for Physical Activity Data Imputation [25.76458454501612]
We study the problem of imputation of missing step count data, one of the most ubiquitous forms of wearable sensor data.
We construct a novel and large scale data set consisting of a training set with over 3 million hourly step count observations and a test set with over 2.5 million hourly step count observations.
We propose a domain knowledge-informed sparse self-attention model for this task that captures the temporal multi-scale nature of step-count data.
arXiv Detail & Related papers (2024-06-27T02:38:25Z)
- Integrating Wearable Sensor Data and Self-reported Diaries for Personalized Affect Forecasting [2.36325543943271]
We propose a multimodal deep learning model for affect status forecasting.
This model combines a transformer encoder with a pre-trained language model, facilitating the integrated analysis of objective metrics and self-reported diaries.
Our results demonstrate that the proposed model achieves predictive accuracy of 82.50% for positive affect and 82.76% for negative affect, a full week in advance.
arXiv Detail & Related papers (2024-03-16T17:24:38Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the process of training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, the noise added for differential privacy makes it harder for the models to converge.
We propose DPFed-post, which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- Transformer-Based Behavioral Representation Learning Enables Transfer Learning for Mobile Sensing in Small Datasets [4.276883061502341]
We provide a neural architecture framework for mobile sensing data that can learn generalizable feature representations from time series.
This architecture combines benefits from CNN and Transformer architectures to enable better prediction performance (a minimal, hypothetical sketch of such a hybrid appears after this list).
arXiv Detail & Related papers (2021-07-09T22:26:50Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that models initialized with ImageNet pretraining show a significant increase in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Pre-training transformer-based framework on large-scale pediatric claims data for downstream population-specific tasks [3.1580072841682734]
This study presents the Claim Pre-Training (Claim-PT) framework, a generic pre-training model that first trains on the entire pediatric claims dataset.
Effective knowledge transfer is then completed through a task-aware fine-tuning stage.
We conducted experiments on a real-world claims dataset with more than one million patient records.
arXiv Detail & Related papers (2021-06-24T15:25:41Z)
- Predicting Parkinson's Disease with Multimodal Irregularly Collected Longitudinal Smartphone Data [75.23250968928578]
Parkinson's Disease is a neurological disorder that is prevalent among elderly people.
Traditional ways to diagnose the disease rely on in-person, subjective clinical evaluations of performance on a set of activity tests.
We propose a novel time-series based approach to predicting Parkinson's Disease with raw activity test data collected by smartphones in the wild.
arXiv Detail & Related papers (2020-09-25T01:50:15Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning method for few-shot image classification (a minimal, hypothetical sketch of the prototype computation appears after this list).
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- A Dynamic Deep Neural Network For Multimodal Clinical Data Analysis [12.02718865835448]
AdaptiveNet is a novel recurrent neural network architecture that can deal with multiple lists of different events.
We apply the architecture to the problem of disease progression prediction in rheumatoid arthritis using the Swiss Clinical Quality Management registry.
arXiv Detail & Related papers (2020-08-14T11:19:32Z)
- 1-D Convolutional Neural Networks for the Analysis of Pupil Size Variations in Scotopic Conditions [79.71065005161566]
1-D convolutional neural network models are trained for classification of short-range sequences.
The model provides predictions with high average accuracy on a held-out test set.
arXiv Detail & Related papers (2020-02-06T17:25:37Z)
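As referenced in the Transformer-Based Behavioral Representation Learning entry above, one common way to combine CNN and Transformer benefits for long mobile-sensing time series is a strided 1-D convolutional front-end that compresses the raw sequence into shorter local feature tokens, followed by a Transformer encoder over those tokens. The sketch below is a hypothetical illustration of that hybrid; the layer sizes, the 4x downsampling, and the mean-pooled classifier head are assumptions, not the paper's actual architecture.

```python
# Illustrative sketch (assumed structure, not the paper's code) of a CNN + Transformer
# hybrid for long mobile-sensing time series.
import torch
import torch.nn as nn

class ConvTransformer(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Strided convolutions: local patterns + 4x temporal downsampling.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, d_model, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.classifier = nn.Linear(d_model, 1)

    def forward(self, x):                        # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2))         # (batch, d_model, seq_len // 4)
        h = self.transformer(h.transpose(1, 2))  # long-range structure over tokens
        return self.classifier(h.mean(dim=1)).squeeze(-1)  # one logit per sequence

if __name__ == "__main__":
    model = ConvTransformer()
    x = torch.randn(4, 1024, 8)                  # 4 long sensing sequences
    print(model(x).shape)                        # torch.Size([4])
```

Similarly, for the Select-ProtoNet entry, the Prototypical Network it builds on classifies a query example by its distance to per-class prototypes, where each prototype is the mean embedding of the few labeled support examples of that class. The sketch below illustrates that prototype computation under assumed input sizes; the embedding network and the episodic setup are hypothetical.

```python
# Illustrative sketch of one Prototypical Network episode for few-shot subtype
# prediction (assumed formulation; names and sizes are hypothetical).
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

def prototypical_logits(support_x, support_y, query_x, n_classes):
    """Negative squared Euclidean distance from each query to each class prototype."""
    z_support = embed(support_x)                     # (n_support, 16)
    z_query = embed(query_x)                         # (n_query, 16)
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(n_classes)
    ])                                               # (n_classes, 16)
    return -torch.cdist(z_query, prototypes) ** 2    # (n_query, n_classes)

if __name__ == "__main__":
    support_x = torch.randn(10, 32)                  # 5 labeled patients per subtype
    support_y = torch.tensor([0] * 5 + [1] * 5)
    query_x, query_y = torch.randn(6, 32), torch.randint(0, 2, (6,))
    logits = prototypical_logits(support_x, support_y, query_x, n_classes=2)
    loss = nn.functional.cross_entropy(logits, query_y)  # episodic training loss
    print(logits.shape, loss.item())
```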
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences of its use.