Leveraging LLMs to Predict Affective States via Smartphone Sensor Features
- URL: http://arxiv.org/abs/2407.08240v1
- Date: Thu, 11 Jul 2024 07:37:52 GMT
- Title: Leveraging LLMs to Predict Affective States via Smartphone Sensor Features
- Authors: Tianyi Zhang, Songyan Teng, Hong Jia, Simon D'Alfonso
- Abstract summary: Digital phenotyping involves collecting and analysing data from personal digital devices to infer behaviours and mental health.
The emergence of large language models (LLMs) offers a new approach to make sense of smartphone sensing data.
Our study aims to bridge this gap by employing LLMs to predict affect outcomes based on smartphone sensing data from university students.
- Score: 6.1930355276269875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As mental health issues for young adults present a pressing public health concern, daily digital mood monitoring for early detection has become an important prospect. An active research area, digital phenotyping, involves collecting and analysing data from personal digital devices such as smartphones (usage and sensors) and wearables to infer behaviours and mental health. Whilst this data is standardly analysed using statistical and machine learning approaches, the emergence of large language models (LLMs) offers a new approach to make sense of smartphone sensing data. Despite their effectiveness across various domains, LLMs remain relatively unexplored in digital mental health, particularly in integrating mobile sensor data. Our study aims to bridge this gap by employing LLMs to predict affect outcomes based on smartphone sensing data from university students. We demonstrate the efficacy of zero-shot and few-shot embedding LLMs in inferring general wellbeing. Our findings reveal that LLMs can make promising predictions of affect measures using solely smartphone sensing data. This research sheds light on the potential of LLMs for affective state prediction, emphasizing the intricate link between smartphone behavioral patterns and affective states. To our knowledge, this is the first work to leverage LLMs for affective state prediction and digital phenotyping tasks.
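The abstract describes zero-shot and few-shot prompting of LLMs over daily sensing features, but the prompts themselves are not reproduced here. The minimal sketch below shows one plausible zero-shot setup; the feature names, prompt wording, 1-5 rating scale, and the call_llm placeholder are illustrative assumptions, not the authors' pipeline.
```python
# Minimal sketch, assuming a zero-shot prompting setup: one day of smartphone
# sensing features is serialised into text and an LLM is asked to rate affect.
# Feature names, prompt wording, the 1-5 scale, and call_llm() are assumptions.

def build_zero_shot_prompt(features: dict) -> str:
    """Serialise one day of sensing features into a natural-language prompt."""
    feature_lines = "\n".join(f"- {name}: {value}" for name, value in features.items())
    return (
        "Below is one day of smartphone sensing features for a university student.\n"
        f"{feature_lines}\n"
        "On a scale of 1 (very low) to 5 (very high), rate the student's positive "
        "affect and negative affect for this day. Answer as: positive=<n>, negative=<n>."
    )


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API; wire in a real LLM client here."""
    raise NotImplementedError("Connect an LLM backend of your choice.")


if __name__ == "__main__":
    day_features = {
        "screen_unlocks": 74,
        "screen_time_minutes": 312,
        "step_count": 4210,
        "time_at_home_hours": 16.5,
        "outgoing_calls": 1,
        "late_night_phone_use_minutes": 48,
    }
    print(build_zero_shot_prompt(day_features))  # inspect the generated prompt
    # print(call_llm(build_zero_shot_prompt(day_features)))  # once a backend is wired in
```
A few-shot variant would prepend a handful of labelled example days in the same format before the query day.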
Related papers
- AWARE Narrator and the Utilization of Large Language Models to Extract Behavioral Insights from Smartphone Sensing Data [6.110013784860154]
Smartphones facilitate the tracking of health-related behaviors and contexts, contributing significantly to digital phenotyping.
We introduce a novel approach that systematically converts smartphone-collected data into structured, chronological narratives.
We apply the framework to data collected from university students over a week, demonstrating the potential of such narratives to summarize individual behavior.
arXiv Detail & Related papers (2024-11-07T13:23:57Z)
- On-device Federated Learning in Smartphones for Detecting Depression from Reddit Posts [0.0]
Social media posts provide valuable information about individuals' mental health conditions.
In this study, we adopt Federated Learning (FL) to facilitate decentralized training on smartphones.
To optimize the training process, we leverage a common tokenizer across all client devices.
arXiv Detail & Related papers (2024-10-17T16:09:32Z)
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM (Large Sensor Model).
Our results establish the scaling laws of LSM for tasks such as imputation and extrapolation, both across time and sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z)
- Loneliness Forecasting Using Multi-modal Wearable and Mobile Sensing in Everyday Settings [1.7253972752874662]
This study employs wearable devices, such as smart rings and watches, to monitor early physiological indicators of loneliness.
Smartphones are employed to capture initial behavioral signs of loneliness.
Through the development of personalized models, we achieved a notable accuracy of 0.82 and an F1 score of 0.82 in forecasting loneliness levels seven days in advance.
arXiv Detail & Related papers (2024-09-15T18:33:02Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Predicting Affective States from Screen Text Sentiment [11.375704805270171]
The potential of analysing the textual content viewed on smartphones to predict affective states remains underexplored.
We employed linear regression, zero-shot, and multi-shot prompting to analyse relationships between screen text and affective states.
Our findings indicate that multi-shot prompting substantially outperforms both linear regression and zero-shot prompting.
arXiv Detail & Related papers (2024-08-23T05:25:11Z)
- A Survey on Detection of LLMs-Generated Content [97.87912800179531]
The ability to detect LLMs-generated content has become of paramount importance.
We aim to provide a detailed overview of existing detection strategies and benchmarks.
We also posit the necessity for a multi-faceted approach to defend against various attacks.
arXiv Detail & Related papers (2023-10-24T09:10:26Z)
- Redefining Digital Health Interfaces with Large Language Models [69.02059202720073]
Large Language Models (LLMs) have emerged as general-purpose models with the ability to process complex information.
We show how LLMs can provide a novel interface between clinicians and digital technologies.
We develop a new prognostic tool using automated machine learning.
arXiv Detail & Related papers (2023-10-05T14:18:40Z)
- Objective Prediction of Tomorrow's Affect Using Multi-Modal Physiological Data and Personal Chronicles: A Study of Monitoring College Student Well-being in 2020 [0.0]
The goal of our study was to investigate the capacity to more accurately predict affect through a fully automatic and objective approach using multiple commercial devices.
Longitudinal physiological data and daily assessments of emotions were collected from a sample of college students using smart wearables and phones for over a year.
Results showed that our model was able to predict next-day affect with accuracy comparable to state-of-the-art methods.
arXiv Detail & Related papers (2022-01-26T23:06:20Z)
- Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data [74.60507696087966]
Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.
arXiv Detail & Related papers (2021-06-24T17:46:03Z)
- Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
arXiv Detail & Related papers (2020-05-06T09:02:30Z)
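As a rough illustration of the multiple-instance learning idea in the tremor paper above: each recording session is treated as a bag of short IMU windows, and an attention-pooling step decides which windows drive the session-level tremor prediction. The sketch below is a generic attention-MIL model in PyTorch, not the paper's published architecture; layer sizes, window length, and window count are arbitrary assumptions.
```python
# Hedged sketch of attention-based multiple-instance learning over IMU windows.
# Architecture details are assumptions, not the tremor paper's exact model.
import torch
import torch.nn as nn


class IMUMilClassifier(nn.Module):
    """Bag = one recording session; instances = fixed-length 6-axis IMU windows."""

    def __init__(self, channels: int = 6, embed_dim: int = 64):
        super().__init__()
        # Per-window (instance) encoder over raw accelerometer/gyroscope channels.
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, embed_dim),
            nn.ReLU(),
        )
        # Attention weights decide which windows drive the session-level prediction.
        self.attention = nn.Sequential(nn.Linear(embed_dim, 32), nn.Tanh(), nn.Linear(32, 1))
        self.classifier = nn.Linear(embed_dim, 1)  # session-level tremor logit

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_windows, channels, window_len)
        h = self.encoder(bag)                        # (num_windows, embed_dim)
        a = torch.softmax(self.attention(h), dim=0)  # attention over windows
        z = (a * h).sum(dim=0)                       # weighted bag embedding
        return self.classifier(z)                    # tremor-vs-not logit


if __name__ == "__main__":
    model = IMUMilClassifier()
    session = torch.randn(30, 6, 200)  # 30 windows of 6-axis IMU samples
    print(torch.sigmoid(model(session)))
```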