Learning Language and Multimodal Privacy-Preserving Markers of Mood from
Mobile Data
- URL: http://arxiv.org/abs/2106.13213v1
- Date: Thu, 24 Jun 2021 17:46:03 GMT
- Title: Learning Language and Multimodal Privacy-Preserving Markers of Mood from
Mobile Data
- Authors: Paul Pu Liang, Terrance Liu, Anna Cai, Michal Muszynski, Ryo Ishii,
Nicholas Allen, Randy Auerbach, David Brent, Ruslan Salakhutdinov,
Louis-Philippe Morency
- Abstract summary: Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.
- Score: 74.60507696087966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mental health conditions remain underdiagnosed even in countries with common
access to advanced medical care. The ability to accurately and efficiently
predict mood from easily collectible data has several important implications
for the early detection, intervention, and treatment of mental health
disorders. One promising data source to help monitor human behavior is daily
smartphone usage. However, care must be taken to summarize behaviors without
identifying the user through personal (e.g., personally identifiable
information) or protected (e.g., race, gender) attributes. In this paper, we
study behavioral markers of daily mood using a recent dataset of mobile
behaviors from adolescent populations at high risk of suicidal behaviors. Using
computational models, we find that language and multimodal representations of
mobile typed text (spanning typed characters, words, keystroke timings, and app
usage) are predictive of daily mood. However, we find that models trained to
predict mood often also capture private user identities in their intermediate
representations. To tackle this problem, we evaluate approaches that obfuscate
user identity while remaining predictive. By combining multimodal
representations with privacy-preserving learning, we are able to push forward
the performance-privacy frontier.
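As a concrete illustration of the kind of approach the abstract alludes to, the sketch below shows one common way to obfuscate user identity in learned representations: adversarial training with a gradient-reversal layer. This is a minimal PyTorch sketch under assumed module sizes and label sets, not the authors' actual model; the paper evaluates its own obfuscation approaches.

```python
# Illustrative only: a gradient-reversal adversary is one standard way to trade
# mood accuracy against user re-identification risk; the paper's own methods
# may differ. Dimensions, label sets, and hyperparameters below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class PrivacyPreservingMoodModel(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_moods=3, n_users=100, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mood_head = nn.Linear(hidden, n_moods)       # daily mood prediction
        self.identity_head = nn.Linear(hidden, n_users)   # adversary: which user is this?

    def forward(self, x):
        z = self.encoder(x)                               # shared multimodal representation
        mood_logits = self.mood_head(z)
        # The adversary sees z through gradient reversal, so minimizing its loss
        # pushes the encoder to remove identity information from z.
        id_logits = self.identity_head(GradReverse.apply(z, self.lam))
        return mood_logits, id_logits


# Toy usage with random stand-ins for multimodal features
# (e.g., typed-text, keystroke-timing, and app-usage statistics).
model = PrivacyPreservingMoodModel()
x = torch.randn(8, 32)
mood_y = torch.randint(0, 3, (8,))
user_y = torch.randint(0, 100, (8,))
mood_logits, id_logits = model(x)
loss = F.cross_entropy(mood_logits, mood_y) + F.cross_entropy(id_logits, user_y)
loss.backward()
```

In this pattern, the gradient flowing from the identity head into the encoder is sign-flipped, so improving the adversary pushes the shared representation to discard identifying information while the mood head keeps it predictive of daily mood.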
Related papers
- AWARE Narrator and the Utilization of Large Language Models to Extract Behavioral Insights from Smartphone Sensing Data [6.110013784860154]
Smartphones facilitate the tracking of health-related behaviors and contexts, contributing significantly to digital phenotyping.
We introduce a novel approach that systematically converts smartphone-collected data into structured, chronological narratives.
We apply the framework to the data collected from university students over a week, demonstrating the potential of utilizing the narratives to summarize individual behavior.
arXiv Detail & Related papers (2024-11-07T13:23:57Z)
- Leveraging LLMs to Predict Affective States via Smartphone Sensor Features [6.1930355276269875]
Digital phenotyping involves collecting and analysing data from personal digital devices to infer behaviours and mental health.
The emergence of large language models (LLMs) offers a new approach to make sense of smartphone sensing data.
Our study aims to bridge this gap by employing LLMs to predict affect outcomes based on smartphone sensing data from university students (a minimal prompt-construction sketch appears after this list).
arXiv Detail & Related papers (2024-07-11T07:37:52Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Exploring Memorization in Fine-tuned Language Models [53.52403444655213]
We conduct the first comprehensive analysis to explore language models' memorization during fine-tuning across tasks.
Our studies with open-sourced and our own fine-tuned LMs across various tasks indicate that memorization presents a strong disparity among different fine-tuning tasks.
We provide an intuitive explanation of this task disparity via sparse coding theory and unveil a strong correlation between memorization and attention score distribution.
arXiv Detail & Related papers (2023-10-10T15:41:26Z)
- Objective Prediction of Tomorrow's Affect Using Multi-Modal Physiological Data and Personal Chronicles: A Study of Monitoring College Student Well-being in 2020 [0.0]
The goal of our study was to investigate the capacity to more accurately predict affect through a fully automatic and objective approach using multiple commercial devices.
Longitudinal physiological data and daily assessments of emotions were collected from a sample of college students using smart wearables and phones for over a year.
Results showed that our model was able to predict next-day affect with accuracy comparable to state-of-the-art methods.
arXiv Detail & Related papers (2022-01-26T23:06:20Z)
- Covert Embodied Choice: Decision-Making and the Limits of Privacy Under Biometric Surveillance [6.92628425870087]
We present results from a virtual reality task in which gaze, movement, and other physiological signals are tracked.
We find that while participants use a variety of strategies, the collected data remain highly predictive of their choices (80% accuracy).
A significant portion of participants became more predictable despite efforts to obfuscate, possibly indicating mistaken priors about the dynamics of algorithmic prediction.
arXiv Detail & Related papers (2021-01-04T04:45:22Z)
- Multimodal Privacy-preserving Mood Prediction from Mobile Data: A Preliminary Study [34.550824104906255]
Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from high-risk adolescent populations.
arXiv Detail & Related papers (2020-12-04T01:44:22Z)
- Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset for Personality Assessment [50.15466026089435]
We present a novel peer-to-peer Hindi conversation dataset- Vyaktitv.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of socio-demographic features for all participants, such as income and cultural orientation, among several others.
arXiv Detail & Related papers (2020-08-31T17:44:28Z)
- Anxiety Detection Leveraging Mobile Passive Sensing [53.11661460916551]
Anxiety disorders are the most common class of psychiatric problems affecting both children and adults.
Leveraging passive and unobtrusive data collection from smartphones could be a viable alternative to classical methods.
eWellness is an experimental mobile application designed to continuously and passively track a full suite of sensor and user-log data from an individual's device.
arXiv Detail & Related papers (2020-08-09T20:22:52Z)
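As referenced in the "Leveraging LLMs to Predict Affective States via Smartphone Sensor Features" entry above, the sketch below shows how a day's sensing features might be rendered into an LLM prompt for affect prediction. The feature names, prompt wording, and the query_llm() stub are illustrative assumptions, not the cited study's actual features or pipeline.

```python
# Illustrative only: turn a day's smartphone sensing summary into a prompt an LLM
# could score for affect. Feature names and wording are assumptions; query_llm()
# is a hypothetical placeholder for whatever LLM API is actually used.
def build_prompt(features: dict) -> str:
    lines = [f"- {name}: {value}" for name, value in features.items()]
    return (
        "Given the following daily smartphone sensing summary for a student,\n"
        "rate their likely affect today as one word: low, neutral, or high.\n"
        + "\n".join(lines)
    )


def query_llm(prompt: str) -> str:
    # Placeholder: call an LLM of choice here and return its one-word answer.
    raise NotImplementedError


features = {
    "screen_time_hours": 6.2,
    "messaging_minutes": 45,
    "steps": 3100,
    "sleep_hours": 5.5,
}
print(build_prompt(features))
# predicted_affect = query_llm(build_prompt(features))
```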