Predicting Affective States from Screen Text Sentiment
- URL: http://arxiv.org/abs/2408.12844v1
- Date: Fri, 23 Aug 2024 05:25:11 GMT
- Title: Predicting Affective States from Screen Text Sentiment
- Authors: Songyan Teng, Tianyi Zhang, Simon D'Alfonso, Vassilis Kostakos
- Abstract summary: The potential of analysing the textual content viewed on smartphones to predict affective states remains underexplored.
We employed linear regression, zero-shot, and multi-shot prompting to analyse relationships between screen text and affective states.
Our findings indicate that multi-shot prompting substantially outperforms both linear regression and zero-shot prompting.
- Score: 11.375704805270171
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The proliferation of mobile sensing technologies has enabled the study of various physiological and behavioural phenomena through unobtrusive data collection from smartphone sensors. This approach offers real-time insights into individuals' physical and mental states, creating opportunities for personalised treatment and interventions. However, the potential of analysing the textual content viewed on smartphones to predict affective states remains underexplored. To better understand how the screen text that users are exposed to and interact with can influence their affect, we investigated a subset of data obtained from a digital phenotyping study of Australian university students conducted in 2023. We employed linear regression, zero-shot, and multi-shot prompting using a large language model (LLM) to analyse relationships between screen text and affective states. Our findings indicate that multi-shot prompting substantially outperforms both linear regression and zero-shot prompting, highlighting the importance of context in affect prediction. We discuss the value of incorporating textual and sentiment data for improving affect prediction, providing a basis for future advancements in understanding smartphone use and wellbeing.
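The abstract names three modelling approaches without implementation detail. The sketch below is one plausible reading, assuming a per-session sentiment score and self-reported valence labels; the dataset rows, prompts, `gpt-4o-mini` model name, and OpenAI-style client are all illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the three approaches named in the abstract:
# (1) linear regression on a sentiment feature, (2) zero-shot prompting,
# (3) multi-shot prompting with an LLM. Data, prompts, and the model name
# are illustrative assumptions, not the authors' implementation.
from sklearn.linear_model import LinearRegression
from openai import OpenAI  # any chat-completion client would work

# Hypothetical sessions: (screen text, mean sentiment score, self-reported
# valence on a 1-5 scale).
train = [
    ("argument in a group chat", -0.62, 2),
    ("skimming neutral news headlines", 0.10, 3),
    ("congratulations messages from friends", 0.75, 5),
]
test_text, test_sentiment = "planning a weekend trip with friends", 0.55

# (1) Linear regression: predict valence from the sentiment feature alone.
X = [[s] for _, s, _ in train]
y = [v for _, _, v in train]
reg = LinearRegression().fit(X, y)
print("linear regression:", reg.predict([[test_sentiment]])[0])

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# (2) Zero-shot: only the task description, no labelled examples.
zero_shot = ask(
    "On a 1-5 scale (1 = very negative, 5 = very positive), rate the likely "
    f"affective valence of a user whose screen showed: '{test_text}'. "
    "Reply with a single number."
)

# (3) Multi-shot: labelled examples give the model task context.
shots = "\n".join(f"Screen text: '{t}' -> valence {v}" for t, _, v in train)
multi_shot = ask(
    f"Examples of screen text and self-reported valence (1-5):\n{shots}\n\n"
    f"Now rate: '{test_text}'. Reply with a single number."
)
print("zero-shot:", zero_shot, "| multi-shot:", multi_shot)
```

The contrast between (2) and (3) mirrors the paper's finding that supplying labelled context in the prompt substantially improves affect prediction over both the regression baseline and zero-shot prompting.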
Related papers
- AWARE Narrator and the Utilization of Large Language Models to Extract Behavioral Insights from Smartphone Sensing Data [6.110013784860154]
Smartphones facilitate the tracking of health-related behaviors and contexts, contributing significantly to digital phenotyping.
We introduce a novel approach that systematically converts smartphone-collected data into structured, chronological narratives.
We apply the framework to the data collected from university students over a week, demonstrating the potential of utilizing the narratives to summarize individual behavior.
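The entry above describes converting smartphone-collected data into structured, chronological narratives. Here is a minimal sketch of that idea; the event types and sentence templates are invented, not the paper's actual narrative format.

```python
# Sketch: turn timestamped smartphone sensing events into a chronological
# narrative. Event types and templates are illustrative assumptions.
from datetime import datetime

events = [
    ("2023-05-01T09:02", "screen_on", None),
    ("2023-05-01T09:03", "app_use", "Instagram"),
    ("2023-05-01T09:41", "location", "university library"),
    ("2023-05-01T13:10", "app_use", "Messenger"),
]

templates = {
    "screen_on": "the phone screen was turned on",
    "app_use": "the user opened {arg}",
    "location": "the user arrived at {arg}",
}

def narrate(events):
    lines = []
    for ts, kind, arg in sorted(events):
        when = datetime.fromisoformat(ts).strftime("%H:%M")
        lines.append(f"At {when}, {templates[kind].format(arg=arg)}.")
    return " ".join(lines)

print(narrate(events))
# At 09:02, the phone screen was turned on. At 09:03, the user opened
# Instagram. ...
```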
arXiv Detail & Related papers (2024-11-07T13:23:57Z)
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a "multimodal transcript".
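As a rough illustration of what a "multimodal transcript" could look like (the paper's exact serialization is not given here), one might interleave non-verbal cues with the spoken turns before passing the text to an LLM; the timestamps, speaker tags, and cue names below are invented.

```python
# Sketch: merge verbal and non-verbal streams into a single textual
# "multimodal transcript" an LLM can consume. Field values are invented.
utterances = [
    (12.0, "A", "So how was the workshop?"),
    (15.5, "B", "It was fine, I guess."),
]
nonverbal = [
    (14.8, "B", "gaze aversion"),
    (16.0, "B", "long pause"),
]

# Tag each event with its stream, merge, and sort by timestamp.
merged = sorted(
    [(t, s, "says", txt) for t, s, txt in utterances]
    + [(t, s, "displays", cue) for t, s, cue in nonverbal]
)
transcript = "\n".join(f"[{t:05.1f}s] {s} {verb}: {x}" for t, s, verb, x in merged)
print(transcript)
# A prompt could then ask an LLM to rate engagement from this transcript.
```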
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Leveraging LLMs to Predict Affective States via Smartphone Sensor Features [6.1930355276269875]
Digital phenotyping involves collecting and analysing data from personal digital devices to infer behaviours and mental health.
The emergence of large language models (LLMs) offers a new approach to make sense of smartphone sensing data.
Our study aims to bridge this gap by employing LLMs to predict affect outcomes based on smartphone sensing data from university students.
arXiv Detail & Related papers (2024-07-11T07:37:52Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- When to generate hedges in peer-tutoring interactions [1.0466434989449724]
The study uses a naturalistic face-to-face dataset annotated for natural language turns, conversational strategies, tutoring strategies, and nonverbal behaviours.
Results show that embedding layers, which capture the semantic information of the previous turns, significantly improve the model's performance.
We discover that the eye gaze of both the tutor and the tutee has a significant impact on hedge prediction.
arXiv Detail & Related papers (2023-07-28T14:29:19Z)
- REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection [3.6678641723285446]
We propose a novel approach for Readers' Emotion Detection from short-text documents using a deep learning model called REDAffectiveLM.
We leverage context-specific and affect-enriched representations by using a transformer-based pre-trained language model in tandem with an affect-enriched Bi-LSTM+Attention network.
arXiv Detail & Related papers (2023-01-21T19:28:25Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones with cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- A Brief Survey of Machine Learning Methods for Emotion Prediction using Physiological Data [0.974672460306765]
This paper surveys machine learning methods that deploy smartphone and physiological data to predict emotions in real-time.
We showcase the variability of machine learning methods employed to achieve accurate emotion prediction.
Performance could be improved in future work by addressing the open issues the survey identifies.
arXiv Detail & Related papers (2022-01-17T19:46:12Z)
- Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data [74.60507696087966]
Mental health conditions remain underdiagnosed even in countries with widespread access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.
arXiv Detail & Related papers (2021-06-24T17:46:03Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task, holistic framework is presented that can jointly learn, generalize effectively, and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.