Characterizing Personality from Eye-Tracking: The Role of Gaze and Its Absence in Interactive Search Environments
- URL: http://arxiv.org/abs/2601.08287v1
- Date: Tue, 13 Jan 2026 07:24:39 GMT
- Title: Characterizing Personality from Eye-Tracking: The Role of Gaze and Its Absence in Interactive Search Environments
- Authors: Jiaman He, Marta Micheli, Damiano Spina, Dana McKay, Johanne R. Trippas, Noriko Kando,
- Abstract summary: This study aims to characterize personality traits through a multimodal time-series model. We rely on raw gaze data from an eye tracker, minimizing preprocessing. We trained models to predict personality traits using gaze signals.
- Score: 8.094997233445584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personality traits influence how individuals engage, behave, and make decisions during the information-seeking process. However, few studies have linked personality to observable search behaviors. This study aims to characterize personality traits through a multimodal time-series model that integrates eye-tracking data and gaze missingness: periods when the user's gaze is not captured. This approach is based on the idea that people often look away when they think, signaling disengagement or reflection. We conducted a user study with 25 participants, who used an interactive iPad application to engage with digital artifacts from a museum. We rely on raw gaze data from an eye tracker, minimizing preprocessing so that behavioral patterns are preserved without substantial data cleaning. On this basis, we trained models to predict personality traits from gaze signals. Our results from a five-fold cross-validation study demonstrate strong predictive performance across all five dimensions: Neuroticism (Macro F1 = 77.69%), Conscientiousness (74.52%), Openness (77.52%), Agreeableness (73.09%), and Extraversion (76.69%). An ablation study examines whether the absence of gaze information affects model performance, demonstrating that incorporating missingness improves multimodal time-series modeling. The full model, which integrates both time-series signals and missingness information, achieves 10-15% higher accuracy and macro F1 scores across all Big Five traits compared to the model without time-series signals and missingness information. These findings provide evidence that personality can be inferred from search-related gaze behavior and demonstrate the value of incorporating missing gaze data into time-series multimodal modeling.
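The abstract's core modeling idea, integrating raw gaze signals with an explicit representation of when gaze is missing, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, zero-filling convention, and array layout are illustrative assumptions, showing only the common pattern of encoding missingness as an extra binary channel alongside the time series.

```python
# Minimal sketch (assumed, not the paper's code): encode gaze
# "missingness" as a binary channel stacked onto the raw signal.
import numpy as np

def add_missingness_channel(gaze_xy: np.ndarray) -> np.ndarray:
    """Stack raw gaze (T, 2) with a binary mask marking missing samples.

    gaze_xy: array of shape (T, 2); rows containing NaN mean the
    tracker lost the gaze (e.g. the user looked away while thinking).
    Returns an array of shape (T, 3): x, y, and is_missing in {0, 1}.
    """
    # 1.0 where either coordinate is NaN, i.e. gaze was not captured.
    missing = np.isnan(gaze_xy).any(axis=1).astype(float)
    # Fill the gaps so downstream models receive finite inputs;
    # zero-filling is one common convention, not the paper's choice.
    filled = np.nan_to_num(gaze_xy, nan=0.0)
    return np.column_stack([filled, missing])

# Example: 4 samples, the 3rd one lost by the tracker.
gaze = np.array([[0.1, 0.2],
                 [0.3, 0.4],
                 [np.nan, np.nan],
                 [0.5, 0.6]])
features = add_missingness_channel(gaze)  # shape (4, 3)
```

A time-series classifier trained on such features can then use the mask channel directly, which is one simple way the ablation's "with vs. without missingness" comparison can be realized.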
Related papers
- Enhancing Personality Recognition by Comparing the Predictive Power of Traits, Facets, and Nuances [37.83859643892549]
Personality recognition models aim to infer personality traits from different sources of behavioral data. We trained a transformer-based model including cross-modal (audiovisual) and cross-subject (dyad-aware) attention mechanisms. Results show that nuance-level models consistently outperform facet- and trait-level models, reducing mean squared error by up to 74% across interaction scenarios.
arXiv Detail & Related papers (2026-02-05T13:35:04Z)
- CHART-6: Human-Centered Evaluation of Data Visualization Understanding in Vision-Language Models [18.891323067948285]
It is unclear to what degree vision-language models emulate human behavior on tasks that involve reasoning about data visualizations. Here we evaluated eight vision-language models on six data visualization literacy assessments designed for humans. We found that these models performed worse than human participants on average.
arXiv Detail & Related papers (2025-05-22T18:15:04Z)
- Evaluating Multiview Object Consistency in Humans and Image Models [68.36073530804296]
We leverage an experimental design from the cognitive sciences which requires zero-shot visual inferences about object shape.
We collect 35K trials of behavioral data from over 500 participants.
We then evaluate the performance of common vision models.
arXiv Detail & Related papers (2024-09-09T17:59:13Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Stubborn Lexical Bias in Data and Models [50.79738900885665]
We use a new statistical method to examine whether spurious patterns in data appear in models trained on the data.
We apply an optimization approach to *reweight* the training data, reducing thousands of spurious correlations.
Surprisingly, though this method can successfully reduce lexical biases in the training data, we still find strong evidence of corresponding bias in the trained models.
arXiv Detail & Related papers (2023-06-03T20:12:27Z)
- Dataset Bias in Human Activity Recognition [57.91018542715725]
This contribution statistically curates the training data to assess to what degree the physical characteristics of humans influence HAR performance.
We evaluate the performance of a state-of-the-art convolutional neural network on two time-series HAR datasets that vary in sensors, activities, and recording conditions.
arXiv Detail & Related papers (2023-01-19T12:33:50Z)
- Multi-modal Affect Analysis using standardized data within subjects in the Wild [8.05417723395965]
We introduce an affective recognition method focusing on facial expression (EXP) and valence-arousal calculation.
Our proposed framework can improve estimation accuracy and robustness effectively.
arXiv Detail & Related papers (2021-07-07T04:18:28Z)
- Visual Distant Supervision for Scene Graph Generation [66.10579690929623]
Scene graph models usually require supervised learning on large quantities of labeled data with intensive human annotation.
We propose visual distant supervision, a novel paradigm of visual relation learning, which can train scene graph models without any human-labeled data.
Comprehensive experimental results show that our distantly supervised model outperforms strong weakly supervised and semi-supervised baselines.
arXiv Detail & Related papers (2021-03-29T06:35:24Z)
- Personality-Driven Gaze Animation with Conditional Generative Adversarial Networks [0.24366811507669117]
We train the model using eye-tracking data and personality traits of 42 participants performing an everyday task.
Given the values of Big-Five personality traits, our model generates time series data consisting of gaze target, blinking times, and pupil dimensions.
We use the generated data to synthesize the gaze motion of virtual agents on a game engine.
arXiv Detail & Related papers (2020-11-11T00:31:45Z)
- Muti-view Mouse Social Behaviour Recognition with Deep Graphical Model [124.26611454540813]
Social behaviour analysis of mice is an invaluable tool to assess therapeutic efficacy of neurodegenerative diseases.
Because of their potential to create rich descriptions of mouse social behaviors, multi-view video recordings of rodents are receiving increasing attention.
We propose a novel multiview latent-attention and dynamic discriminative model that jointly learns view-specific and view-shared sub-structures.
arXiv Detail & Related papers (2020-11-04T18:09:58Z)
- A Multi-term and Multi-task Analyzing Framework for Affective Analysis in-the-wild [0.2216657815393579]
We introduce an affective recognition method that was submitted to the Affective Behavior Analysis in-the-wild (ABAW) 2020 Contest.
Since affective behaviors have many observable features that have their own time frames, we introduced multiple optimized time windows.
We generated affective recognition models for each time window and ensembled these models together.
arXiv Detail & Related papers (2020-09-29T09:24:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.