PersonalityScanner: Exploring the Validity of Personality Assessment Based on Multimodal Signals in Virtual Reality
- URL: http://arxiv.org/abs/2407.19728v1
- Date: Mon, 29 Jul 2024 06:17:41 GMT
- Title: PersonalityScanner: Exploring the Validity of Personality Assessment Based on Multimodal Signals in Virtual Reality
- Authors: Xintong Zhang, Di Lu, Huiqi Hu, Nan Jiang, Xianhao Yu, Jinan Xu, Yujia Peng, Qing Li, Wenjuan Han
- Abstract summary: PersonalityScanner is a VR simulator to stimulate cognitive processes and simulate daily behaviors.
We collect a synchronous multi-modal dataset with ten modalities, including first/third-person video, audio, text, eye tracking, facial microexpression, pose, depth data, log, and inertial measurement unit.
- Score: 44.15145632980038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human cognition significantly influences expressed behavior and is intrinsically tied to authentic personality traits. Personality assessment plays a pivotal role in various fields, including psychology, education, social media, etc. However, traditional self-report questionnaires can only provide data based on what individuals are willing and able to disclose, thereby lacking objectivity. Moreover, automated measurements and peer assessments demand significant human effort and resources. In this paper, given the advantages of the Virtual Reality (VR) technique, we develop a VR simulator -- PersonalityScanner, to stimulate cognitive processes and simulate daily behaviors based on an immersive and interactive simulation environment, in which participants carry out a battery of engaging tasks that form a natural story of the first day at work. Through this simulator, we collect a synchronous multi-modal dataset with ten modalities, including first/third-person video, audio, text, eye tracking, facial microexpression, pose, depth data, log, and inertial measurement unit. By systematically examining the contributions of different modalities to revealing personality, we demonstrate the superior performance and effectiveness of PersonalityScanner.
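The paper does not release a data schema, but a minimal sketch of how one time-synchronized sample covering the ten modalities might be organized is shown below; every field name and type here is an assumption for illustration, not the authors' actual format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MultimodalSample:
    """One time-aligned record from a hypothetical PersonalityScanner-style session.

    Field names are illustrative assumptions: the abstract lists ten modalities
    (first/third-person video, audio, text, eye tracking, facial microexpression,
    pose, depth, log, IMU) but specifies no storage layout.
    """
    timestamp_ms: int                 # shared clock used to synchronize all modalities
    first_person_video: str           # path to the egocentric video clip
    third_person_video: str           # path to the external-camera clip
    audio: str                        # path to the audio segment
    text: str                         # transcribed speech for this window
    eye_tracking: Dict[str, float] = field(default_factory=dict)            # e.g. gaze x/y, pupil size
    facial_microexpression: Dict[str, float] = field(default_factory=dict)  # e.g. blendshape weights
    pose: List[float] = field(default_factory=list)                         # body joint coordinates
    depth: str = ""                   # path to the depth map
    log: Dict[str, str] = field(default_factory=dict)                       # in-simulator event log
    imu: List[float] = field(default_factory=list)                          # accelerometer/gyroscope readings

# Example: a single synchronized record (all values are placeholders).
sample = MultimodalSample(
    timestamp_ms=120_000,
    first_person_video="session_01/fpv/0120.mp4",
    third_person_video="session_01/tpv/0120.mp4",
    audio="session_01/audio/0120.wav",
    text="Hello, I'm here for my first day at work.",
)
```

Keying every modality to a shared timestamp is what makes the dataset "synchronous" in the abstract's sense; downstream models can then align or window the modalities as needed.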
Related papers
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Behavioural gap assessment of human-vehicle interaction in real and virtual reality-based scenarios in autonomous driving [7.588679613436823]
We present a first and innovative approach to evaluating what we term the behavioural gap, a concept that captures the disparity in a participant's conduct when engaging in a VR experiment compared to an equivalent real-world situation.
In the experiment, the pedestrian attempts to cross the road in the presence of different driving styles and an external Human-Machine Interface (eHMI).
Results show that participants are more cautious and curious in VR, affecting their speed and decisions, and that VR interfaces significantly influence their actions.
arXiv Detail & Related papers (2024-07-04T17:20:17Z)
- Human Simulacra: Benchmarking the Personification of Large Language Models [38.21708264569801]
Large language models (LLMs) are recognized as systems that closely mimic aspects of human intelligence.
This paper introduces a framework for constructing virtual characters' life stories from the ground up.
Experimental results demonstrate that our constructed simulacra can produce personified responses that align with their target characters.
arXiv Detail & Related papers (2024-02-28T09:11:14Z)
- Personality-aware Human-centric Multimodal Reasoning: A New Task, Dataset and Baselines [32.82738983843281]
We introduce a new task called Personality-aware Human-centric Multimodal Reasoning (PHMR) (T1).
The goal of the task is to forecast the future behavior of a particular individual using multimodal information from past instances, while integrating personality factors.
The experimental results demonstrate that incorporating personality traits enhances human-centric multimodal reasoning performance.
arXiv Detail & Related papers (2023-04-05T09:09:10Z)
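The PHMR summary above reports that adding personality traits improves multimodal reasoning; a minimal sketch of one common way to do this, concatenating a Big Five trait vector with pooled multimodal features before a prediction head, is given below. The dimensions and the simple concatenation strategy are assumptions for illustration, not the paper's actual baselines.

```python
import torch
import torch.nn as nn

class PersonalityAwareFusion(nn.Module):
    """Toy fusion head: concatenate pooled multimodal features with a personality vector.

    Dimensions and the fusion strategy are illustrative assumptions, not the PHMR baselines.
    """
    def __init__(self, multimodal_dim: int = 512, trait_dim: int = 5, num_classes: int = 4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(multimodal_dim + trait_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, multimodal_feat: torch.Tensor, traits: torch.Tensor) -> torch.Tensor:
        # multimodal_feat: (batch, multimodal_dim) pooled video/audio/text features
        # traits: (batch, trait_dim), e.g. Big Five scores scaled to [0, 1]
        return self.head(torch.cat([multimodal_feat, traits], dim=-1))

# Usage: predict a future-behavior class for a batch of two individuals.
model = PersonalityAwareFusion()
logits = model(torch.randn(2, 512), torch.rand(2, 5))
print(logits.shape)  # torch.Size([2, 4])
```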
- Facial Expression Recognition using Squeeze and Excitation-powered Swin Transformers [0.0]
We propose a framework that employs Swin Vision Transformers (SwinT) and squeeze and excitation block (SE) to address vision tasks.
Our focus was to create an efficient FER model based on SwinT architecture that can recognize facial emotions using minimal data.
We trained our model on a hybrid dataset and evaluated its performance on the AffectNet dataset, achieving an F1-score of 0.5420.
arXiv Detail & Related papers (2023-01-26T02:29:17Z)
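Squeeze-and-excitation is a standard, well-defined module, so a short PyTorch sketch of the channel-attention block that the FER paper pairs with Swin features is shown below; the reduction ratio and the way it would be attached to a SwinT backbone are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global average pool, bottleneck MLP, channel-wise rescaling."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                    # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates                     # excitation: reweight channels

# Usage: rescale a feature map, e.g. one reshaped from Swin Transformer tokens.
feats = torch.randn(2, 96, 56, 56)
print(SEBlock(96)(feats).shape)  # torch.Size([2, 96, 56, 56])
```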
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue, computational method, interaction environment, and sensing approach are speaking activity, support vector machines, meetings composed of 3-4 persons, and microphones and cameras, respectively.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
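The survey's observation that speaking activity plus a support vector machine is the most common recipe can be made concrete with a tiny scikit-learn sketch; the features, labels, and numbers below are invented purely for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-participant features from a 3-4 person meeting:
# [fraction of time speaking, number of speaking turns, mean turn length in seconds].
X = np.array([
    [0.45, 12, 8.0],
    [0.10,  3, 4.0],
    [0.30,  9, 6.5],
    [0.05,  2, 3.0],
])
y = np.array([1, 0, 1, 0])  # invented labels, e.g. 1 = annotated as "dominant"

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.40, 10, 7.0]]))  # predicted label for a new participant
```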
- BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments [70.18430114842094]
We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation.
These activities are designed to be realistic, diverse, and complex.
We include 500 human demonstrations in virtual reality (VR) to serve as the human ground truth.
arXiv Detail & Related papers (2021-08-06T23:36:23Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset for Personality Assessment [50.15466026089435]
We present a novel peer-to-peer Hindi conversation dataset- Vyaktitv.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of socio-demographic features, such as income and cultural orientation, amongst several others, for all the participants.
arXiv Detail & Related papers (2020-08-31T17:44:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.