Characterizing Hirability via Personality and Behavior
- URL: http://arxiv.org/abs/2006.12041v1
- Date: Mon, 22 Jun 2020 07:24:22 GMT
- Title: Characterizing Hirability via Personality and Behavior
- Authors: Harshit Malik, Hersh Dhillon, Roland Goecke, Ramanathan Subramanian
- Abstract summary: We examine relationships among personality and hirability measures on the First Impressions Candidate Screening dataset.
Modeling hirability as a discrete/continuous variable with the big-five personality traits as predictors, we utilize (a) apparent personality annotations, and (b) personality estimates obtained via audio, visual and textual cues for hirability prediction (HP).
We also examine the efficacy of a two-step HP process involving (1) personality estimation from multimodal behavioral cues, followed by (2) HP from personality estimates.
- Score: 4.187572199323744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While personality traits have been extensively modeled as behavioral constructs, we model job hirability as a personality construct. On the First Impressions Candidate Screening (FICS) dataset, we examine relationships among personality and hirability measures. Modeling hirability as a discrete/continuous variable with the big-five personality traits as predictors, we utilize (a) apparent personality annotations, and (b) personality estimates obtained via audio, visual and textual cues for hirability prediction (HP). We also examine the efficacy of a two-step HP process involving (1) personality estimation from multimodal behavioral cues, followed by (2) HP from personality estimates.
Interesting results from experiments performed on approximately 5000 FICS videos are as follows. (1) For each of the text, audio and visual modalities, HP via the above two-step process is more effective than directly predicting from behavioral cues. Superior results are achieved when hirability is modeled as a continuous vis-à-vis categorical variable. (2) Among visual cues, eye and bodily information achieve performance comparable to face cues for predicting personality and hirability. (3) Explanatory analyses reveal the impact of multimodal behavior on personality impressions; e.g., Conscientiousness impressions are impacted by the use of cuss words (verbal behavior) and eye movements (non-verbal behavior), confirming prior observations.
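To make the two-step HP setup concrete, below is a minimal sketch of the idea under stated assumptions: the random arrays stand in for per-video behavioral descriptors and [0, 1] trait/hirability annotations, and ridge regression is an illustrative model choice, not the authors' actual pipeline.

```python
# Minimal sketch of two-step hirability prediction (HP) vs. a direct baseline.
# Synthetic data, feature dimensions, and Ridge regression are placeholder
# assumptions; the paper's real features come from audio/visual/text cues.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 128))        # stand-in behavioral cues (one modality)
Y_traits = rng.uniform(size=(5000, 5))  # apparent big-five annotations in [0, 1]
y_hire = rng.uniform(size=5000)         # continuous hirability scores in [0, 1]

X_tr, X_te, Yt_tr, Yt_te, yh_tr, yh_te = train_test_split(
    X, Y_traits, y_hire, test_size=0.2, random_state=0)

# Step 1: estimate the big-five traits from behavioral cues.
trait_model = Ridge(alpha=1.0).fit(X_tr, Yt_tr)

# Step 2: predict hirability from the five trait estimates alone.
hire_model = Ridge(alpha=1.0).fit(trait_model.predict(X_tr), yh_tr)
two_step_pred = hire_model.predict(trait_model.predict(X_te))

# Direct baseline: HP straight from behavioral cues (the paper finds the
# two-step route more effective for each of the text/audio/visual modalities).
direct_pred = Ridge(alpha=1.0).fit(X_tr, yh_tr).predict(X_te)
```

For the categorical formulation, step 2 would swap the regressor for a classifier over discretized hirability labels; per the abstract, the continuous formulation performs better.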
Related papers
- Revealing Personality Traits: A New Benchmark Dataset for Explainable Personality Recognition on Dialogues (2024-09-29)
Personality recognition aims to identify the personality traits implied in user data such as dialogues and social media posts.
We propose a novel task named Explainable Personality Recognition, which aims to reveal the reasoning process as supporting evidence for the predicted personality trait.
- EERPD: Leveraging Emotion and Emotion Regulation for Improving Personality Detection (2024-06-23)
We propose a new personality detection method called EERPD.
This method introduces emotion regulation, a psychological concept highly correlated with personality, into personality prediction.
Experimental results demonstrate that EERPD significantly enhances the accuracy and robustness of personality detection.
- Large Language Models Can Infer Personality from Free-Form User Interactions (2024-05-19)
GPT-4 can infer personality with moderate accuracy, outperforming previous approaches.
Results show that a direct focus on personality assessment did not result in a less positive user experience.
Preliminary analyses suggest that the accuracy of personality inferences varies only marginally across socio-demographic subgroups.
- LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced Personality Detection Model (2024-03-12)
Personality detection aims to detect the personality traits underlying social media posts.
Most existing methods learn post features directly by fine-tuning pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection (2023-10-31)
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
- Editing Personality for Large Language Models (2023-10-03)
This paper introduces a novel task focused on editing the personality traits of Large Language Models (LLMs).
We construct PersonalityEdit, a new benchmark dataset to address this task.
- Dataset Bias in Human Activity Recognition (2023-01-19)
This contribution statistically curates the training data to assess the degree to which the physical characteristics of humans influence HAR performance.
We evaluate the performance of a state-of-the-art convolutional neural network on two time-series HAR datasets that vary in sensors, activities, and recording conditions.
- An Open-source Benchmark of Deep Learning Models for Audio-visual Apparent and Self-reported Personality Recognition (2022-10-17)
Personality determines a wide variety of human daily and working behaviours, and is crucial for understanding human internal and external states.
In recent years, a large number of automatic personality computing approaches have been developed to predict either the apparent or self-reported personality of a subject based on non-verbal audio-visual behaviours.
In the absence of a standardized benchmark with consistent experimental settings, it is impossible to fairly compare the real performance of these personality computing models, and their results are difficult to reproduce.
We present the first reproducible audio-visual benchmarking framework to provide a fair and consistent evaluation of eight existing personality computing models.
- Domain-specific Learning of Multi-scale Facial Dynamics for Apparent Personality Traits Prediction (2022-09-09)
We propose a novel video-based automatic personality traits recognition approach.
It consists of: (1) a domain-specific facial behavior modelling module that extracts personality-related multi-scale short-term facial behavior features; (2) a long-term behavior modelling module that summarizes all short-term features of a video as a long-term/video-level personality representation; and (3) a multi-task personality traits prediction module that models the underlying relationships among all traits and jointly predicts them from the video-level personality representation.
- Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data (2021-06-24)
Mental health conditions remain underdiagnosed even in countries with widespread access to advanced medical care.
One promising data source for monitoring human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.